
Analysis of algorithms



Recently, Omar Tawakol from BlueKai wrote a fascinating article positing that more data beats better algorithms. He argued that while more data trumps a better algorithm, better still is an algorithm that augments your data with linkages and connections, in the end creating a more robust data asset.

At Rocket Fuel, we’re big believers in the power of algorithms, because data, no matter how rich or augmented, is still a mostly static representation of customer interest and intent. The traditional way to use data in Web advertising, choosing whom to show ads based on the specific data segments they fall into, is itself one very simple choice of algorithm. But there are many others that can be strategically applied to take advantage of specific opportunities in the market, like a sudden burst of relevant ad inventory or a sudden increase in competition for consumers in a particular data segment. Algorithms can react to the changing usefulness of data, such as data indicating interest in a time-sensitive event that has now passed. They can also take advantage of ephemeral data not tied to individual behavior in any long-term way, such as the time of day or the context in which the person is browsing.

So while the world of data is rich, and algorithms can extend those data assets even further, the use of that data can be even more interesting and challenging, requiring extremely clever algorithms that result in significant, measurable improvements in campaign performance. Very few of these performance improvements are attributable solely to the use of more data.

For the sake of illustration, imagine you want to marry someone who will help you produce tall, healthy children. You are sequentially presented with suitors, each of whom you must either marry or reject forever. Say you start by looking only at a suitor’s height, and your simple algorithm is to “marry the first person who is over six feet tall.” How can we improve on these results? Using the “more data” strategy, we could also look at how strong they are and set a threshold for that. Alternatively, we could use the same data but improve the algorithm: “measure the height of the first third of the people I see, then marry the next person who is taller than all of them.” This algorithmic improvement has a good chance of delivering a better result than just feeding more data to a simple algorithm.
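The two rules above are easy to compare in a toy Monte Carlo simulation. This is only a sketch, not anything from the article: the normal height distribution, the 72-inch threshold, and the trial counts are all made-up assumptions. It scores each rule by how often it marries the tallest suitor in the sequence.

```python
import random

def threshold_rule(heights, threshold=72.0):
    """Marry the first suitor taller than a fixed threshold (inches)."""
    for i, h in enumerate(heights):
        if h > threshold:
            return i
    return len(heights) - 1  # no one qualified: settle for the last suitor

def observe_then_commit(heights):
    """Watch the first third, then marry the next suitor taller than all of them."""
    cutoff = len(heights) // 3
    best_seen = max(heights[:cutoff])
    for i in range(cutoff, len(heights)):
        if heights[i] > best_seen:
            return i
    return len(heights) - 1  # no one beat the benchmark: settle for the last

random.seed(42)
trials = [[random.gauss(69, 3) for _ in range(100)] for _ in range(5000)]
best = [t.index(max(t)) for t in trials]
p_threshold = sum(threshold_rule(t) == b for t, b in zip(trials, best)) / len(trials)
p_observe = sum(observe_then_commit(t) == b for t, b in zip(trials, best)) / len(trials)
print(p_threshold, p_observe)
```

Under these assumptions, the observe-then-commit rule picks the tallest suitor far more often than the fixed threshold does, which is the classic "secretary problem" result the second rule is modeled on.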

Choosing opportunities to show online advertising to consumers is very much like that example, except that we’re picking millions of “suitors” each day for each advertiser, out of tens of billions of opportunities. As with the marriage challenge, we find it is most valuable to make improvements to the algorithms to help us make real-time decisions that grow increasingly optimal with each campaign.

There’s yet another dimension not covered in Omar’s article: the speed of the algorithms and data access, and the capacity of the infrastructure on which they run. The provider you work with needs to be able to make more decisions, faster, than any other players in this space. Doing that calls for a huge investment in hardware and software improvements at all layers of the stack. These investments are in some ways orthogonal to Omar’s original question: they simultaneously help optimize the performance of the algorithms, and they ensure the ability to store and process massive amounts of data.

In short, if I were told I had to either give up all the third-party data I might use, or give up my use of algorithms, I would give up the data in a heartbeat. There is plenty of relevant data captured through the passive activity of consumers interacting with Web advertising — more than enough to drive great performance for the vast majority of clients.

Mark Torrance is CTO of Rocket Fuel, which provides artificial-intelligence advertising solutions.


If you are reading this subreddit, you are probably familiar with asymptotic algorithmic complexity (the "big-O notation"). In my experience with this topic, an algorithm's time complexity is usually derived assuming a simple machine model:

  • any "elementary" operation on any data takes one unit of time
  • at most one operation can be done in a unit of time

I have read a little about algorithmic complexity of parallel algorithms where at most P operations per unit time are allowed (P = number of processors) and this seems to be a straightforward extension of the machine model above.

These machine models, however, ignore the latency of transferring data to the processor. For example, the dot product of two arrays and linear search of a linked list are both O(N) under the usual machine model. When data transfer latency is taken into account, I would say the dot product is still O(N), since the "names" (addresses) of the data are known well ahead of the time they are used, so any data transfer delays can be overlapped (i.e., pipelined). In a linked-list traversal, however, the name of the datum for the next operation is unknown until the previous operation completes; hence I would say the complexity becomes O(N × T(N)), where T(n) is the latency of transferring an arbitrary element from a set of n data elements.
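The contrast between the two access patterns can be made concrete with a minimal sketch. Python won't exhibit the hardware latency effects themselves, so this is purely illustrative: both routines do Θ(N) elementary operations, and the difference the post describes is in when each address becomes known.

```python
def dot_product(xs, ys):
    # Addresses are known up front (element 0, 1, 2, ...), so in hardware
    # the loads can be prefetched and pipelined ahead of the arithmetic.
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def list_sum(head):
    # The address of node i+1 is stored *inside* node i, so each load must
    # complete before the next one can even be issued (pointer chasing).
    total = 0.0
    node = head
    while node is not None:
        total += node.value
        node = node.next
    return total
```

In the proposed model, the first loop stays O(N) while the second becomes O(N × T(N)), even though both perform the same number of additions.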

I think this (or a similar) machine model could be useful, since data transfer latency is an important consideration for many problems with large input sets.

I realize that machine models like these have probably already been proposed and studied, and I would greatly appreciate any pointers in this direction.

EDIT: It turns out I was right: this idea has been proposed as early as 1987, and, following the citations, there seems to be a lot of follow-up work.

submitted by baddeed


I just started reading a book on parallel programming, and I immediately began to wonder how it affects time complexity.

I understand that theory and programming are not the same, but time complexity seems to rely on the notion that the algorithm is executed serially. Will the mathematics used to describe algorithms need to change to accommodate this?

For example, a binary search algorithm could be written such that each element in the set is assigned to a CPU core (or something similar). Then wouldn't the algorithm essentially be constant time, O(1), instead of O(log n)?

Thanks for any insightful response.

submitted by rodneyfool
