
Complexity


If you are reading this subreddit, you are probably familiar with asymptotic algorithmic complexity (the "big-O notation"). In my experience with this topic, an algorithm's time complexity is usually derived assuming a simple machine model:

  • any "elementary" operation on any data takes one unit of time
  • at most one operation can be done in a unit of time

I have read a little about the algorithmic complexity of parallel algorithms, where at most P operations per unit time are allowed (P = number of processors); this seems to be a straightforward extension of the machine model above.

These machine models, however, ignore the latency of transferring data to the processor. For example, a dot product of two arrays and a linear search of a linked list are both O(N) under the usual machine model. When data transfer latency is taken into account, I would say the dot product is still O(N), since the "names" (addresses) of the data are known well ahead of the time they are used, so any transfer delays can be overlapped (i.e. pipelined). In a linked-list traversal, however, the name of the datum for the next operation is unknown until the previous operation completes; hence I would say the complexity is O(N × T(N)), where T(n) is the transfer latency of an arbitrary element from a set of n data elements.
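A minimal C sketch of the two access patterns being contrasted (the function and variable names are illustrative, not from the post):

```c
#include <stddef.h>

/* Dot product: the addresses a[i] and b[i] are computable long before
   they are needed, so the hardware can prefetch and pipeline the loads;
   transfer latency overlaps with computation. */
double dot(const double *a, const double *b, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Linked-list traversal: the address of the next node (p->next) is
   unknown until the current node has been fetched, so each of the N
   steps pays the full transfer latency T(N) with no overlap. */
struct node { int key; struct node *next; };

const struct node *find(const struct node *head, int key) {
    for (const struct node *p = head; p != NULL; p = p->next)
        if (p->key == key)
            return p;
    return NULL;
}
```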

I think this (or a similar) machine model could be useful, since data transfer latency is an important consideration for many problems with large input sets.

I realize that these machine models have probably already been proposed and studied, and I would greatly appreciate any pointers in this direction.

EDIT: It turns out I was right: this idea was proposed as early as 1987, and, following the citations, it seems there is a lot of follow-up work.

submitted by baddeed
[link] [17 comments]


I just started reading a book on parallel programming, and I immediately began to wonder how parallelism affects time complexity.

I understand that theory and programming are not the same, but time complexity seems to rely on the notion that the algorithm is executed serially. Will the mathematics used to describe algorithms need to change to accommodate this?

For example, a binary search algorithm could be written such that each element in the set is assigned to a CPU core (or something similar). Then wouldn't the algorithm essentially be constant time, O(1), instead of O(log n)?
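A hedged C sketch of that idea, using OpenMP threads to stand in for the hypothetical one-core-per-element machine: each element is compared to the key in the same conceptual parallel step, so the comparison depth is O(1). Note that with only P real cores the loop still takes O(n/P) time, and even on an idealized machine, combining the per-element answers into a single result is not free.

```c
#include <stdio.h>

/* One comparison per element, all conceptually in parallel. With a
   true processor per element the comparison step is O(1); OpenMP here
   merely distributes the iterations over the available cores.
   Compile with -fopenmp; without it the pragmas are ignored and the
   loop runs serially. */
int parallel_find(const int *a, int n, int key) {
    int found = -1;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        if (a[i] == key) {
            #pragma omp atomic write
            found = i;   /* at most one match, assuming distinct keys */
        }
    }
    return found;
}

int main(void) {
    int sorted[] = {1, 3, 5, 8, 13, 21, 34};
    printf("index of 13: %d\n", parallel_find(sorted, 7, 13));
    return 0;
}
```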

Thanks for any insightful response.

submitted by rodneyfool
[link] [15 comments]



"Our ability to generate information now far exceeds our capacity to understand it. Finding patterns and making meaningful connections inside complex data networks has emerged as one of the biggest challenges of the twenty-first century. In recent years, designers, researchers, and scientists have begun employing an innovative mix of colors, symbols, graphics, algorithms, and interactivity to clarify, and often beautify, the clutter." And look! Even Jeremy Deller's been invited to the party continue
