
Applied mathematics

Original author: John Timmer


The D-Wave Two. (Image: D-Wave)

D-Wave's quantum optimizer has found a new customer in the form of a partnership created by Google, NASA, and a consortium of research universities. The group is forming what it's calling the Quantum Artificial Intelligence Lab and will locate the computer at NASA's Ames Research Center. Academics will get involved via the Universities Space Research Association.

Although the D-Wave Two isn't a true quantum computer in the sense the term is typically used, D-Wave's system uses quantum effects to solve computational problems in a way that can be faster than traditional computers. How much faster? We just covered some results that indicated a certain class of problems may be sped up by as much as 10,000 times. Those algorithms are typically used in what's termed machine learning. And machine learning gets mentioned several times in Google's announcement of the new hardware.

Machine learning is typically used to allow computers to classify features, like whether or not an e-mail is spam (to use Google's example) or whether or not an image contains a specific feature, like a cat. You simply feed a machine learning system enough known images with and without cats, and it will identify features that are shared among the cat set. When you feed it unknown images, it can determine whether enough of those features are present and make an accurate guess as to whether the image contains a cat. In more serious applications, machine learning has been used to identify patterns of brain activity that are associated with different visual inputs, like viewing different letters.
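To make that train-then-classify loop concrete, here is a minimal sketch using a toy nearest-centroid learner in plain Python. The "cat" features and the tiny data set are invented purely for illustration; they are not from Google's systems or the research mentioned above.

```python
# A toy "cat detector": each image has already been reduced to a few numeric
# features (the feature names and values here are invented for illustration).
def train(examples):
    """Average the feature vectors of each class (a nearest-centroid learner)."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + f for s, f in zip(sums[label], features)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Classify by whichever class centroid the feature vector is closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Labeled training images, reduced to [pointy_ears, whiskers, fur_texture]
training = [
    ([0.9, 0.8, 0.7], "cat"),
    ([0.8, 0.9, 0.6], "cat"),
    ([0.1, 0.0, 0.3], "no cat"),
    ([0.2, 0.1, 0.2], "no cat"),
]

model = train(training)
print(predict(model, [0.85, 0.7, 0.65]))   # -> "cat"
print(predict(model, [0.15, 0.05, 0.25]))  # -> "no cat"
```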




Multi-Party Computation: From Theory to Practice

Google Tech Talk, 1/8/13. Presented by Nigel P. Smart (University of Bristol, UK).

ABSTRACT: Multi-Party Computation (MPC) allows, in theory, a set of parties to compute any function on their secret inputs without revealing anything bar the output of the function. For many years it remained a purely theoretical tool in cryptography, but in the past five years remarkable strides have been made in turning theory into practice. In this talk I will present the latest practical protocol, called SPDZ ("Speedz"), which achieves much of its performance advantage from the use of Fully Homomorphic Encryption as a sub-procedure. No prior knowledge of MPC will be assumed.
From: GoogleTechTalks | Views: 1,464 | 20 ratings | Time: 54:29 | More in: Science & Technology
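For a flavor of the idea in the simplest possible setting, here is a toy additive-secret-sharing sketch: three parties learn the sum of their inputs while no single share reveals anything about an individual input. This illustrates MPC-style summation only, not the SPDZ protocol from the talk, and the modulus and inputs are arbitrary choices for the example.

```python
import random

P = 2**61 - 1  # a large prime modulus (an arbitrary choice for this toy example)

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Each party secret-shares its private input.
inputs = [42, 17, 99]                      # the parties' private values
all_shares = [share(x) for x in inputs]

# Party i locally adds up the i-th share of every input...
local_sums = [sum(col) % P for col in zip(*all_shares)]

# ...and only these local sums are published; their total is the answer.
print(sum(local_sums) % P)                 # -> 158, with no individual input revealed
```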


Image via vichie81

Recently, Omar Tawakol from BlueKai wrote a fascinating article positing that more data beats better algorithms. He argued that while more data trumps a better algorithm, better still is an algorithm that augments your data with linkages and connections, in the end creating a more robust data asset.

At Rocket Fuel, we’re big believers in the power of algorithms. This is because data, no matter how rich or augmented, is still a mostly static representation of customer interest and intent. The traditional way to use data in Web advertising, choosing whom to show ads based on the specific data segments they fall into, is itself one very simple choice of algorithm. But there are many others that can be strategically applied to take advantage of specific opportunities in the market, like a sudden burst of relevant ad inventory or a sudden increase in competition for consumers in a particular data segment. Algorithms can react to the changing usefulness of data, such as data indicating interest in a time-sensitive event that has now passed. They can also take advantage of ephemeral data not tied to individual behavior in any long-term way, such as the time of day or the context in which the person is browsing.

So while the world of data is rich, and algorithms can extend those data assets even further, the use of that data can be even more interesting and challenging, requiring extremely clever algorithms that result in significant, measurable improvements in campaign performance. Very few of these performance improvements are attributable solely to the use of more data.

For the sake of illustration, imagine you want to marry someone who will help you produce tall, healthy children. You are sequentially presented with suitors whom you have to either marry, or reject forever. Let’s say you start with only being able to look at the suitor’s height, and your simple algorithm is to “marry the first person who is over six feet tall.” How can we improve on these results? Using the “more data” strategy, we could also look at how strong they are, and set a threshold for that. Alternatively, we could use the same data but improve the algorithm: “Measure the height of the first third of the people I see, and marry the next person who is taller than all of them.” This algorithm improvement has a good chance of delivering a better result than just using more data with a simple algorithm.
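As a rough sketch of that comparison, the simulation below pits the fixed-threshold rule against the "observe the first third, then take the next best" rule. The height distribution, the six-foot cutoff, and the fallback of settling for the last suitor are assumptions made purely for illustration; they are not from the original example.

```python
import random

def threshold_rule(heights, cutoff=72):
    """Marry the first suitor taller than a fixed cutoff (inches)."""
    for h in heights:
        if h > cutoff:
            return h
    return heights[-1]  # nobody cleared the bar; settle for the last suitor

def observe_then_choose(heights):
    """Observe the first third, then marry the next suitor taller than all of them."""
    n = len(heights)
    best_seen = max(heights[: n // 3])
    for h in heights[n // 3 :]:
        if h > best_seen:
            return h
    return heights[-1]  # nobody beat the observed sample; settle for the last suitor

def simulate(trials=10_000, n_suitors=100):
    random.seed(0)
    totals = {"fixed threshold": 0.0, "observe then choose": 0.0}
    for _ in range(trials):
        heights = [random.gauss(69, 4) for _ in range(n_suitors)]
        totals["fixed threshold"] += threshold_rule(heights)
        totals["observe then choose"] += observe_then_choose(heights)
    for name, total in totals.items():
        print(f"{name}: average chosen height = {total / trials:.2f} in")

if __name__ == "__main__":
    simulate()
```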

Choosing opportunities to show online advertising to consumers is very much like that example, except that we’re picking millions of “suitors” each day for each advertiser, out of tens of billions of opportunities. As with the marriage challenge, we find it is most valuable to make improvements to the algorithms to help us make real-time decisions that grow increasingly optimal with each campaign.

There’s yet another dimension not covered in Omar’s article: the speed of the algorithms and data access, and the capacity of the infrastructure on which they run. The provider you work with needs to be able to make more decisions, faster, than any other players in this space. Doing that calls for a huge investment in hardware and software improvements at all layers of the stack. These investments are in some ways orthogonal to Omar’s original question: they simultaneously help optimize the performance of the algorithms, and they ensure the ability to store and process massive amounts of data.

In short, if I were told I had to either give up all the third-party data I might use, or give up my use of algorithms, I would give up the data in a heartbeat. There is plenty of relevant data captured through the passive activity of consumers interacting with Web advertising — more than enough to drive great performance for the vast majority of clients.

Mark Torrance is CTO of Rocket Fuel, which provides artificial-intelligence advertising solutions.


I hope this is alright to ask here. This was posed as extra credit for my algorithms class, with any and all resources allowed, including the internet.

The assignment is to give an example of a case where the First Fit Decreasing solution to the 1-dimensional bin packing problem is exactly 71/42 OPT. Some additional information was given to us. First, this is an undergrad algorithms class, and nobody in the graduate class was able to solve it, so there must be some required techniques that we have not covered in class. Second, we were given a hint: posted on the door of the TA's office there is a photo of the door itself with two Eyes of Horus. Third, the TA proved this a few years ago, but it was never published.

Can anyone lead me in the right direction?
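For anyone who wants to experiment with candidate instances, here is a minimal First Fit Decreasing sketch: sort the items in decreasing order and place each into the first bin it fits in, opening a new bin otherwise. The item list at the end is only a placeholder for testing, not the 71/42 example the assignment asks for.

```python
def first_fit_decreasing(items, capacity=1.0):
    """Pack items (each size <= capacity) into bins greedily, largest first."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:  # first bin it fits in
                b.append(size)
                break
        else:
            bins.append([size])                    # no bin fits; open a new one
    return bins

# Placeholder instance (NOT the 71/42 example): compare the FFD bin count
# against an optimum you work out by hand to see how the ratio behaves.
items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]
packing = first_fit_decreasing(items)
print(len(packing), packing)
```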

submitted by no_detection


theodp writes "Microsoft's promotion of Julie Larson-Green to lead all Windows software and hardware engineering in the wake of Steven Sinofsky's resignation is reopening the question of what the difference is between Computer Science and Software Engineering. According to their bios on Microsoft's website, Sinofsky has a master's degree in computer science from the University of Massachusetts Amherst and an undergraduate degree with honors from Cornell University, while Larson-Green has a master's degree in software engineering from Seattle University and a bachelor's degree in business administration from Western Washington University. A comparison of the curricula at Sinofsky's and Larson-Green's alma maters shows there's a huge difference between UMass's MSCS program and Seattle U's MSE program. So, is one program inherently more compatible with Microsoft's new teamwork mantra?"



