
Machine learning



Deep Learning of Representations

Google Tech Talk, 11/13/2012. Presented by Yoshua Bengio. ABSTRACT: Yoshua Bengio will give an introduction to the area of Deep Learning, to which he has been one of the leading contributors. It is aimed at learning representations of data at multiple levels of abstraction. Current machine learning algorithms are highly dependent on feature engineering (manual design of the representation fed as input to a learner), and it would be of high practical value to design algorithms that can do good feature learning. The ideal features disentangle the unknown underlying factors that generated the data. It has been shown, through both theoretical arguments and empirical studies, that deep architectures can generalize better than shallow ones. Since a 2006 breakthrough, a variety of learning algorithms have been proposed for deep learning and feature learning, mostly based on unsupervised learning of representations, often by stacking single-level learning algorithms. Several of these algorithms are based on probabilistic models, but interesting challenges arise in handling the intractability of the likelihood itself, and alternatives to maximum likelihood have been successfully explored, including criteria based on purely geometric intuitions about manifolds and the concentration of probability mass that characterize many real-world learning tasks. Representation-learning algorithms are being applied to many tasks in computer vision, natural language processing, speech ...
From GoogleTechTalks. Duration: 01:15:44.
Image via vichie81

Recently, Omar Tawakol from BlueKai wrote a fascinating article positing that more data beats better algorithms. Better still, he argued, is an algorithm that augments your data with linkages and connections, in the end creating a more robust data asset.

At Rocket Fuel, we’re big believers in the power of algorithms, because data, no matter how rich or augmented, is still a mostly static representation of customer interest and intent. The traditional use of data in Web advertising, choosing whom to show ads based on the data segments they fall into, represents one very simple choice of algorithm. But many other algorithms can be strategically applied to take advantage of specific opportunities in the market, like a sudden burst of relevant ad inventory or a sudden increase in competition for consumers in a particular data segment. Algorithms can react to the changing usefulness of data, such as data indicating interest in a time-sensitive event that has now passed. They can also exploit ephemeral data not tied to individual behavior in any long-term way, such as the time of day or the context in which the person is browsing.
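For illustration only, here is a minimal sketch of such a reactive scoring rule (a hypothetical example, not our actual system): a data-segment signal is discounted as it ages, and an ephemeral context signal such as the hour of day is folded in as a multiplier.

    # Hypothetical scoring sketch; the numbers and rules are illustrative.
    def bid_score(segment_strength, hours_since_signal, hour_of_day,
                  half_life_hours=24.0):
        # Exponential decay: the signal's weight halves every half-life.
        decay = 0.5 ** (hours_since_signal / half_life_hours)
        # Illustrative assumption: evening browsing earns a small premium.
        context = 1.2 if 18 <= hour_of_day <= 22 else 1.0
        return segment_strength * decay * context

    print(bid_score(0.8, hours_since_signal=2, hour_of_day=20))   # fresh signal
    print(bid_score(0.8, hours_since_signal=72, hour_of_day=20))  # stale signal

The same stored data point yields very different scores depending on when it was observed and when the ad opportunity arises, which is exactly what a static segment membership cannot capture.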

So while the world of data is rich, and algorithms can extend those data assets even further, the use of that data can be even more interesting and challenging, requiring extremely clever algorithms that result in significant, measurable improvements in campaign performance. Very few of these performance improvements are attributable solely to the use of more data.

For the sake of illustration, imagine you want to marry someone who will help you produce tall, healthy children. You are sequentially presented with suitors whom you have to either marry, or reject forever. Let’s say you start with only being able to look at the suitor’s height, and your simple algorithm is to “marry the first person who is over six feet tall.” How can we improve on these results? Using the “more data” strategy, we could also look at how strong they are, and set a threshold for that. Alternatively, we could use the same data but improve the algorithm: “Measure the height of the first third of the people I see, and marry the next person who is taller than all of them.” This algorithm improvement has a good chance of delivering a better result than just using more data with a simple algorithm.
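That "observe a third, then leap" rule is the classic heuristic for the secretary problem, and a quick simulation makes the comparison concrete. The sketch below is illustrative; the height distribution and trial counts are assumptions, not data:

    import random

    def simple_threshold(heights, threshold=72.0):
        # Marry the first suitor over six feet (72 inches); settle for the
        # last suitor if nobody clears the bar.
        return next((h for h in heights if h > threshold), heights[-1])

    def observe_then_leap(heights):
        # Measure the first third, then marry the first suitor taller than
        # all of them; settle for the last suitor if nobody is.
        cutoff = len(heights) // 3
        benchmark = max(heights[:cutoff])
        return next((h for h in heights[cutoff:] if h > benchmark), heights[-1])

    random.seed(0)
    trials = [[random.gauss(69, 3) for _ in range(100)] for _ in range(10_000)]
    print(sum(simple_threshold(t) for t in trials) / len(trials))
    print(sum(observe_then_leap(t) for t in trials) / len(trials))

In runs like this, the observe-then-leap rule typically edges out the fixed threshold while using exactly the same data, which is the point of the example.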

Choosing opportunities to show online advertising to consumers is very much like that example, except that we’re picking millions of “suitors” each day for each advertiser, out of tens of billions of opportunities. As with the marriage challenge, we find it most valuable to improve the algorithms driving our real-time decisions, so that those decisions come closer to optimal with each campaign.

There’s yet another dimension not covered in Omar’s article: the speed of the algorithms and data access, and the capacity of the infrastructure on which they run. The provider you work with needs to be able to make more decisions, faster, than any other players in this space. Doing that calls for a huge investment in hardware and software improvements at all layers of the stack. These investments are in some ways orthogonal to Omar’s original question: they simultaneously help optimize the performance of the algorithms, and they ensure the ability to store and process massive amounts of data.

In short, if I were told I had to either give up all the third-party data I might use, or give up my use of algorithms, I would give up the data in a heartbeat. There is plenty of relevant data captured through the passive activity of consumers interacting with Web advertising — more than enough to drive great performance for the vast majority of clients.

Mark Torrance is CTO of Rocket Fuel, which provides artificial-intelligence advertising solutions.


Hi all,

I'm a soon to be college graduate with a math major, comp sci minor, and statistics minor. I am looking for something interesting and related to comp sci to learn this summer. I hope whatever I study to be very interesting, and also improve my programming ability and problem solving ability.

Here are my ideas so far

  1. Learn Haskell. I've never done anything functional, and I hear Haskell is interesting and makes you a better programmer.

  2. Learn C. Haven't really done any low-level stuff.

  3. Algorithms. I took an algorithms class, but it wasn't too rigorous.

  4. Machine learning

  5. Natural language processing. (These two seem interesting.)

  6. Set theory and databases (My job next year will be working with databases)

I'd appreciate any input on which of these seems most interesting, or any other suggestions you have. (Don't suggest Project Euler; I do that already.)

Thanks!

Edit: Thank you everybody! I think I'm going to learn a functional language, and that functional language will be Scheme (or Racket), as I found SICP to be more awesome than the Haskell resources. In conjunction with this, I'll be continuing Project Euler and picking up Emacs. Thanks for the advice!

submitted by cslmdt
