Original author: Todd Hoff

This is a guest post by Yelp's Jim Blomo. Jim manages a growing data mining team that uses Hadoop, mrjob, and oddjob to process TBs of data. Before Yelp, he built infrastructure for startups and Amazon. Check out his upcoming talk at OSCON 2013 on Building a Cloud Culture at Yelp.

In Q1 2013, Yelp had 102 million unique visitors (source: Google Analytics), including approximately 10 million unique mobile devices using the Yelp app on a monthly average basis. Yelpers have written more than 39 million rich, local reviews, making Yelp the leading local guide on everything from boutiques and mechanics to restaurants and dentists. One of the most distinctive things about Yelp is the variety of its data: reviews, user profiles, business descriptions, menus, check-ins, food photos... the list goes on. We have many ways to deal with data, but today I'll focus on how we handle offline data processing and analytics.

In late 2009, Yelp investigated using Amazon's Elastic MapReduce (EMR) as an alternative to an in-house cluster built from spare computers. By mid-2010, we had moved production processing completely to EMR and turned off our Hadoop cluster. Today we run over 500 jobs a day, from integration tests to advertising metrics. We've learned a few lessons along the way that can hopefully benefit you as well.
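
For readers who haven't used mrjob, here's a minimal sketch of what one of these jobs looks like (the word-count task and class name are illustrative, not code from Yelp):

    # Minimal mrjob job: counts word frequencies across input lines.
    # Hypothetical example for illustration, not from Yelp's codebase.
    from mrjob.job import MRJob


    class MRWordFrequency(MRJob):
        def mapper(self, _, line):
            # Each input line is split into words; emit (word, 1) pairs.
            for word in line.split():
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Sum the counts for each word across all mappers.
            yield word, sum(counts)


    if __name__ == '__main__':
        MRWordFrequency.run()

The same script runs unchanged against local files for testing (python mr_word_frequency.py input.txt) or against a cluster with mrjob's EMR runner (-r emr), which is part of what makes it practical to run everything from integration tests to production jobs the same way.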

Job Flow Pooling

Original author: General Chicken

You know your product is doing well when most of your early blog posts deal with the status of the waiting list of hundreds of thousands of users eagerly waiting to download your product. That's the enviable position Mailbox, a free mobile email management app, found themselves in early in their release cycle.

Hasn't email been done already? Apparently not. Mailbox scaled to one million users in a paltry six weeks with a team of about 14 people. As of April they were delivering over 100 million messages per day.

How did they do it? Mailbox engineering lead, Sean Beausoleil, gave an informative interview on readwrite.com on how Mailbox planned to scale... 

Original author: Todd Hoff

I'm not sure what I was expecting the stack GOV.UK used at launch to look like. Maybe some messenger owls and lots of cobwebs? But not so at all. So much not so I thought any organization looking at their own stack for ideas could learn something from the considered choices of others.

The diversity of technologies used was surprising. They use "at least five different programming languages, three separate database types, two versions of an operating system." Some may think of this as a weakness, but they think it a strength:

The reason we operate such a diverse ecosystem is that we are focused on solving real problems. Our first task is to understand the problem or need we are solving and then to choose the best tool for the job. If we restrict ourselves to moulding the need to the tools we already have, then we risk not solving the initial problem in the best way possible for the user. By restricting software diversity or enforcing rigid organisational standards on a project, there is a possibility of descending into a cargo cult, where we simply repeat the same patterns and mistakes in everything we make.

This "use the best tool no matter what" policy is outlined in a blog post Benefits of diversity. The only choice that wouldn't be found in a modern startup is the use of Skyscape as their cloud provider. I'm assuming this has to do with legal issues around data sovereignty as this is government site, but otherwise it's all straight out of standard modern web practice: monitoring, dashboards, continuous release, polyglot persistence, distributed source code control, etc. Good to see a government getting it.

What stack are they using? (It's a direct copy, so feel free to read the original.)

Original author: Todd Hoff

Now that we have the C10K concurrent connection problem licked, how do we level up and support 10 million concurrent connections? Impossible, you say? Nope, systems right now are delivering 10 million concurrent connections using techniques that are as radical as they are unfamiliar.

To learn how it’s done we turn to Robert Graham, CEO of Errata Security, and his absolutely fantastic talk at Shmoocon 2013 called C10M: Defending The Internet At Scale.

Robert has a brilliant way of framing the problem that I’d never heard before. He starts with a little bit of history, relating how Unix wasn’t originally designed to be a general server OS; it was designed to be a control system for a telephone network. It was the telephone network that actually transported the data, so there was a clean separation between the control plane and the data plane. The problem is we now use Unix servers as part of the data plane, which we shouldn’t do at all. If we were designing a kernel to handle one application per server, we would design it very differently than a multi-user kernel.

Which is why he says the key is to understand:

  • The kernel isn’t the solution. The kernel is the problem.

Which means:

  • Don’t let the kernel do all the heavy lifting. Take packet handling, memory management, and processor scheduling out of the kernel and put it into the application, where it can be done efficiently. Let Linux handle the control plane and let the application handle the data plane.

The result will be a system that can handle 10 million concurrent connections with 200 clock cycles for packet handling and 1400 clock cycles for application logic. Since a main memory access costs 300 clock cycles, it’s key to design in a way that minimizes code and cache misses.

With a data plane oriented system you can process 10 million packets per second. With a control plane oriented system you only get 1 million packets per second.
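
To make those cycle budgets concrete, here's a quick back-of-the-envelope check (the per-packet cycle counts come from the talk; the 2 GHz clock speed and core math are my illustrative assumptions):

    # Rough budget check for 10M packets/sec at the cycle counts quoted above.
    # The 2.0 GHz per-core clock is an assumption for illustration.
    CYCLES_PER_PACKET = 200 + 1400      # packet handling + application logic
    CLOCK_HZ = 2.0e9                    # assumed per-core clock speed
    TARGET_PPS = 10_000_000             # 10 million packets per second

    cycles_needed = CYCLES_PER_PACKET * TARGET_PPS   # 1.6e10 cycles/sec
    cores_needed = cycles_needed / CLOCK_HZ
    print(f"cores needed: {cores_needed:.0f}")       # -> 8

    # One extra 300-cycle main-memory miss per packet costs another
    # 300 * 10M = 3e9 cycles/sec, i.e. 1.5 more cores' worth of work,
    # which is why minimizing cache misses matters so much.

In other words, the budget is feasible on a single commodity machine, but only if nearly every per-packet operation stays out of the kernel and in cache.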

If this seems extreme, keep in mind the old saying: scalability is specialization. To do something great you can’t outsource performance to the OS. You have to do it yourself.

Now, let’s learn how Robert creates a system capable of handling 10 million concurrent connections...
