UDP

Original author: Jon Brodkin


[Image: Can Google's QUIC be faster than Mega Man's nemesis, Quick Man? Credit: Josh Miller]

Google, as is its wont, is always trying to make the World Wide Web go faster. To that end, Google in 2009 unveiled SPDY, a networking protocol that reduces latency and is being folded into HTTP 2.0. SPDY is now supported by Chrome, Firefox, Opera, and the upcoming Internet Explorer 11.

But SPDY isn't enough. Yesterday, Google released a boatload of information about its next protocol, one that could reshape how the Web routes traffic. QUIC—standing for Quick UDP Internet Connections—was created to reduce the number of round trips data makes as it traverses the Internet in order to load stuff into your browser.
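
To make the round-trip point concrete, here is a minimal sketch in plain C with POSIX sockets (not Google's QUIC code; the address and port are placeholders): a TCP client must wait for the SYN/SYN-ACK/ACK handshake before connect() returns and any data can flow, while a UDP client can put application data into the very first packet it sends. QUIC layers its own reliability, multiplexing, and encryption on top of datagrams like this one.

    /* Illustration only: application data rides in the client's first packet. */
    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);         /* UDP, no handshake */
        if (fd < 0) return 1;

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(4433);                    /* placeholder port */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder host */

        const char *req = "request in packet number one";
        sendto(fd, req, strlen(req), 0,                  /* zero round trips so far */
               (struct sockaddr *)&dst, sizeof dst);

        close(fd);
        return 0;
    }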

Although it is still in its early stages, Google is going to start testing the protocol on a "small percentage" of Chrome users who use the development or canary versions of the browser—the experimental versions that often contain features not stable enough for everyone. QUIC has been built into these test versions of Chrome and into Google's servers. The client and server implementations are open source, just as Chromium is.


Original author: Todd Hoff

Now that we have the C10K concurrent connection problem licked, how do we level up and support 10 million concurrent connections? Impossible, you say? Nope: systems right now are delivering 10 million concurrent connections using techniques that are as radical as they are unfamiliar.

To learn how it's done, we turn to Robert Graham, CEO of Errata Security, and his absolutely fantastic Shmoocon 2013 talk, C10M: Defending the Internet at Scale.

Robert has a brilliant way of framing the problem that I'd never heard before. He starts with a little history, relating how Unix wasn't originally designed to be a general-purpose server OS; it was designed as the control system for a telephone network. The telephone network itself transported the data, so there was a clean separation between the control plane and the data plane. The problem is that we now use Unix servers as part of the data plane, which we shouldn't do at all. If we were designing a kernel to handle one application per server, we would design it very differently than we would a multi-user kernel.

Which is why he says the key is to understand:

  • The kernel isn’t the solution. The kernel is the problem.

Which means:

  • Don’t let the kernel do all the heavy lifting. Take packet handling, memory management, and processor scheduling out of the kernel and put it into the application, where it can be done efficiently. Let Linux handle the control plane and let the the application handle the data plane.

The result will be a system that can handle 10 million concurrent connections with 200 clock cycles budgeted for packet handling and 1,400 clock cycles for application logic. Since a main memory access costs about 300 clock cycles, it's key to design in a way that minimizes code path length and cache misses; a single miss already costs more than the entire packet-handling budget.
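
As a rough back-of-envelope on where a budget like that comes from (assuming a 2 GHz core):

    cycles per packet ≈ clock rate / packet rate
                      ≈ 2,000,000,000 cycles/s ÷ 10,000,000 packets/s
                      = 200 cycles per packet on a single core

At the full 1,600 cycles per packet (200 for handling plus 1,400 for application logic), 10 million packets per second works out to roughly eight such cores running in parallel.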

With a data-plane-oriented system you can process 10 million packets per second. With a control-plane-oriented system you only get 1 million packets per second.

If this seems extreme keep in mind the old saying: scalability is specialization. To do something great you can’t outsource performance to the OS. You have to do it yourself.

Now, let’s learn how Robert creates a system capable of handling 10 million concurrent connections...

Original author: Sean Gallagher

[Illustration: Aurich Lawson]

A little more than a year ago, details emerged about an effort by some members of the hacktivist group Anonymous to build a new weapon to replace their aging denial-of-service arsenal. The new weapon would use the Internet's Domain Name System (DNS) as a force multiplier to bring the servers of those who offended the group to their metaphorical knees. Around the same time, an alleged plan for an Anonymous operation, "Operation Global Blackout" (later dismissed by some security experts and Anonymous members as a "massive troll"), sought to turn DNS against the very core of the Internet itself in protest against the Stop Online Piracy Act.

This week, an attack using the technique proposed for use in that attack tool and operation—both of which failed to materialize—was at the heart of an ongoing denial-of-service assault on Spamhaus, the anti-spam clearing house organization. And while it hasn't brought the Internet itself down, it has caused major slowdowns in the Internet's core networks.

DNS amplification (or DNS reflection) remains possible after years of warnings from security experts. Its continued power is a testament to how hard it is to get organizations to make simple changes that would prevent even well-recognized threats. Some network providers have made tweaks that prevent botnets or "volunteer" systems within their networks from staging such attacks. But thanks to public cloud services, "bulletproof" hosting services, and other services that let attackers spawn and then reap hundreds of attacking systems, DNS amplification attacks can still be launched at the whim of a deep-pocketed attacker, such as the cyber-criminals running the spam networks that Spamhaus tries to shut down.
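
One of those simple changes is not running an open recursive resolver: if a name server only answers recursive queries from its own users and rate-limits repeated responses, it is far less useful as an amplifier aimed at a spoofed victim. A hedged sketch of what that looks like in a BIND 9 named.conf (the address range is a placeholder, and the rate-limit block assumes a release with response-rate-limiting built in):

    // Placeholder network standing in for "our own users"
    acl "local-clients" { 192.0.2.0/24; localhost; };

    options {
        recursion yes;
        // Refuse recursion, the amplification workhorse, for the open Internet
        allow-recursion { local-clients; };
        allow-query-cache { local-clients; };

        // Response Rate Limiting: throttle floods of identical answers
        // directed at one (likely spoofed) source address
        rate-limit {
            responses-per-second 5;
        };
    };

The other widely recommended fix, filtering packets with spoofed source addresses at the network edge (BCP 38), has to happen at the provider level rather than on the name server itself.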

