
CloudFlare

Original author: Jon Brodkin

Photo: Niall Kennedy

Todd Kuehnl has been a developer for nearly 20 years and says he's tried "pretty much every language under the sun."

But it was only recently that Kuehnl discovered Go, a programming language unveiled by Google almost four years ago. Go is still a new kid on the block, but for Kuehnl, the conversion was quick. Now he says "Go is definitely by far my favorite programming language to work in." Kuehnl admitted he is "kind of a fanboy."

I'm no expert in programming, but I talked to Kuehnl because I was curious what might draw experienced coders to switch from proven languages to a brand new one (albeit one co-invented by the famous Ken Thompson, creator of Unix and the B programming language). Google itself runs some of its back-end systems on Go, no surprise for a company that designs its own servers and much of the software (right down to the operating systems) that its employees use. But why would non-Google engineers go with Go?


Original author: Dan Goodin

Photo: Wikipedia

Coordinated attacks used to knock websites offline grew meaner and more powerful in the past three months, with an eight-fold increase in the average amount of junk traffic used to take sites down, according to a company that helps customers weather the so-called distributed denial-of-service campaigns.

The average amount of bandwidth used in DDoS attacks mushroomed to an astounding 48.25 gigabits per second in the first quarter, with peaks as high as 130 Gbps, according to Hollywood, Florida-based Prolexic. During the same period last year, bandwidth in the average attack was 6.1 Gbps and in the fourth quarter of last year it was 5.9 Gbps. The average duration of attacks also grew to 34.5 hours, compared with 28.5 hours last year and 32.2 hours during the fourth quarter of 2012. Earlier this month, Prolexic engineers saw an attack that exceeded 160 Gbps, and officials said they wouldn't be surprised if peaks break the 200 Gbps threshold by the end of June.

The spikes are brought on by new attack techniques that Ars first chronicled in October. Rather than using compromised PCs in homes and small offices to flood websites with torrents of traffic, attackers are relying on Web servers, which often have orders of magnitude more bandwidth at their disposal. As Ars reported last week, an ongoing attack on servers running the WordPress blogging application is actively seeking new recruits that can be harnessed into never-before-seen botnets to bring still more firepower to bear.



CloudFlare's CDN is based on Anycast, a routing scheme supported by the Border Gateway Protocol (BGP), the routing protocol at the center of how the Internet directs traffic. Anycast is part of how BGP supports the multi-homing of IP addresses, in which multiple routers connect a network to the Internet; by advertising which IP addresses are reachable through a given router, routers let their peers determine the shortest path for network traffic to take to reach that destination.

Using Anycast means that CloudFlare makes the servers it fronts appear to be in many places, while only using one IP address. "If you do a traceroute to Metallica.com (a CloudFlare customer), depending on where you are in the world, you would hit a different data center," Prince said. "But you're getting back the same IP address."

That means that as CloudFlare adds more data centers, and those data centers advertise the IP addresses of the websites fronted by the service, the Internet's core routers automatically re-map their routes to those addresses. There's no need to do anything special with the Domain Name System (DNS) to load-balance traffic to a site beyond pointing its hostname at CloudFlare's IP address. It also means that when a specific data center needs to be taken down for an upgrade or maintenance (or gets knocked offline for some other reason), the routes can be adjusted on the fly.
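The effect is easy to check from any machine: the hostname of a CloudFlare-fronted site resolves to one and the same anycast address wherever the lookup happens, and it is BGP routing, not DNS, that decides which data center answers. The short Go sketch below simply performs that lookup; it uses metallica.com because Prince cites it above, and it assumes the site is still served through CloudFlare.

package main

import (
	"fmt"
	"net"
)

func main() {
	// metallica.com is the CloudFlare customer Prince mentions above;
	// this assumes the site is still fronted by CloudFlare.
	ips, err := net.LookupIP("metallica.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// Wherever this runs, the same anycast address comes back; BGP, not DNS,
	// decides which CloudFlare data center actually answers for it.
	for _, ip := range ips {
		fmt.Println(ip)
	}
}

Run the same program from different parts of the world and the printed address stays the same; a traceroute toward it, as Prince describes, ends at whichever CloudFlare data center is closest in routing terms.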

That makes it much harder for distributed denial-of-service attacks to reach the servers behind CloudFlare's CDN: because the attacking machines are themselves geographically widespread, the traffic they generate gets spread across all of CloudFlare's data centers, as long as the network connections at each site aren't overwhelmed.


Today, a large collection of Web hosting and service companies announced that they will support Railgun, a compression protocol for dynamic Web content. The list includes the content delivery network and Web security provider CloudFlare, cloud providers Amazon Web Services and Rackspace, and thirty of the world’s biggest Web hosting companies.

Railgun is said to make it possible to double the performance of websites served up through CloudFlare's global network of data centers. The technology was largely developed in the open-source Go programming language launched by Google; it could significantly change the economics of hosting high-volume websites on Amazon Web Services and other cloud platforms because of the bandwidth savings it provides. It has already cut the bandwidth used by 4Chan and Imgur by half. “We've seen a ~50% reduction in backend transfer for our HTML pages (transfer between our servers and CloudFlare's),” said 4Chan’s Chris Poole in an e-mail exchange with Ars. “And pages definitely load a fair bit snappier when Railgun is enabled, since the roundtrip time for CloudFlare to fetch the page is dramatically reduced. We serve over half a billion pages per month (and billions of API hits), so that all adds up fairly quickly.”

Rapid cache updates

Like most CDNs, CloudFlare uses caching of static content at its data centers to help overcome the speed of light. But prepositioning content on a forward server typically hasn’t helped performance much for dynamic webpages and Web traffic such as AJAX requests and mobile app API calls, which have relatively little content that can be treated as static. That has created a problem for Internet services because of the rise in traffic from mobile devices and dynamic websites.
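The excerpt doesn't spell out how Railgun achieves its savings on dynamic content, but the roughly 50 percent cut in backend transfer that 4Chan reports is the kind of result delta encoding produces: keep the previously transferred version of a page at both ends of the origin-to-CloudFlare link and send only the bytes that changed. The Go sketch below is a deliberately naive, hypothetical illustration of that idea, not CloudFlare's actual protocol; the diff and apply functions and the sample pages are invented for the example.

package main

import "fmt"

// patch describes a new page in terms of the previous one: keep `prefix`
// bytes from the front, keep `suffix` bytes from the end, and splice
// `middle` in between.
type patch struct {
	prefix, suffix int
	middle         []byte
}

// diff computes a deliberately naive prefix/suffix delta between two
// versions of a page.
func diff(prev, next []byte) patch {
	p := 0
	for p < len(prev) && p < len(next) && prev[p] == next[p] {
		p++
	}
	s := 0
	for s < len(prev)-p && s < len(next)-p &&
		prev[len(prev)-1-s] == next[len(next)-1-s] {
		s++
	}
	return patch{
		prefix: p,
		suffix: s,
		middle: append([]byte(nil), next[p:len(next)-s]...),
	}
}

// apply rebuilds the new page from the previous version plus a patch.
func apply(prev []byte, d patch) []byte {
	out := append([]byte(nil), prev[:d.prefix]...)
	out = append(out, d.middle...)
	out = append(out, prev[len(prev)-d.suffix:]...)
	return out
}

func main() {
	// Two successive renderings of an imaginary dynamic page: only a counter changes.
	prev := []byte("<html><body><p>Replies: 41</p><footer>example-board</footer></body></html>")
	next := []byte("<html><body><p>Replies: 42</p><footer>example-board</footer></body></html>")

	d := diff(prev, next)
	fmt.Printf("full page: %d bytes, delta sent instead: %d bytes\n", len(next), len(d.middle))
	fmt.Println("reconstruction matches:", string(apply(prev, d)) == string(next))
}

For a page where most of the markup is identical from one request to the next and only a counter or a handful of posts change, the delta is a tiny fraction of the full response, which is where savings of this kind on the origin-to-CDN leg come from.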



The inside of Equinix's co-location facility in San Jose—the home of CloudFlare's primary data center.

Photo: Peter McCollough/Wired.com

On August 22, CloudFlare, a content delivery network, turned on a brand new data center in Seoul, Korea, the last of ten new facilities brought online across four continents in a span of thirty days. The Seoul data center brought CloudFlare's count of data centers up to 23, nearly doubling the company's global reach, a significant feat in itself for a company of just 32 employees.

But there was something else notable about the Seoul data center and the other nine facilities set up this summer: despite the fact that the company owned every router and every server in their racks, and each had been configured with great care to handle the demands of CloudFlare's CDN and security services, no one from CloudFlare had ever set foot in them. All that came from CloudFlare directly was a six-page manual instructing facility managers and local suppliers on how to rack and plug in the boxes shipped to them.

"We have nobody stationed in Stockholm or Seoul or Sydney, or a lot of the places that we put these new data centers," CloudFlare CEO Matthew Prince told Ars. "In fact, no CloudFlare employees have stepped foot in half of the facilities where we've launched." The totally remote-controlled data center approach used by the company is one of the reasons that CloudFlare can afford to provide its services for free to most of its customers—and still make a 75 percent profit margin.

