File sharing networks

CloudFlare's CDN is built on Anycast, a routing technique supported by the Border Gateway Protocol (BGP)—the routing protocol that's at the center of how the Internet directs traffic. Anycast is part of how BGP supports the multi-homing of IP addresses, in which multiple routers connect a network to the Internet; based on the route advertisements each router sends for the IP addresses it can reach, other routers determine the shortest path for network traffic to take to reach that destination.

Using Anycast means that CloudFlare makes the servers it fronts appear to be in many places, while only using one IP address. "If you do a traceroute to Metallica.com (a CloudFlare customer), depending on where you are in the world, you would hit a different data center," Prince said. "But you're getting back the same IP address."

That means that as CloudFlare adds more data centers, and those data centers advertise the IP addresses of the websites fronted by the service, the Internet's core routers automatically re-map the routes to those addresses. There's no need to do anything special with the Domain Name System to load-balance traffic to a site beyond pointing its hostname at CloudFlare's IP address. It also means that when a specific data center needs to be taken down for an upgrade or maintenance (or gets knocked offline for some other reason), the routes can be adjusted on the fly.
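One concrete way to see this from the outside is to resolve a CloudFlare-fronted hostname: the DNS answer is the same everywhere, and it's BGP routing, not DNS, that decides which data center answers. A minimal sketch in Python, using the Metallica.com example from above (any CloudFlare-fronted hostname would do):

    # Minimal sketch: resolve a CloudFlare-fronted hostname and print the
    # addresses returned. Run this from vantage points in different regions
    # and the answer stays the same; BGP, not DNS, picks the data center
    # that actually receives the packets.
    import socket

    hostname = "metallica.com"  # the CloudFlare customer cited above

    # gethostbyname_ex returns (canonical name, aliases, list of IPv4 addresses)
    canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
    print(f"{hostname} -> {addresses}")

    # With anycast, these same addresses are announced from many data centers,
    # so a traceroute to any of them ends at whichever facility is "closest"
    # in BGP terms, not at one fixed physical location.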

That makes it much harder for distributed denial of service attacks to go after servers behind CloudFlare's CDN network; if the attacking machines are geographically widespread, the traffic they generate gets spread across all of CloudFlare's data centers—as long as the network connections at each site aren't overwhelmed.

Today, a large collection of Web hosting and service companies announced that they will support Railgun, a compression protocol for dynamic Web content. The list includes the content delivery network and Web security provider CloudFlare, cloud providers Amazon Web Services and Rackspace, and thirty of the world’s biggest Web hosting companies.

Railgun is said to make it possible to double the performance of websites served up through CloudFlare’s global network of data centers. The technology, developed largely in Go, the open-source programming language launched by Google, could significantly change the economics of hosting high-volume websites on Amazon Web Services and other cloud platforms because of the bandwidth savings it provides. It has already cut the backend bandwidth used by 4Chan and Imgur in half. “We've seen a ~50% reduction in backend transfer for our HTML pages (transfer between our servers and CloudFlare's),” said 4Chan’s Chris Poole in an e-mail exchange with Ars. “And pages definitely load a fair bit snappier when Railgun is enabled, since the roundtrip time for CloudFlare to fetch the page is dramatically reduced. We serve over half a billion pages per month (and billions of API hits), so that all adds up fairly quickly.”

Rapid cache updates

Like most CDNs, CloudFlare caches static content at its data centers to help overcome the speed of light. But prepositioning content on a forward server typically hasn't helped performance much for dynamic webpages and Web traffic such as AJAX requests and mobile app API calls, which have relatively little of what's considered static content. That has become a problem for Internet services because of the rise in traffic from mobile devices and dynamic websites.
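One way to get Railgun-style savings on pages like that, where most of the markup repeats from request to request, is delta encoding: keep the previously transferred version at both ends and send only a compressed diff across the wide-area link. The sketch below illustrates that general idea in Python; it is not CloudFlare's actual protocol or wire format.

    # Rough illustration of delta encoding for a "dynamic" page where only a
    # small part changes between requests. Not CloudFlare's protocol; just
    # the general principle, using the standard library.
    import difflib
    import zlib

    def make_delta(old_page: str, new_page: str) -> bytes:
        """Compress a unified diff between two versions of a page."""
        diff = "\n".join(difflib.unified_diff(
            old_page.splitlines(), new_page.splitlines(), lineterm=""))
        return zlib.compress(diff.encode("utf-8"))

    # A mostly static front page: 200 identical list items plus a timestamp.
    items = "\n".join(f"<li>story {i}</li>" for i in range(200))
    old_html = f"<html><body><ul>\n{items}\n</ul><p>Generated at 12:00:01</p></body></html>"
    new_html = f"<html><body><ul>\n{items}\n</ul><p>Generated at 12:00:02</p></body></html>"

    delta = make_delta(old_html, new_html)
    print(f"full page: {len(new_html)} bytes, delta: {len(delta)} bytes")

On a page like this, the diff is a small fraction of the full response, which is the kind of backend-transfer reduction Poole describes.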


The inside of Equinix's co-location facility in San Jose—the home of CloudFlare's primary data center.

Photo: Peter McCollough/Wired.com

On August 22, CloudFlare, a content delivery network, turned on a brand new data center in Seoul, Korea—the last of ten new facilities started across four continents in a span of thirty days. The Seoul data center brought CloudFlare's number of data centers up to 23, nearly doubling the company's global reach—a significant feat in itself for a company of just 32 employees.

But there was something else relatively significant about the Seoul data center and the other nine facilities set up this summer: even though the company owned every router and every server in their racks, and each had been configured with great care to handle the demands of CloudFlare's CDN and security services, no one from CloudFlare had ever set foot in them. All that came from CloudFlare directly was a six-page manual instructing facility managers and local suppliers on how to rack and plug in the boxes shipped to them.

"We have nobody stationed in Stockholm or Seoul or Sydney, or a lot of the places that we put these new data centers," CloudFlare CEO Matthew Prince told Ars. "In fact, no CloudFlare employees have stepped foot in half of the facilities where we've launched." The totally remote-controlled data center approach used by the company is one of the reasons that CloudFlare can afford to provide its services for free to most of its customers—and still make a 75 percent profit margin.


PatPending writes with this excerpt from TorrentFreak:
"The RetroShare network allows people to create a private and encrypted file-sharing network. Users add friends by exchanging PGP certificates with people they trust. All the communication is encrypted using OpenSSL and files that are downloaded from strangers always go through a trusted friend. In other words, it's a true Darknet and virtually impossible to monitor by outsiders. RetroShare founder DrBob told us that while the software has been around since 2006, all of a sudden there's been a surge in downloads. 'The interest in RetroShare has massively shot up over the last two months,' he said."



We recently moved from Amazon on-demand “cloud” hosting to our own dedicated servers.  It took about three months to order and set up the new servers, versus a few minutes to get servers on Amazon.  However, the new servers are 2.5X faster and, so far, more reliable.

We love Amazon for fostering development and innovation.  Cloud computing systems are great at getting you new servers.  This helps a lot when you are trying to innovate because you can quickly get new servers for your new services. If you are in a phase of trying new things, cloud hosts will help you.

Cloud hosts also help a lot when you are testing.  It’s amazing how many servers it takes to run an Internet service.  You don’t just need production systems.  You need failover systems.  You need development systems.  You need staging/QA systems.  You will need a lot of servers, and you may need to go to a cloud host.

However, there are problems with cloud hosting that emerge if you need high data throughput.  The problems aren't with the servers but with storage and networking.  To see why, let's look at how a cloud architecture differs from a local-box architecture.  You can't directly attach each storage location to the box that it serves; you have to use network-attached storage.

DEDICATED ARCHITECTURE:  Server Box -> bus or LAN or SAN -> Storage

CLOUD ARCHITECTURE:  Server Box -> Mesh network -> Storage cluster with network replication

1) Underlying problem:  Big data, slow networks

Network-attached storage becomes a problem because there is a fundamental mismatch between networking and storage.  Storage capacity roughly doubles every year.  Networking speed grows by a factor of ten about every ten years, roughly a hundred times less growth over the same decade.  The net result is that storage gets much bigger than network capacity, and it takes a really long time to copy data over a network.  I first heard this trend analyzed by John Landry, who called it "Landry's law."  In my experience, this problem has gotten to the point where even sneakernet (putting on sneakers and carrying data on big storage media) cannot save us, because after you lace up your sneakers, you have to copy the data OVER A NETWORK to get it onto the storage media, and then copy it again to get it off.  When we replicated the Assembla data to the new datacenter, we realized that it would be slower to do those two copies than to replicate over the Internet, which is slower than sneakernet for long-distance transport but only requires one local network copy.
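To put rough numbers on that mismatch: storage doubling every year against network speed growing tenfold per decade (roughly 26 percent a year) means the time to copy a "full" data set over the network grows by about 60 percent every year. A quick back-of-the-envelope calculation, with illustrative starting points of 1 TB of data and roughly 1 Gbps of effective throughput (about 0.45 TB per hour):

    # Back-of-the-envelope look at the storage-vs-network mismatch described
    # above. Growth rates come from the text; the starting sizes are only
    # illustrative assumptions.
    storage_growth = 2.0              # capacity roughly doubles every year
    network_growth = 10 ** (1 / 10)   # 10x per decade, ~1.26x per year

    capacity_tb = 1.0                 # assume 1 TB of data today
    throughput_tb_per_hour = 0.45     # assume ~1 Gbps effective, ~0.45 TB/hour

    for year in range(0, 11, 2):
        cap = capacity_tb * storage_growth ** year
        net = throughput_tb_per_hour * network_growth ** year
        hours = cap / net
        print(f"year {year:2d}: {cap:7.1f} TB at {net:4.2f} TB/hour -> {hours:7.1f} hours to copy")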

2) Mesh network inconsistency

The Internet was designed as a hub and spoke network, and that part of it works great.  When you send a packet up from your spoke, it travels a predictable route through various hubs to its destination.  When you plug dedicated servers into the Internet, you plug a spoke into the hub, and it works in the traditional way.  The IP network inside a cloud datacenter is more of a “mesh.”  Packets can take a variety of routes between the servers and the storage.  The mesh component is vulnerable to both packet loss and capacity problems.  I can’t present any technical reason why this is true, but in our observation, it is true.  We have seen two different issues:

* Slowdowns and brownouts:  This is a problem at both Amazon and GoGrid, but it is easier to see at Amazon.  Their network, and consequently their storage, has variable performance, with slow periods that I call “brownouts.”

* Packet loss:  This is related to the capacity problems, as routers throw away packets when they are overloaded.  However, the source of the packet loss seems to be much harder to debug in a mesh network.  We see these problems on the GoGrid network, and their attempts to diagnose them are often ineffectual.

3) Replication stoppages

Cloud storage systems have two goals: the first is to never lose data; the second is to provide high availability. When there is a failure in the storage cluster, the first goal (don't lose data) kicks in and stomps on the second goal (high availability).  Systems will stop accepting new data and make sure that old data gets replicated.  Network-attached storage will typically start replicating data to a new node.  It may either refuse new data until everything can be replicated reliably, or absorb all available network capacity and block normal operation in the mesh.
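A toy model of that priority ordering, where a node failure pauses new writes until every block is back at full replication (illustrative only, not any particular vendor's logic):

    # Tiny state-machine sketch of "durability outranks availability":
    # a node failure drops replica counts, and writes stop until the
    # cluster has re-replicated everything. Purely illustrative.
    class StorageCluster:
        def __init__(self, replication_target: int = 3):
            self.replication_target = replication_target
            self.replicas = {"block-A": 3, "block-B": 3}   # block -> live replica count

        def node_failed(self):
            # losing a node drops every block it held below the target
            self.replicas = {b: n - 1 for b, n in self.replicas.items()}

        def accepting_writes(self) -> bool:
            # availability yields to durability: writes stop while re-replicating
            return all(n >= self.replication_target for n in self.replicas.values())

    cluster = StorageCluster()
    print(cluster.accepting_writes())   # True
    cluster.node_failed()
    print(cluster.accepting_writes())   # False until replication catches up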

Note that in large, complex systems, variations in both network speed and storage capacity will follow a power-law distribution.  This happens "chaotically."  When the variation reaches a certain low level of performance, the system fails because of the replication problem.

I think that we should be able to predict the rate of major failures by observing the smaller variations and extrapolating them with a power law.  Amazon had a major outage in April 2011.  Throughout the previous 18 months, they had performance brownouts, and I think the frequency of one could be predicted from the frequency of the other.
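One way to act on that idea is to fit the observed brownouts to a power-law tail and read off how often a far larger event should appear. A hedged sketch, using the standard maximum-likelihood estimate for a continuous power law; the severity numbers and observation window are invented for illustration, not Amazon measurements:

    # Sketch: estimate a power-law tail from observed slowdown severities and
    # extrapolate the rate of much larger events. Severities are illustrative
    # (e.g., "seconds of degraded storage latency"), not real measurements.
    import math

    severities = [1.2, 1.5, 2.0, 2.2, 3.1, 4.0, 5.5, 7.0, 9.5, 14.0, 22.0, 40.0]
    observation_months = 18      # window over which these events were seen
    x_min = min(severities)

    # Maximum-likelihood (Hill) estimate of the tail exponent alpha for a
    # continuous power law with P(X > x) ~ (x / x_min) ** -(alpha - 1).
    n = len(severities)
    alpha = 1 + n / sum(math.log(x / x_min) for x in severities)

    def expected_events_bigger_than(threshold: float) -> float:
        """Expected count of events above `threshold` in the same window."""
        return n * (threshold / x_min) ** -(alpha - 1)

    big = 100 * x_min  # an event two orders of magnitude worse than the smallest blip
    print(f"alpha ~= {alpha:.2f}")
    print(f"expected events above {big:.0f}: "
          f"{expected_events_bigger_than(big):.2f} per {observation_months} months")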

CONCLUSION

So, if your application is storage-intensive and needs high availability, you must either:

1) Design it so that lots of replication is running all of the time, and you can afford to lose access to any specific storage node.  This places limits on the speed at which your application can absorb data, because you need to reserve a big percentage of scarce network capacity for replication; only a small percentage of network capacity is left for absorbing external data (a rough bandwidth budget for this is sketched after this list).  However, it is the required architecture for very large systems.  It works well if you have a high ratio of output to input, since output just uses the replicated data rather than adding to it.

If you try this replication strategy, you will need to deal with two engineering issues.  First, you will need to think through replication specifically for your application.  There are many new database architectures that make this tradeoff in various ways; each has strengths and weaknesses, so if you design a distributed system, you will probably end up using several of them.  Second, you will need to distribute across multiple mesh network locations.  It's not enough just to have several places to get your data if they are all in the same network neighborhood; if there is a problem, the entire mesh will jam up.  Ask about this.

2) Use local storage
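As mentioned under option 1, here is a rough bandwidth budget for the always-on replication approach. The link speed and replication factor are assumptions, not measurements; the point is only that every ingested byte has to cross the mesh several times.

    # Rough budget for option 1: continuous replication on a shared mesh link.
    # Numbers are illustrative assumptions, not measurements.
    link_gbps = 1.0          # usable network capacity per node
    replication_factor = 3   # each ingested byte ends up on R nodes in total

    # In this simple model, each byte accepted from the outside crosses the
    # mesh replication_factor times (once in, R - 1 copies out), so sustained
    # ingest competes with its own replicas for the same link.
    max_ingest_gbps = link_gbps / replication_factor
    print(f"of a {link_gbps} Gbps link, only ~{max_ingest_gbps:.2f} Gbps "
          f"is left for absorbing external data")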
