
Andy Singleton


We recently moved from Amazon on-demand “cloud” hosting to our own dedicated servers.  It took about three months to order and set up the new servers, versus a few minutes to get servers on Amazon.  However, the new servers are 2.5X faster and, so far, more reliable.

We love Amazon for fostering development and innovation.  Cloud computing systems are great at getting you new servers.  This helps a lot when you are trying to innovate because you can quickly get new servers for your new services. If you are in a phase of trying new things, cloud hosts will help you.

Cloud hosts also help a lot when you are testing.  It’s amazing how many servers it takes to run an Internet service.  You don’t just need production systems.  You need failover systems.  You need development systems.  You need staging/QA systems.  You will need a lot of servers, and you may need to go to a cloud host.

However, there are problems with cloud hosting that emerge if you need high data throughput.  The problems aren’t with the servers but with storage and networking.  To see why, let’s look at how a cloud architecture differs from a local box architecture.  In the cloud, you can’t directly attach each storage location to the box that it serves.  You have to use network attached storage.

DEDICATED ARCHITECTURE:  Server Box -> bus, LAN, or SAN -> Storage

CLOUD ARCHITECTURE:  Server Box -> Mesh network -> Storage cluster with network replication

1) Underlying problem:  Big data, slow networks

Network attached storage becomes a problem because there is a fundamental mismatch between storage and networking.  Storage capacity almost doubles every year, roughly a thousandfold increase per decade.  Networking speed grows by a factor of ten only about every ten years, about 100 times slower growth.  The net result is that storage gets much bigger than network capacity, and it takes a really long time to copy data over a network.  I first heard this trend analyzed by John Landry, who called it “Landry’s law.”  In my experience, this problem has gotten to the point where even sneakernet (putting on sneakers and carrying data on big storage media) cannot save us, because after you lace up your sneakers you have to copy the data OVER A NETWORK to get it onto the storage media, and then copy it again to get it off.  When we replicated the Assembla data to the new datacenter, we realized that doing those two copies would be slower than replicating over the Internet, which is slower than sneakernet for long-distance transport but only requires one network copy.
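
To see how the two local copies can lose to a single Internet transfer, here is a rough back-of-the-envelope sketch in Python. All of the sizes, throughputs, and shipping times are illustrative assumptions, not our actual numbers.

```python
# Back-of-the-envelope comparison: sneakernet (two local network copies plus
# physical shipping) versus one direct copy over the Internet.
# All numbers below are illustrative assumptions.

def transfer_hours(size_tb, throughput_mbps):
    """Hours to move size_tb terabytes at a sustained throughput in megabits/s."""
    bits = size_tb * 8e12                      # terabytes -> bits (decimal TB)
    seconds = bits / (throughput_mbps * 1e6)
    return seconds / 3600

DATA_TB = 2.0            # assumed size of the data set
LAN_MBPS = 1000          # assumed local network throughput (1 Gbit/s)
WAN_MBPS = 300           # assumed sustained Internet throughput
SHIPPING_HOURS = 24      # assumed courier time for the sneakernet drive

# Sneakernet: copy onto the drive, ship it, copy it off again.
sneakernet = 2 * transfer_hours(DATA_TB, LAN_MBPS) + SHIPPING_HOURS

# Direct replication: one copy straight over the Internet.
internet = transfer_hours(DATA_TB, WAN_MBPS)

print(f"sneakernet: {sneakernet:.1f} h, internet: {internet:.1f} h")
```

With these assumed numbers the direct Internet copy finishes in roughly half the time, even though the wide-area link is much slower than the LAN.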

2) Mesh network inconsistency

The Internet was designed as a hub-and-spoke network, and that part of it works great.  When you send a packet up from your spoke, it travels a predictable route through various hubs to its destination.  When you plug dedicated servers into the Internet, you plug a spoke into the hub, and it works in the traditional way.  The IP network inside a cloud datacenter is more of a “mesh.”  Packets can take a variety of routes between the servers and the storage.  The mesh component is vulnerable to both packet loss and capacity problems.  I can’t point to a specific technical cause, but in our observation it is true.  We have seen two different issues:

* Slowdowns and brownouts:  This is a problem at both Amazon and GoGrid, but it is easier to see at Amazon.  Their network, and consequently their storage, has variable performance, with slow periods that I call “brownouts.”

* Packet loss:  This is related to the capacity problems, since routers throw away packets when they are overloaded.  However, the source of the packet loss seems to be much harder to debug in a mesh network.  We see these problems on the GoGrid network, and their attempts to diagnose them are often ineffectual.

3) Replication stoppages

Cloud storage has two goals: first, never lose data; second, provide high availability.  When there is a failure in the storage cluster, the first goal (don’t lose data) kicks in and stomps on the second goal (high availability).  Systems will stop accepting new data and make sure that old data gets replicated.  Network attached storage will typically start replicating data to a new node.  It may either refuse new data until the existing data can be replicated reliably, or it will absorb all available network capacity and block normal operation in the mesh.
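
As a rough illustration of that “durability before availability” behavior, here is a minimal sketch of a storage node that refuses writes while any object is under-replicated. It is a toy model with hypothetical names, not how Amazon EBS or any particular product actually works.

```python
# Toy model of "durability first": new writes are rejected while the
# cluster is re-replicating data that lost a copy in a node failure.

class StorageNode:
    def __init__(self, replication_factor=3):
        self.replication_factor = replication_factor
        self.replicas = {}          # key -> confirmed replica count
        self.rereplicating = set()  # keys that lost a replica in a failure

    def write(self, key, value):
        # Goal 1 (never lose data) stomps on goal 2 (availability): while
        # repair is in progress, new writes are refused (or heavily throttled).
        if self.rereplicating:
            raise IOError("cluster is re-replicating after a failure; write refused")
        self.replicas[key] = self.replication_factor   # assume synchronous replication
        return value

    def node_failed(self, lost_keys):
        # A node failure drops one replica of every object it held.
        for k in lost_keys:
            self.replicas[k] = self.replicas.get(k, self.replication_factor) - 1
            self.rereplicating.add(k)

    def repair_step(self, key):
        # Re-replication consumes network capacity; once a key is back at
        # full strength it stops blocking writes.
        self.replicas[key] += 1
        if self.replicas[key] >= self.replication_factor:
            self.rereplicating.discard(key)

node = StorageNode()
node.write("a", b"data")
node.node_failed(["a"])       # one replica of "a" is lost
# node.write("b", b"more")    # would raise IOError: durability comes first
node.repair_step("a")         # after repair, writes are accepted again
```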

Note that in a large, complex system, variations in both network and storage performance will follow a power law distribution.  This happens "chaotically."  When the variation reaches a certain low level of performance, the system fails because of the replication problem.

I think that we should be able to predict the rate of major failures by observing the smaller variations and extrapolating with a power law.  Amazon had a major outage in April 2011. Throughout the previous 18 months they had performance brownouts, and I think the frequency of the big failure could have been predicted from the frequency of the small ones.
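
Here is a sketch of the extrapolation idea: fit a power-law tail to the sizes of the small slowdowns you observe, then read off how often an outage-sized event should occur. It uses a crude rank-size fit, and the sample durations are made-up placeholders, not real Amazon measurements.

```python
# Crude power-law extrapolation: fit log(rank) vs log(size) of observed
# small incidents, then estimate how often an event above `threshold` occurs.
import numpy as np

def major_failures_per_day(event_sizes, threshold, window_days):
    """Estimated events/day exceeding `threshold`, assuming a power-law tail.

    event_sizes: magnitudes of small incidents (e.g. minutes of degraded
    performance) observed over `window_days`.
    """
    sizes = np.sort(np.asarray(event_sizes, dtype=float))[::-1]
    ranks = np.arange(1, len(sizes) + 1)        # rank-size (survival) plot
    slope, intercept = np.polyfit(np.log(sizes), np.log(ranks), 1)
    # Expected number of events at least as big as `threshold` in the window:
    expected_count = np.exp(intercept) * threshold ** slope
    return expected_count / window_days

# Hypothetical brownout durations (minutes) over 18 months (~540 days):
brownouts = [3, 4, 5, 5, 7, 8, 10, 12, 15, 20, 25, 40, 60, 90]
print(major_failures_per_day(brownouts, threshold=12 * 60, window_days=540))
```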

CONCLUSION

So, if your application is storage intensive and needs high availability, you must either:

1) Design it so that lots of replication is running all of the time, and so that you can afford to lose access to any specific storage node.  This places limits on the speed at which your application can absorb data, because you need to reserve a big percentage of scarce network capacity for replication, leaving only a small percentage available for absorbing external data (see the back-of-the-envelope sketch after this list).  However, it is the required architecture for very large systems.  It works well if you have a high ratio of output to input, since output just uses the replicated data rather than adding to it.

If you try this replication strategy, you will need to deal with two engineering issues.  First, you will need to think through replication specifically for your application.  There are many new database architectures that make this tradeoff in various ways.  Each has strengths and weaknesses, so if you design a distributed system, you will probably end up using several of these new architectures.  Second, you will need to distribute across multiple mesh network locations. It's not enough to have several copies of your data in the same network neighborhood; if there is a problem, the entire mesh will jam up.  Ask your provider about this.

2) Use local storage
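
To make the bandwidth cost of option 1 concrete, here is a small sketch. The replication factor, link speed, and reserved share are assumptions for illustration, not measurements.

```python
# Rough sketch of why replication caps ingest rate: every byte accepted from
# the outside must also cross the internal network (replication_factor - 1)
# more times to reach its replicas. Numbers are illustrative assumptions.

def max_ingest_mbps(network_mbps, replication_factor, replication_share=0.6):
    """Upper bound on external data intake for one node.

    network_mbps: usable network capacity of the node
    replication_factor: total copies kept of each byte
    replication_share: fraction of capacity reserved for replication traffic
    """
    copies_over_network = replication_factor - 1
    # Replication traffic = ingest * copies_over_network, and it must fit
    # inside the reserved share of the link.
    ingest_limit_from_replication = (network_mbps * replication_share) / copies_over_network
    ingest_limit_from_frontend = network_mbps * (1 - replication_share)
    return min(ingest_limit_from_replication, ingest_limit_from_frontend)

# Example: a 1 Gbit/s link, 3 copies of every byte, 60% reserved for replication
print(max_ingest_mbps(1000, 3))   # -> 300.0 Mbit/s of external intake
```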


The Google Chrome team has released a blog post and a presentation that describe how their process can deliver a new version of Chrome to you every six weeks.  Maybe that is why, when I look at my logs, Chrome use has been climbing so quickly - up to 30% and gaining 2% per month. Their development process is very, very close to the "release early and often" process that I have been recommending, with at least one improvement.

The Chrome team can deliver a quality product with rapid evolution, while avoiding the stress that comes from arguing about scope: whether a feature needs to be in the release, how they will stretch to squeeze it in, or whose schedule has to change. They just release every six weeks, and people (read: meddling managers) have confidence that if a feature doesn't make this release, there will be a new release soon.

Here is the basic structure:

* The release goes out on time.  If a feature isn't ready, it just gets moved to the next release.

* Features that are not ready can be disabled (hidden).  They have a flag for that.  After you start stabilizing a release, the only major thing that you can do is disable a feature, and you can do that by patching one little configuration.

* They don't do development on "long running branches" followed by a big merge.  Everyone works most of the time on one trunk branch.

* They have a long "feature freeze" period for testing, translation, and stabilization.  They make a release branch every six weeks and feature freeze it, and then the development team just keeps going. The release team spends the next six weeks stabilizing and building the production release. So, at any time, there is one development version and one version in stabilization.
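
Chrome has its own flag machinery, so this is not their implementation; as a general illustration of "disable a feature by patching one little configuration," here is a minimal run-time feature flag sketch with hypothetical names.

```python
# Minimal sketch of a run-time feature flag (hypothetical names).
# A half-finished feature ships in the release branch but stays hidden
# until the single configuration entry below is flipped.

FEATURE_FLAGS = {
    "new_editor": False,   # not ready: disabled for this release
    "fast_sync": True,     # ready: enabled
}

def feature_enabled(name):
    return FEATURE_FLAGS.get(name, False)

def render_toolbar():
    items = ["open", "save"]
    if feature_enabled("new_editor"):
        items.append("new_editor")   # only appears when the flag is on
    return items

print(render_toolbar())   # -> ['open', 'save'] until the flag is flipped
```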

[Figure: Chrome dev cycle]

Andy's Notes

I usually ask for a feature show/hide flag if I think a feature is going to pose a problem.  I now realize that I should ask for this on everything.  That's the improvement.

Before we get to the end of a release cycle (milestone), we create a new release milestone and start moving tickets.  This allows us to focus on the stuff that will be completed with quality.

Agile planning tools try to trick you into believing that you need to finish a complete feature inside one release.  This defeats the whole idea of incremental development and causes the sort of stress that makes releases difficult.  We designed the Assembla agile planner so that you can schedule the tasks for one feature into more than one milestone, encouraging a calmer incremental approach.

I also believe that developers should work together on basically the same code, with one branch, and that maintaining and merging feature branches is often not worth the effort.  Assembla will be introducing new workflows for git that make it easier to push, test, and review code in a shared repository.

The reality of continuous development processes is that you do need freeze time to stabilize a production-quality release.  This overlapping approach, where you cut a stabilization branch and keep developing on trunk, is a realistic way to deliver quality releases.

In this process there are three live versions of the software at any given time - the production release, the release that is in stabilization, and the development version.  That's easy to manage for something self-contained like a browser, which just requires some local disk space.  If you build Web server software, it's not so easy to have three completely separate environments (development, stabilization, and production), each with databases, etc.  It's easier on the infrastructure side to have just one dev/stabilization environment and switch your team from development mode to stabilization mode in that environment. However, this is the age of cloud computing and on-demand infrastructure.  An investment in the infrastructure for that third environment will allow your dev team to keep going.
