
Data center

Original author: 
Ben Cherian



In every emerging technology market, hype seems to wax and wane. One day a new technology is red hot; the next day it's old hat. Sometimes the hype pans out and concepts such as "e-commerce" become a normal way to shop. Other times the hype doesn't meet expectations, and consumers don't buy into paying for e-commerce using Beenz or Flooz. Apparently, Whoopi Goldberg and a slew of big-name VCs ended up making a bad bet on the e-currency market in the late 1990s. Whoopi was paid in cash and shares of Flooz; at least she wasn't paid in Flooz alone! When investing, some bets are great and others are awful, but often one only knows the awful ones in retrospect.

What Does “Software Defined” Mean?

In the infrastructure space, there is a growing trend of companies calling themselves “software defined (x).” Often, it’s a vendor that is re-positioning a decades-old product. On occasion, though, it’s smart, nimble startups and wise incumbents seeing a new way of delivering infrastructure. Either way, the term “software defined” is with us to stay, and there is real meaning and value behind it if you look past the hype.

There are three software defined terms that seem to be bandied around quite often: software defined networking, software defined storage, and the software defined data center. I suspect new terms will soon follow, like software defined security and software defined management. What all these "software defined" concepts really boil down to is virtualization of the underlying component, plus accessibility through a documented API for provisioning, operating, and managing that low-level component.
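To make that definition concrete, here is a minimal sketch of what "accessible through a documented API" looks like to a tenant. The endpoint shape, field names, and component types are invented for illustration; real provisioning APIs (OpenStack, AWS, and the like) differ in detail.

```python
import json

def build_provision_request(component, name, spec):
    """Build the JSON body a tenant might POST to a provisioning API.

    This is a hypothetical request format, not any vendor's actual API.
    """
    return json.dumps({
        "component": component,  # e.g. "compute", "storage", "network"
        "name": name,
        "spec": spec,
    }, sort_keys=True)

# Ask for a 100 GB volume the same way one might ask for a VM or a subnet.
body = build_provision_request("storage", "vol-01", {"size_gb": 100})
```

The point is the uniformity: whether the component is compute, storage, or networking, the interaction reduces to a documented request against an API rather than a manual, device-specific procedure.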

This trend started once Amazon Web Services came onto the scene and convinced the world that the data center could be abstracted into much smaller units and could be treated as disposable pieces of technology, which in turn could be priced as a utility. Vendors watched Amazon closely and saw how this could apply to the data center of the future.

Since compute was already virtualized by VMware and Xen, projects such as Eucalyptus were launched with the intention to be a “cloud controller” that would manage the virtualized servers and provision virtual machines (VMs). Virtualized storage (a.k.a. software defined storage) was a core part of the offering and projects like OpenStack Swift and Ceph showed the world that storage could be virtualized and accessed programmatically. Today, software defined networking is the new hotness and companies like Midokura, VMware/Nicira, Big Switch and Plexxi are changing the way networks are designed and automated.

The Software Defined Data Center

The software defined data center encompasses all the concepts of software defined networking, software defined storage, cloud computing, automation, management, and security. Every low-level infrastructure component in a data center can be provisioned, operated, and managed through an API. Not only are there tenant-facing APIs, but also operator-facing APIs that help the operator automate tasks that were previously manual.

An infrastructure superhero might think, “With great accessibility comes great power.” The data center of the future will be the software defined data center where every component can be accessed and manipulated through an API. The proliferation of APIs will change the way people work. Programmers who have never formatted a hard drive will now be able to provision terabytes of data. A web application developer will be able to set up complex load balancing rules without ever logging into a router. IT organizations will start automating the most mundane tasks. Eventually, beautiful applications will be created that mimic the organization’s process and workflow and will automate infrastructure management.

IT Organizations Will Respond and Adapt Accordingly

Of course, this means the IT organization will have to adapt. The new base level of knowledge in IT will eventually include some sort of programming knowledge. Scripting languages like Ruby and Python will soar even higher in popularity. The network administrators will become programmers. The system administrators will become programmers. During this time, DevOps (development + operations) will make serious inroads in the enterprise, and silos will be refactored, restructured, or flat-out broken down.

Configuration management tools like Chef and Puppet will be the glue for the software defined data center. If done properly, the costs around delivering IT services will be lowered. “Ghosts in the system” will watch all the components (compute, storage, networking, security, etc.) and adapt to changes in real-time to increase utilization, performance, security and quality of service. Monitoring and analytics will be key to realizing this software defined future.
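The core idea behind tools like Chef and Puppet is declarative convergence: you describe the desired state, and the tool computes and applies whatever actions close the gap. A toy sketch of that model, with made-up resource names (this is an illustration of the concept, not either tool's actual engine):

```python
def converge(desired, actual):
    """Return the actions needed to move `actual` toward `desired`.

    Both arguments map resource names to their state. This mimics, in
    miniature, the compare-and-correct loop of configuration management.
    """
    actions = []
    for resource, state in desired.items():
        if actual.get(resource) != state:
            actions.append(("set", resource, state))
    for resource in actual:
        if resource not in desired:
            actions.append(("remove", resource))
    return actions

desired = {"ntp.service": "running", "haproxy.pkg": "installed"}
actual = {"ntp.service": "stopped"}
plan = converge(desired, actual)
```

Because the description is declarative, running the same convergence repeatedly is safe: once actual state matches desired state, the plan is empty.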

Big Changes in Markets Happen With Very Simple Beginnings

All this amazing innovation comes from two very simple concepts — virtualizing the underlying components and making it accessible through an API.

The IT world might look at the software defined data center and say, "This is nothing new. We've been doing this since the '80s." I disagree. What's changed is our universal thinking about accessibility. Ten years ago, we wouldn't have blinked if a networking product came out without an API. Today, an API is part of what we consider a 1.0 release. This thinking is pervasive throughout the data center today with every component. It's Web 2.0 thinking that shaped cloud computing, and now cloud computing is bleeding into enterprise thinking. We're no longer constrained by the need to have deep, specialized knowledge of the low-level components to get basic access to this technology.

With well documented APIs, we have now turned the entire data center into many instruments that can be played by the IT staff (musicians). I imagine the software defined data center to be a Fantasia-like world where Mickey is the IT staff and the brooms are networking, storage, compute and security. The magic is in the coordination, cadence and rhythm of how all the pieces work together. Amazing symphonies of IT will occur in the near future and this is the reason the software defined data center is not a trend to overlook. Maybe Whoopi should take a look at this market instead.

Ben Cherian is a serial entrepreneur who loves playing in the intersection of business and technology. He’s currently the Chief Strategy Officer at Midokura, a network virtualization company. Prior to Midokura, he was the GM of Emerging Technologies at DreamHost, where he ran the cloud business unit. Prior to that, Ben ran a cloud-focused managed services company.

Original author: 
Sean Gallagher

Think mobile devices are low-power? A study by the Center for Energy-Efficient Telecommunications—a joint effort between AT&T's Bell Labs and the University of Melbourne in Australia—finds that wireless networking infrastructure worldwide accounts for 10 times more power consumption than data centers worldwide. In total, it is responsible for 90 percent of the power usage by cloud infrastructure. And that consumption is growing fast.

The study was in part a rebuttal to a Greenpeace report that focused on the power consumption of data centers. "The energy consumption of wireless access dominates data center consumption by a significant margin," the authors of the CEET study wrote. One of the findings of the CEET researchers was that wired networks and data-center based applications could actually reduce overall computing energy consumption by allowing for less powerful client devices.

According to the CEET study, by 2015, wireless "cloud" infrastructure will consume as much as 43 terawatt-hours of electricity worldwide while generating 30 megatons of carbon dioxide. That's the equivalent of 4.9 million automobiles' worth of carbon emissions. This projected power consumption is a 460 percent increase from the 9.2 TWh consumed by wireless infrastructure in 2012.



CloudFlare's CDN is based on Anycast, a standard defined in the Border Gateway Protocol—the routing protocol that's at the center of how the Internet directs traffic. Anycast is part of how BGP supports the multi-homing of IP addresses, in which multiple routers connect a network to the Internet; through the broadcasts of IP addresses available through a router, other routers determine the shortest path for network traffic to take to reach that destination.

Using Anycast means that CloudFlare makes the servers it fronts appear to be in many places, while only using one IP address. "If you do a traceroute to Metallica.com (a CloudFlare customer), depending on where you are in the world, you would hit a different data center," Prince said. "But you're getting back the same IP address."

That means that as CloudFlare adds more data centers, and those data centers advertise the IP addresses of the websites that are fronted by the service, the Internet's core routers automatically re-map the routes to the IP addresses of the sites. There's no need to do anything special with the Domain Name Service to handle load-balancing of network traffic to sites other than point the hostname for a site at CloudFlare's IP address. It also means that when a specific data center needs to be taken down for an upgrade or maintenance (or gets knocked offline for some other reason), the routes can be adjusted on the fly.
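The routing behavior described here can be reduced to a simple rule: every data center advertises the same prefix, and each router prefers the advertisement with the shortest path. A simplified sketch (data-center names and path lengths are invented; real BGP path selection weighs several more attributes):

```python
def pick_route(advertisements):
    """Pick the 'closest' origin for an anycast prefix.

    `advertisements` maps a data center name to the AS-path length a
    given router sees for the same advertised IP prefix. Real BGP
    also considers local preference, MED, etc.; this keeps only the
    shortest-path rule for illustration.
    """
    return min(advertisements, key=advertisements.get)

# The same prefix, seen from a hypothetical router near Tokyo:
routes_seen_from_tokyo = {"Seoul": 2, "San Jose": 4, "Stockholm": 6}
nearest = pick_route(routes_seen_from_tokyo)
```

A router in Europe would see different path lengths for the same prefix and land on a different data center, which is exactly why a traceroute to the same IP address ends somewhere different depending on where you run it.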

That makes it much harder for distributed denial of service attacks to go after servers behind CloudFlare's CDN network; if they're geographically widespread, the traffic they generate gets spread across all of CloudFlare's data centers—as long as the network connections at each site aren't overcome.


Aurich Lawson / Thinkstock

The corporate data center is undergoing a major transformation the likes of which haven't been seen since Intel-based servers started replacing mainframes decades ago. It isn't just the server platform: the entire infrastructure from top to bottom is seeing major changes as applications migrate to private and public clouds, networks get faster, and virtualization becomes the norm.

All of this means tomorrow's data center is going to look very different from today's. Processors, systems, and storage are getting better integrated, more virtualized, and more capable at making use of greater networking and Internet bandwidth. At the heart of these changes are major advances in networking. We're going to examine six specific trends driving the evolution of the next-generation data center and discover what both IT insiders and end-user departments outside of IT need to do to prepare for these changes.

Beyond 10Gb networks

Network connections are getting faster, to be sure. Today it's common to find 10-gigabit Ethernet (10GbE) connections to some large servers. But even 10GbE isn't fast enough for data centers that are heavily virtualized or handling large-scale streaming audio/video applications. As your population of virtual servers increases, you need faster networks to handle the higher traffic loads. Starting up a new virtual server might save you from buying a physical server, but it doesn't lessen the data traffic over the network—in fact, depending on how your virtualization infrastructure works, a virtual server can impact the network far more than a physical one. And as more audio and video applications are used by ordinary enterprises in common business situations, file sizes balloon too. The result is multi-gigabyte files that can quickly fill up your pipes—even the big 10Gb internal pipes that make up your data center's LAN.
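Some back-of-the-envelope arithmetic shows why even 10GbE fills up. The numbers below are best-case line-rate figures, ignoring protocol overhead and contention, so real transfers are slower:

```python
def transfer_seconds(size_gigabytes, link_gbps):
    """Ideal time to move a file of `size_gigabytes` over a link of
    `link_gbps`, ignoring protocol overhead and contention."""
    bits = size_gigabytes * 8  # gigabytes -> gigabits
    return bits / link_gbps

# A 50 GB video master monopolizes a 10GbE link for 40 seconds
# even at perfect line rate.
t = transfer_seconds(50, 10)
```

Now imagine dozens of virtual machines and a handful of such files contending for the same pipe at once, and the pressure to move beyond 10Gb becomes obvious.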



The inside of Equinix's co-location facility in San Jose—the home of CloudFlare's primary data center.

Photo: Peter McCollough/Wired.com

On August 22, CloudFlare, a content delivery network, turned on a brand new data center in Seoul, Korea—the last of ten new facilities started across four continents in a span of thirty days. The Seoul data center brought CloudFlare's number of data centers up to 23, nearly doubling the company's global reach—a significant feat in itself for a company of just 32 employees.

But there was something else relatively significant about the Seoul data center and the other nine facilities set up this summer: despite the fact that the company owned every router and every server in their racks, and each had been configured with great care to handle the demands of CloudFlare's CDN and security services, no one from CloudFlare had ever set foot in them. All that came from CloudFlare directly was a six-page manual instructing facility managers and local suppliers on how to rack and plug in the boxes shipped to them.

"We have nobody stationed in Stockholm or Seoul or Sydney, or a lot of the places that we put these new data centers," CloudFlare CEO Matthew Prince told Ars. "In fact, no CloudFlare employees have stepped foot in half of the facilities where we've launched." The totally remote-controlled data center approach used by the company is one of the reasons that CloudFlare can afford to provide its services for free to most of its customers—and still make a 75 percent profit margin.



We recently moved from Amazon on-demand "cloud" hosting to our own dedicated servers. It took about three months to order and set up the new servers, versus a few minutes to get servers on Amazon. However, the new servers are 2.5X faster and, so far, more reliable.

We love Amazon for fostering development and innovation.  Cloud computing systems are great at getting you new servers.  This helps a lot when you are trying to innovate because you can quickly get new servers for your new services. If you are in a phase of trying new things, cloud hosts will help you.

Cloud hosts also help a lot when you are testing.  It’s amazing how many servers it takes to run an Internet service.  You don’t just need production systems.  You need failover systems.  You need development systems.  You need staging/QA systems.  You will need a lot of servers, and you may need to go to a cloud host.

However, there are problems with cloud hosting that emerge if you need high data throughput. The problems aren't with the servers but with storage and networking. To see why, let's look at how a cloud architecture differs from a local box architecture. You can't directly attach each storage location to the box that it serves. You have to use network attached storage.

DEDICATED ARCHITECTURE:  Server Box -> bus or lan or SAN -> Storage

CLOUD ARCHITECTURE:  Server Box -> Mesh network -> Storage cluster with network replication

1) Underlying problem:  Big data, slow networks

Network attached storage becomes a problem because there is a fundamental mismatch between networking and storage. Storage capacity almost doubles every year. Networking speed grows by a factor of ten only about every ten years – a growth rate roughly 100 times slower. The net result is that storage gets much bigger than network capacity, and it takes a really long time to copy data over a network. I first heard this trend analyzed by John Landry, who called it "Landry's law." In my experience, this problem has gotten to the point where even sneakernet (putting on sneakers and carrying data on big storage media) cannot save us, because after you lace up your sneakers, you have to copy the data OVER A NETWORK to get it onto the storage media, and then copy it again to get it off. When we replicated the Assembla data to the new data center, we realized that it would be slower to do those two copies than to replicate over the Internet, which is slower than sneakernet for long-distance transport but only requires one local network copy.
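The divergence compounds quickly. Taking the growth rates above at face value (storage doubling annually, network speed growing 10x per decade), a quick calculation shows the gap after ten years:

```python
def growth_gap(years):
    """Ratio of storage growth to network growth after `years`,
    using the rates cited above: storage doubles annually, network
    speed grows 10x per decade."""
    storage_growth = 2 ** years          # doubling every year
    network_growth = 10 ** (years / 10)  # 10x every ten years
    return storage_growth / network_growth

# After a decade: storage is ~1024x bigger, networks only 10x faster,
# so a "full" copy over the network takes ~100x longer than it used to.
gap = growth_gap(10)
```

That hundredfold divergence per decade is the arithmetic behind Landry's law: whatever copy time feels tolerable today becomes intolerable surprisingly soon.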

2) Mesh network inconsistency

The Internet was designed as a hub and spoke network, and that part of it works great.  When you send a packet up from your spoke, it travels a predictable route through various hubs to its destination.  When you plug dedicated servers into the Internet, you plug a spoke into the hub, and it works in the traditional way.  The IP network inside a cloud datacenter is more of a “mesh.”  Packets can take a variety of routes between the servers and the storage.  The mesh component is vulnerable to both packet loss and capacity problems.  I can’t present any technical reason why this is true, but in our observation, it is true.  We have seen two different issues:

* Slowdowns and brownouts:  This is a problem at both Amazon and GoGrid, but it is easier to see at Amazon.  Their network, and consequently their storage, has variable performance, with slow periods that I call “brownouts.”

* Packet loss:  This is related to the capacity problems as routers will throw away packets when they are overloaded.  However, the source of the packet loss seems to be much harder to debug in a mesh network.  We see these problems on the GoGrid network, and their attempts to diagnose it are often ineffectual.

3) Replication stoppages

The second goal of cloud computing is to provide high availability. The first goal is to never lose data.  When there is a failure in the storage cluster, the first goal (don’t lose data) kicks in and stomps on the second goal (high availability).  Systems will stop accepting new data and make sure that old data gets replicated.  Network attached storage will typically start replicating data to a new node.  It may either refuse new data until it can be replicated reliably, or it will absorb all network capacity and block normal operation in the mesh.
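The priority inversion just described can be captured in a few lines. This is an illustrative toy, not any vendor's actual logic: when a node fails, the durability goal preempts the availability goal and writes are refused until re-replication completes.

```python
class StorageCluster:
    """Toy model of a replicated storage cluster that stops accepting
    writes while data is under-replicated (durability beats availability)."""

    def __init__(self, replicas_required=3):
        self.replicas_required = replicas_required
        self.replicas_healthy = replicas_required

    def node_failed(self):
        self.replicas_healthy -= 1

    def node_recovered(self):
        self.replicas_healthy += 1

    def accepts_writes(self):
        # Refuse new data until old data is replicated reliably again.
        return self.replicas_healthy >= self.replicas_required

cluster = StorageCluster()
cluster.node_failed()  # one replica lost: writes now blocked
```

From the application's point of view, this looks exactly like the stoppages described above: the cluster is "up" but refuses new data, or saturates the mesh with re-replication traffic, until it is whole again.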

Note that in large, complex systems, variations in both network speed and storage capacity will follow a power-law distribution. This happens "chaotically." When the variation reaches a certain low level of performance, the system fails because of the replication problem.

I think that we should be able to predict the rate of major failures by observing the smaller variations and extrapolating them with a power law. Amazon had a major outage in April 2011. Throughout the previous 18 months, they had performance brownouts, and I think the frequency of one could be predicted from the other.
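A hypothetical illustration of that extrapolation, assuming event rate follows a power law, rate(x) = c · x^(-alpha). The magnitudes and frequencies below are invented, not measured from Amazon's record:

```python
from math import log

def fit_rate(x1, rate1, x2, rate2):
    """Fit c and alpha in rate(x) = c * x**(-alpha) from two observed
    (magnitude, events-per-year) points."""
    alpha = log(rate1 / rate2) / log(x2 / x1)
    c = rate1 * x1 ** alpha
    return c, alpha

def predicted_rate(c, alpha, x):
    return c * x ** (-alpha)

# Invented observations: magnitude-1 brownouts weekly (52/year),
# magnitude-10 slowdowns monthly (12/year). Extrapolate to a
# magnitude-100 event -- a major outage.
c, alpha = fit_rate(1, 52, 10, 12)
major_per_year = predicted_rate(c, alpha, 100)
```

Under these made-up inputs the fit predicts a few major events per year; the real value of the exercise is that routine small-scale monitoring data contains a testable forecast of the rare big failure.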

CONCLUSION

So, if your application is storage-intensive and needs high availability, you must either:

1) Design it so that lots of replication is running all of the time, and you can afford to lose access to any specific storage node. This limits the speed at which your application can absorb data, because you need to reserve a big percentage of scarce network capacity for replication. So, you will have only a small percentage of network capacity available for absorbing external data. However, it is the required architecture for very large systems. It works well if you have a high ratio of output to input, since output just uses the replicated data rather than adding to it.

If you try this replication strategy, you will need to deal with two engineering issues. First, you will need to think through replication specifically for your application. There are many new database architectures that make this tradeoff in various ways. Each has strengths and weaknesses, so if you design a distributed system, you will probably end up using several of these new architectures. Second, you will need to distribute across multiple mesh network locations. It's not enough to have several places to get your data if they are all in the same network neighborhood; if there is a problem, the entire mesh will jam up. Ask about this.

2) Use local storage
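The capacity reservation demanded by option 1 can be quantified with rough arithmetic. With replication factor r, each ingested byte crosses the network roughly r times (once in, r−1 copies out), so sustained ingest is bounded by capacity divided by r. The figures here are illustrative:

```python
def max_ingest_gbps(network_gbps, replication_factor):
    """Upper bound on sustained ingest when every ingested byte must
    also be replicated (replication_factor - 1) more times over the
    same network. A rough bound, ignoring protocol overhead."""
    return network_gbps / replication_factor

# A 10 Gb/s mesh with 3-way replication leaves only ~3.3 Gb/s
# for absorbing external data.
ingest = max_ingest_gbps(10, 3)
```

This is the "small percentage of network capacity" cost of option 1 in concrete terms, and it is why read-heavy (high output-to-input) workloads fit the architecture so much better than write-heavy ones.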


[Video Link, by Beschizza.]

As part of an ongoing hunt for LulzSec, the FBI raided DigitalOne's data center today. But sloppy work by agents in charge of the raid caused unrelated websites to go offline. The victims included Instapaper, and various restaurant and real estate sites belonging to Curbed Network. From the New York Times:

DigitalOne provided all necessary information to pinpoint the servers for a specific I.P. address, Mr. Ostroumow said. However, the agents took entire server racks, perhaps because they mistakenly thought that "one enclosure is = to one server," he said in an e-mail.

DigitalOne had no employees on-site when the raid took place. The data center operator, from which DigitalOne leases space, passed along the information about the raid three hours after it started with the name of the agent and a phone number to call.
