Eucalyptus

Original author: Ben Cherian


Image copyright isak55

In every emerging technology market, hype seems to wax and wane. One day a new technology is red hot, the next day it’s old hat. Sometimes the hype pans out and concepts such as “e-commerce” become a normal way to shop. Other times the hype doesn’t meet expectations, and consumers don’t buy into paying for e-commerce using Beenz or Flooz. Apparently, Whoopi Goldberg and a slew of big name VCs ended up making a bad bet on the e-currency market in the late 1990s. Whoopi was paid in cash and shares of Flooz. At least she wasn’t paid in Flooz alone! When investing, some bets are great and others are awful, but often one only knows which were awful in retrospect.

What Does “Software Defined” Mean?

In the infrastructure space, there is a growing trend of companies calling themselves “software defined (x).” Often, it’s a vendor re-positioning a decades-old product. On occasion, though, it’s smart, nimble startups and wise incumbents seeing a new way of delivering infrastructure. Either way, the term “software defined” is here to stay, and there is real meaning and value behind it if you look past the hype.

There are three software defined terms that seem to be bandied around quite often: software defined networking, software defined storage, and the software defined data center. I suspect new terms will soon follow, like software defined security and software defined management. What all these software defined concepts really boil down to is virtualization of the underlying component, plus accessibility through some documented API to provision, operate and manage that low-level component.

This trend started once Amazon Web Services came onto the scene and convinced the world that the data center could be abstracted into much smaller units and could be treated as disposable pieces of technology, which in turn could be priced as a utility. Vendors watched Amazon closely and saw how this could apply to the data center of the future.

Since compute was already virtualized by VMware and Xen, projects such as Eucalyptus were launched with the intention to be a “cloud controller” that would manage the virtualized servers and provision virtual machines (VMs). Virtualized storage (a.k.a. software defined storage) was a core part of the offering and projects like OpenStack Swift and Ceph showed the world that storage could be virtualized and accessed programmatically. Today, software defined networking is the new hotness and companies like Midokura, VMware/Nicira, Big Switch and Plexxi are changing the way networks are designed and automated.

The Software Defined Data Center

The software defined data center encompasses all the concepts of software defined networking, software defined storage, cloud computing, automation, management and security. Every low-level infrastructure component in a data center can be provisioned, operated, and managed through an API. Not only are there tenant-facing APIs, but also operator-facing APIs that help the operator automate tasks that were previously manual.

An infrastructure superhero might think, “With great accessibility comes great power.” The data center of the future will be the software defined data center where every component can be accessed and manipulated through an API. The proliferation of APIs will change the way people work. Programmers who have never formatted a hard drive will now be able to provision terabytes of data. A web application developer will be able to set up complex load balancing rules without ever logging into a router. IT organizations will start automating the most mundane tasks. Eventually, beautiful applications will be created that mimic the organization’s process and workflow and will automate infrastructure management.
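To make that concrete, here is a minimal sketch of what API-driven provisioning looks like, assuming an EC2-compatible endpoint and the boto3 library; the endpoint URL, credentials, image ID and availability zone are placeholders rather than any particular product’s values.

    # Minimal sketch: provisioning compute and block storage through an
    # EC2-compatible API. The endpoint, credentials, image ID and zone are
    # placeholders; a private cloud such as Eucalyptus or the public AWS
    # endpoint could sit behind the same calls.
    import boto3

    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://cloud.example.com:8773/services/compute",  # placeholder
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
        region_name="cloud-region-1",
    )

    # Launch a virtual machine without ever touching a hypervisor console.
    reservation = ec2.run_instances(
        ImageId="emi-12345678",      # placeholder image ID
        MinCount=1,
        MaxCount=1,
        InstanceType="m1.small",
    )

    # Provision a terabyte of block storage without formatting a single disk.
    volume = ec2.create_volume(Size=1024, AvailabilityZone="cloud-region-1a")

    print(reservation["Instances"][0]["InstanceId"], volume["VolumeId"])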

IT Organizations Will Respond and Adapt Accordingly

Of course, this means the IT organization will have to adapt. The new base level of knowledge in IT will eventually include some sort of programming knowledge. Scripting languages like Ruby and Python will soar even higher in popularity. The network administrators will become programmers. The system administrators will become programmers. During this time, DevOps (development + operations) will make serious inroads in the enterprise, and silos will be refactored, restructured or flat-out broken down.

Configuration management tools like Chef and Puppet will be the glue for the software defined data center. Done properly, this will lower the cost of delivering IT services. “Ghosts in the system” will watch all the components (compute, storage, networking, security, etc.) and adapt to changes in real time to increase utilization, performance, security and quality of service. Monitoring and analytics will be key to realizing this software defined future.
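Those “ghosts in the system” are, at their core, control loops: read the state of the environment through one API and act on it through another. The sketch below is purely illustrative; get_cluster_cpu_utilization() and add_capacity() are hypothetical stand-ins for whatever monitoring and provisioning APIs an organization actually exposes, not Chef or Puppet interfaces.

    # Illustrative control loop: watch a metric, react through an API.
    # Both helper functions are hypothetical placeholders.
    import time

    CPU_HIGH_WATERMARK = 0.80      # scale out above 80% average CPU
    CHECK_INTERVAL_SECONDS = 60

    def get_cluster_cpu_utilization():
        """Placeholder: query a monitoring/analytics API for average CPU (0.0-1.0)."""
        raise NotImplementedError

    def add_capacity(count):
        """Placeholder: ask a provisioning API for `count` more instances."""
        raise NotImplementedError

    def control_loop():
        while True:
            if get_cluster_cpu_utilization() > CPU_HIGH_WATERMARK:
                add_capacity(1)    # adapt to load without a human in the loop
            time.sleep(CHECK_INTERVAL_SECONDS)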

Big Changes in Markets Happen With Very Simple Beginnings

All this amazing innovation comes from two very simple concepts — virtualizing the underlying components and making them accessible through an API.

The IT world might look at the software defined data center and say this is nothing new, that we’ve been doing this since the ’80s. I disagree. What’s changed is our universal thinking about accessibility. Ten years ago, we wouldn’t have blinked if a networking product came out without an API. Today, an API is part of what we consider a 1.0 release. This thinking now pervades every component of the data center. It’s Web 2.0 thinking that shaped cloud computing, and now cloud computing is bleeding into enterprise thinking. We’re no longer constrained by the need for deep, specialized knowledge of the low-level components just to get basic access to this technology.

With well-documented APIs, we have now turned the entire data center into many instruments that can be played by the IT staff (the musicians). I imagine the software defined data center as a Fantasia-like world where Mickey is the IT staff and the brooms are networking, storage, compute and security. The magic is in the coordination, cadence and rhythm of how all the pieces work together. Amazing symphonies of IT will occur in the near future, and this is why the software defined data center is not a trend to overlook. Maybe Whoopi should take a look at this market instead.

Ben Cherian is a serial entrepreneur who loves playing in the intersection of business and technology. He’s currently the Chief Strategy Officer at Midokura, a network virtualization company. Prior to Midokura, he was the GM of Emerging Technologies at DreamHost, where he ran the cloud business unit. Prior to that, Ben ran a cloud-focused managed services company.


Not everyone wants to run their applications on the public cloud. Their reasons can vary widely. Some companies don’t want the crown jewels of their intellectual property leaving the confines of their own premises. Some just like having things run on a server they can see and touch.

But there’s no denying the attraction of services like Amazon Web Services or Joyent or Rackspace, where you can spin up and configure a new virtual machine within minutes of figuring out that you need it. So, many companies seek to approximate the experience they would get from a public cloud provider on their own internal infrastructure.

It turns out that a start-up I had never heard of before this week makes the most widely deployed platform for running these “private clouds,” and it’s not a bad business. Eucalyptus Systems essentially enables the same functionality on your own servers that you would expect from a cloud provider.

Eucalyptus said today that it has raised a $30 million Series C round of venture capital funding led by Institutional Venture Partners. Steve Harrick, general partner at IVP, will join the Eucalyptus board. Existing investors, including Benchmark Capital, BV Capital and New Enterprise Associates, are also in on the round. The funding brings Eucalyptus’ total capital raised to north of $50 million.

The company has an impressive roster of customers: Sony, Intercontinental Hotels, Raytheon, and the athletic-apparel group Puma. There are also several government customers, including the U.S. Food and Drug Administration, NASA, the U.S. Department of Agriculture and the Department of Defense.

In March, Eucalyptus signed a deal with Amazon to allow customers of both to migrate their workloads between the private and public environments. The point here is to give companies the flexibility they need to run their computing workloads in a mixed environment, or move them back and forth as needed. They could also operate them in tandem.

Key to this is a provision of the deal with Amazon that gives Eucalyptus access to Amazon’s APIs. What that means is that you can run processes on your own servers that are fully compatible with Amazon’s Simple Storage Service (S3) or its Elastic Compute Cloud, known as EC2. “We’ve removed all the hurdles that might have been in the way of moving workloads,” Eucalyptus CEO Marten Mickos told me. The company has similar deals in place with Wipro Infotech in India and CETC32 in China.
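To give a rough sense of what that compatibility means in practice, the same S3 client code can be pointed at Amazon or at a private, S3-compatible endpoint simply by swapping the endpoint URL. The sketch below assumes boto3; the private URL is a placeholder and credentials are assumed to come from the environment.

    # Sketch of API compatibility: identical S3 calls, different endpoints.
    # The private endpoint URL is a placeholder for an S3-compatible service;
    # credentials are read from the environment in both cases.
    import boto3

    def make_s3_client(endpoint_url=None):
        return boto3.client("s3", endpoint_url=endpoint_url)

    public_s3 = make_s3_client()                                         # Amazon S3
    private_s3 = make_s3_client("https://objects.internal.example.com")  # placeholder

    for client in (public_s3, private_s3):
        client.put_object(Bucket="backups", Key="db.dump", Body=b"...")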


Clouds

I get press releases every week about some new (or old!) company and their so-called cloud solution. Some folks are clearly abusing the popularity of the “cloud” buzzword, and others are actually doing interesting things with distributed computing, infrastructure- and platform-as-a-service, orchestration, and related technologies. Amazon is the prime mover on IaaS, but OpenStack, CloudStack and Eucalyptus are all making strong plays in that space. VMware’s Cloud Foundry and Red Hat’s OpenShift are pushing open source PaaS, while services like Heroku, Engine Yard and dotCloud (among others) are pushing to be your hosted PaaS solution.

It’s not surprising that so many people are looking to differentiate their cloud solutions, and on balance I think competition is a good thing that eventually benefits end-users. But as things stand today, it strikes me as exceedingly hard to formulate a comprehensive “cloud strategy” given the plethora of options.

If you care strongly about open source, that helps limit your options. VMware’s Cloud Foundry has been open source for quite some time, and recently celebrated its first birthday. Red Hat’s OpenShift is not yet open source, but work is underway to remedy that. Red Hat, obviously, has a long history of successfully open sourcing their work. Red Hat also recently announced that they would be a platinum member of the newly reorganized OpenStack governing board. VMware, on the other hand, is not a company with which I readily associate open source culture or success, and I don’t see a very robust ecosystem coalescing around Cloud Foundry. Hopefully that situation improves.

And there’s also Canonical, the folks behind the Ubuntu Linux distribution. Canonical has made a real effort to advocate for OpenStack, but their actual contributions to OpenStack don’t seem to tell the same story. Rather than focus on directly contributing to IaaS or PaaS offerings, Canonical is busy making helper products like Metal-as-a-Service and their newly announced “Any Web Service over Me” (with the righteous acronym AWESOME) which aims to provide an API abstraction layer to facilitate running workloads on Amazon’s cloud and on an OpenStack cloud.
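AWESOME itself isn’t shown here, but the general shape of an API abstraction layer is easy to sketch with Apache Libcloud, a separate project that exposes one driver interface over EC2, OpenStack and others; the credentials and auth URL below are placeholders.

    # Sketch of the API-abstraction idea, using Apache Libcloud as a stand-in
    # (not Canonical's AWESOME): one list_nodes() call, two different clouds.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Amazon EC2 driver (placeholder credentials).
    ec2_cls = get_driver(Provider.EC2)
    ec2_driver = ec2_cls("ACCESS_KEY", "SECRET_KEY", region="us-east-1")

    # OpenStack driver (placeholder credentials and auth URL).
    os_cls = get_driver(Provider.OPENSTACK)
    os_driver = os_cls(
        "user", "password",
        ex_force_auth_url="https://keystone.internal.example.com:5000",
        ex_force_auth_version="3.x_password",
        ex_tenant_name="demo",
    )

    # The calling code does not care which cloud is behind the driver.
    for driver in (ec2_driver, os_driver):
        for node in driver.list_nodes():
            print(node.name, node.state)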

The end result of all of this is a lot of ambiguity for customers and companies looking to deploy cloud solutions. If you want a private cloud, it doesn’t seem to me that you can choose a technology without first deciding whether or not you will eventually need to use public cloud resources. If so, your choice of private cloud technology largely hangs on the long-term viability of your intended public cloud target. If you think Amazon is where it’s at for public cloud, then it seems that Eucalyptus is what you build your private cloud on (unless you want to fiddle with even more technology and implement Canonical’s AWESOME). If you think Rackspace is where it’s at, then OpenStack is the more appealing choice for you. But what if you’re wrong about your choice of public cloud provider?

As such, I’m curious to learn what you — the reader — are currently doing. Have you made a technology decision? Did you go all in, or are you leaving room to shift to a different provider if need be? Did you go IaaS or PaaS? Are you a new company, or are you an established organization moving existing workloads to new platforms? Finally, I’m particularly interested to hear from folks in regulated industries — banking, health care, insurance, etc. — where your decision as to where to run your applications may be predicated on legal issues.
