VMware

Original author: 
Ben Cherian

Image copyright isak55

In every emerging technology market, hype seems to wax and wane. One day a new technology is red hot; the next day it’s old hat. Sometimes the hype pans out and concepts such as “e-commerce” become a normal way to shop. Other times the hype doesn’t meet expectations, and consumers don’t buy into paying for e-commerce using Beenz or Flooz. Apparently, Whoopi Goldberg and a slew of big-name VCs ended up making a bad bet on the e-currency market in the late 1990s. Whoopi was paid in cash and shares of Flooz. At least she wasn’t paid in Flooz alone! When investing, some bets are great and others are awful, but often one only knows the awful ones in retrospect.

What Does “Software Defined” Mean?

In the infrastructure space, there is a growing trend of companies calling themselves “software defined (x).” Often, it’s a vendor repositioning a decades-old product. On occasion, though, it’s smart, nimble startups and wise incumbents seeing a new way of delivering infrastructure. Either way, the term “software defined” is here to stay, and there is real meaning and value behind it if you look past the hype.

There are three software defined terms that seem to be bandied about quite often: software defined networking, software defined storage, and the software defined data center. I suspect new terms will soon follow, like software defined security and software defined management. What all these “software defined” concepts really boil down to is virtualization of the underlying component plus accessibility through some documented API to provision, operate, and manage that component.
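
To make that concrete, here is a minimal sketch of what provisioning through such an API looks like. The endpoint paths, payload fields and token below are invented for illustration and don’t correspond to any particular vendor’s API.

    # A minimal sketch of "software defined" in practice: network, storage and
    # compute are all created through a documented API rather than by hand.
    # Every endpoint and field name here is hypothetical.
    import requests

    API = "https://sddc.example.com/v1"
    HEADERS = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}

    # Provision a virtual network (software defined networking).
    net = requests.post(f"{API}/networks",
                        json={"name": "web-tier", "cidr": "10.0.10.0/24"},
                        headers=HEADERS).json()

    # Provision a block storage volume (software defined storage).
    vol = requests.post(f"{API}/volumes",
                        json={"size_gb": 100, "type": "ssd"},
                        headers=HEADERS).json()

    # Provision a VM wired to both, with no console session and no cables.
    vm = requests.post(f"{API}/servers",
                       json={"name": "web-01", "image": "ubuntu-12.04",
                             "network_id": net["id"], "volume_id": vol["id"]},
                       headers=HEADERS).json()

    print("provisioned", vm["id"])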

This trend started once Amazon Web Services came onto the scene and convinced the world that the data center could be abstracted into much smaller units that could be treated as disposable pieces of technology and, in turn, priced as a utility. Vendors watched Amazon closely and saw how this could apply to the data center of the future.

Since compute was already virtualized by VMware and Xen, projects such as Eucalyptus were launched with the intention of being a “cloud controller” that would manage the virtualized servers and provision virtual machines (VMs). Virtualized storage (a.k.a. software defined storage) was a core part of the offering, and projects like OpenStack Swift and Ceph showed the world that storage could be virtualized and accessed programmatically. Today, software defined networking is the new hotness, and companies like Midokura, VMware/Nicira, Big Switch and Plexxi are changing the way networks are designed and automated.

The Software Defined Data Center

The software defined data center encompasses all the concepts of software defined networking, software defined storage, cloud computing, automation, management and security. Every low-level infrastructure component in a data center can be provisioned, operated, and managed through an API. Not only are there tenant-facing APIs, but also operator-facing APIs that help operators automate tasks that were previously manual.

An infrastructure superhero might think, “With great accessibility comes great power.” The data center of the future will be the software defined data center, where every component can be accessed and manipulated through an API. The proliferation of APIs will change the way people work. Programmers who have never formatted a hard drive will now be able to provision terabytes of storage. A web application developer will be able to set up complex load balancing rules without ever logging into a router. IT organizations will start automating the most mundane tasks. Eventually, beautiful applications will be created that mimic the organization’s processes and workflows and automate infrastructure management.
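
As a rough illustration of how a previously manual chore turns into a few lines of code, here is a sketch of an operator snapshotting every production volume through such an API; the endpoints and fields are again assumptions, not a real product’s interface.

    # Snapshot every volume tagged "production" -- the kind of mundane task that
    # becomes a nightly cron job once storage is behind an API. Hypothetical API.
    import requests

    API = "https://sddc.example.com/v1"
    HEADERS = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}

    volumes = requests.get(f"{API}/volumes", params={"tag": "production"},
                           headers=HEADERS).json()
    for vol in volumes:
        requests.post(f"{API}/volumes/{vol['id']}/snapshots",
                      json={"name": f"nightly-{vol['name']}"},
                      headers=HEADERS)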

IT Organizations Will Respond and Adapt Accordingly

Of course, this means the IT organization will have to adapt. The new base level of knowledge in IT will eventually include some programming ability. Scripting languages like Ruby and Python will soar even higher in popularity. Network administrators will become programmers. System administrators will become programmers. During this time, DevOps (development + operations) will make serious inroads in the enterprise, and silos will be refactored, restructured or flat-out broken down.

Configuration management tools like Chef and Puppet will be the glue for the software defined data center. Done properly, this will lower the cost of delivering IT services. “Ghosts in the system” will watch all the components (compute, storage, networking, security, etc.) and adapt to changes in real time to increase utilization, performance, security and quality of service. Monitoring and analytics will be key to realizing this software defined future.
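
One way to picture those “ghosts in the system” is as a small control loop that reads a metric and adjusts capacity through the same APIs used to provision it. The sketch below assumes hypothetical endpoints, metric names and thresholds:

    # A toy autoscaler: watch pool utilization and scale out or in accordingly.
    # Endpoints, metric names and thresholds are assumptions for the example.
    import time
    import requests

    API = "https://sddc.example.com/v1"
    HEADERS = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}

    def scale_web_tier():
        # Read an aggregate metric for the pool (assumed to exist in this API).
        stats = requests.get(f"{API}/pools/web-pool/metrics", headers=HEADERS).json()
        if stats["cpu_utilization"] > 0.80:
            # Scale out: add a member built from the same base image.
            requests.post(f"{API}/pools/web-pool/members",
                          json={"image": "web-base"}, headers=HEADERS)
        elif stats["cpu_utilization"] < 0.20 and stats["member_count"] > 2:
            # Scale in: retire the oldest member, keeping a minimum of two.
            requests.delete(f"{API}/pools/web-pool/members/oldest", headers=HEADERS)

    if __name__ == "__main__":
        while True:
            scale_web_tier()
            time.sleep(60)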

Big Changes in Markets Happen With Very Simple Beginnings

All this amazing innovation comes from two very simple concepts: virtualizing the underlying components and making them accessible through an API.

The IT world might look at the software defined data center and say this is nothing new, that we’ve been doing this since the ’80s. I disagree. What’s changed is our universal thinking about accessibility. Ten years ago, we wouldn’t have blinked if a networking product came out without an API. Today, an API is part of what we consider a 1.0 release. This thinking now pervades every component of the data center. It’s Web 2.0 thinking that shaped cloud computing, and now cloud computing is bleeding into enterprise thinking. We’re no longer constrained by the need for deep, specialized knowledge of the low-level components just to get basic access to this technology.

With well-documented APIs, we have now turned the entire data center into many instruments that can be played by the IT staff (the musicians). I imagine the software defined data center as a Fantasia-like world where Mickey is the IT staff and the brooms are networking, storage, compute and security. The magic is in the coordination, cadence and rhythm of how all the pieces work together. Amazing symphonies of IT will occur in the near future, and this is the reason the software defined data center is not a trend to overlook. Maybe Whoopi should take a look at this market instead.

Ben Cherian is a serial entrepreneur who loves playing in the intersection of business and technology. He’s currently the Chief Strategy Officer at Midokura, a network virtualization company. Prior to Midokura, he was the GM of Emerging Technologies at DreamHost, where he ran the cloud business unit. Prior to that, Ben ran a cloud-focused managed services company.


Fifteen years ago, you weren't a participant in the digital age unless you had your own homepage. Even in the late 1990s, services abounded to make personal pages easy to build and deploy—the most famous is the now-defunct GeoCities, but there were many others (remember Angelfire and Tripod?). These were the days before the "social" Web, before MySpace and Facebook. Instant messaging was in its infancy and creating an online presence required no small familiarity with HTML (though automated Web design programs did exist).

Things are certainly different now, but there's still a tremendous amount of value in controlling an actual honest-to-God website rather than relying solely on the social Web to provide your online presence. The flexibility of being able to set up and run anything at all, be it a wiki or a blog with a tipjar or a photo hosting site, is awesome. Further, the freedom to tinker with both the operating system and the Web server side of the system is an excellent learning opportunity.

The author's closet. Servers tend to multiply, like rabbits. (Photo: Lee Hutchinson)

It's super-easy to open an account at a Web hosting company and start fiddling around there—two excellent Ars reader-recommended Web hosts are A Small Orange and Lithium Hosting—but where's the fun in that? If you want to set up something to learn how it works, the journey is just as important as the destination. Having a ready-made Web or application server cuts out half of the work and thus half of the journey. In this guide, we're going to walk you through everything you need to set up your own Web server, from operating system choice to specific configuration options.




Leaders in Big Data

Google Tech Talk, October 22, 2012

ABSTRACT: Discussing the evolution, current opportunities and future trends in big data. Presented by Google and the Fung Institute at UC Berkeley.

SPEAKERS:
Moderator: Hal Varian, an economist specializing in microeconomics and information economics. He is the Chief Economist at Google and holds the title of emeritus professor at the University of California, Berkeley, where he was founding dean of the School of Information.
Panelists:
Theo Vassilakis, Principal Engineer/Engineering Director at Google
Gustav Horn, Senior Global Consulting Engineer, Hadoop at NetApp
Charles Fan, Senior Vice President at VMware in strategic R&D

From: GoogleTechTalks | Views: 4,980 | Ratings: 77 | Time: 58:47 | More in: Science & Technology


AT&T's Toggle lets users switch between the work and personal parts of their smartphones.

AT&T

AT&T says it has the answer for corporations that want to let employees access work applications from personal phones without becoming a security threat. A new virtualization-style technology that works on both Android and iPhones creates a work container that is isolated from an employee's personal applications and data, letting IT shops manage just the portion of the phone related to work.

This isn't a new idea. ARM is talking about adding virtualization into the smartphone chip layer. VMware has been promising to virtualize smartphones for some time. What is notable about AT&T's technology is its flexibility. VMware's technology hasn't hit end users yet, largely because it must be pre-installed by phone manufacturers, limiting it to carriers and device makers that want to install it on their hardware.

AT&T's "Toggle" technology, meanwhile, works with any Android device from versions 2.2 to 3.x, as well as iPhones, and can be installed after a user buys it. Moreover, the technology is somewhat separate from AT&T's cellular division and can be used with any carrier.



Clouds

I get press releases every week about some new (or old!) company and their so-called cloud solution. Some folks are clearly abusing the popularity of the “cloud” buzzword, and others are actually doing interesting things with distributed computing, infrastructure- and platform-as-a-service, orchestration, and related technologies. Amazon is the prime mover on IaaS, but OpenStack, CloudStack and Eucalyptus are all making strong plays in that space. VMware’s Cloud Foundry and Red Hat’s OpenShift are pushing open source PaaS, while services like Heroku, Engine Yard and dotCloud (among others) are pushing to be your hosted PaaS solution.

It’s not surprising that so many people are looking to differentiate their cloud solutions, and on balance I think competition is a good thing that eventually benefits end users. But as things stand today, it strikes me as exceedingly hard to formulate a comprehensive “cloud strategy” given the plethora of options.

If you care strongly about open source, that helps limit your options. VMware’s Cloud Foundry has been open source for quite some time, and recently celebrated its first birthday. Red Hat’s OpenShift is not yet open source, but work is underway to remedy that. Red Hat, obviously, has a long history of successfully open sourcing their work. Red Hat also recently announced that they would be a platinum member of the newly reorganized OpenStack governing board. VMware, on the other hand, is not a company with which I readily associate open source culture or success; and I don’t see a very robust ecosystem coalescing around Cloud Foundry. Hopefully that situation improves.

And there’s also Canonical, the folks behind the Ubuntu Linux distribution. Canonical has made a real effort to advocate for OpenStack, but their actual contributions to OpenStack don’t seem to tell the same story. Rather than focus on directly contributing to IaaS or PaaS offerings, Canonical is busy making helper products like Metal-as-a-Service and their newly announced “Any Web Service over Me” (with the righteous acronym AWESOME) which aims to provide an API abstraction layer to facilitate running workloads on Amazon’s cloud and on an OpenStack cloud.
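
To show what an abstraction layer of this sort implies for application code, here is a rough sketch with one interface and a backend per cloud; the class and method names are invented, and a real layer would wrap each provider’s SDK instead of returning stub values.

    # One interface, two backends: application code is written once and handed
    # whichever backend matches the cloud underneath. Names are hypothetical.
    from abc import ABC, abstractmethod

    class CloudBackend(ABC):
        @abstractmethod
        def launch_instance(self, image: str, size: str) -> str:
            """Start a VM and return its provider-specific ID."""

    class AmazonBackend(CloudBackend):
        def launch_instance(self, image, size):
            # A real implementation would call EC2 here (e.g. via boto).
            return "i-hypothetical"

    class OpenStackBackend(CloudBackend):
        def launch_instance(self, image, size):
            # A real implementation would call the OpenStack compute API here.
            return "uuid-hypothetical"

    def deploy(backend: CloudBackend) -> str:
        # Application code stays identical whichever backend it is handed.
        return backend.launch_instance(image="ubuntu-12.04", size="small")

    print(deploy(AmazonBackend()))
    print(deploy(OpenStackBackend()))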

The end result of all of this is a lot of ambiguity for customers and companies looking to deploy cloud solutions. If you want a private cloud, it doesn’t seem to me that you can choose a technology without first deciding whether or not you will eventually need to use public cloud resources. If so, your choice of private cloud technology demonstrably hangs on the long-term viability of your intended public cloud target. If you think Amazon is where it’s at for public cloud, then it seems that Eucalyptus is what you build your private cloud on (unless you want to fiddle with even more technology and implement Canonical’s AWESOME). If you think Rackspace is where it’s at, then OpenStack is a more appealing choice for you. But what if you’re wrong about your choice of public cloud provider?

As such, I’m curious to learn what you — the reader — are currently doing. Have you made a technology decision? Did you go all in, or are you leaving room to shift to a different provider if need be? Did you go IaaS or PaaS? Are you a new company, or are you an established organization moving existing workloads to new platforms? Finally, I’m particularly interested to hear from folks in regulated industries — banking, health care, insurance, etc — where your decision as to where to run your applications may be predicated on legal issues.


judgecorp writes "The OpenStack open source cloud project has removed Hyper-V from its infrastructure as a service (IaaS) framework, saying Microsoft's support for its hypervisor technology is 'broken.' This will embarrass Microsoft, as major partners such as Dell and HP support OpenStack, along with service providers such as Internap." Adds reader alphadogg, this "means the code will be removed when the next version of OpenStack, called Essex, is released in the second quarter."



Read more of this story at Slashdot.


New submitter Lashat writes "According to Ars Technica, 'A new survey seems to show that VMware's iron grip on the enterprise virtualization market is loosening, with 38 percent of businesses planning to switch vendors within the next year due to licensing models and the robustness of competing hypervisors.' What do IT-savvy Slashdotters have to say about moving away from one of the more stable and feature-rich VM architectures available?"

Read more of this story at Slashdot.



GTAC 2011: Lightning Talks I

6th Annual Google Test Automation Conference 2011 (GTAC 2011), "Cloudy With A Chance Of Tests"
Computer History Museum, Mountain View, CA, USA, October 26-27, 2011

Lightning Talks I: a series of 5-minute technical talks about cloud testing. More information about the talks and speakers: www.gtac.biz

Testing Cloud Failover - Roussi Roussev, VMware
Behind Salesforce Cloud: Test Automation Cloud and Yoda - Chris Chen, Salesforce
ABFT in the Cloud - Timothy Crooks, CygNet
ScriptCover: Javascript Coverage Analysis Tool - Ekaterina Kamenskaya, Google
Cloud Sourcing - Realistic Performance, Load and Stress Testing - Sai Chintala, AppLabs

From: GoogleTechTalks | Views: 695 | Ratings: 10 | Time: 28:36 | More in: Science & Technology
