

Original author: Ben Cherian



In every emerging technology market, hype seems to wax and wane. One day a new technology is red hot; the next day it's old hat. Sometimes the hype pans out and concepts such as "e-commerce" become a normal way to shop. Other times the hype doesn't meet expectations, and consumers don't buy into paying for e-commerce with Beenz or Flooz. Apparently, Whoopi Goldberg and a slew of big-name VCs ended up making a bad bet on the e-currency market in the late 1990s. Whoopi was paid in cash and shares of Flooz; at least she wasn't paid in Flooz alone! When investing, some bets are great and others are awful, but often you only know which were awful in retrospect.

What Does “Software Defined” Mean?

In the infrastructure space, there is a growing trend of companies calling themselves "software defined (x)." Often it's a vendor repositioning a decades-old product; on occasion, though, it's smart, nimble startups and wise incumbents seeing a new way of delivering infrastructure. Either way, the term "software defined" is here to stay, and there is real meaning and value behind it if you look past the hype.

There are three software defined terms that seem to be bandied around quite often: software defined networking, software defined storage, and the software defined data center. I suspect new terms will soon follow, such as software defined security and software defined management. What all these "software defined" concepts really boil down to is virtualization of the underlying component plus accessibility through a documented API to provision, operate, and manage that component.
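
To make that definition concrete, here is a minimal sketch of what "accessible through a documented API" tends to look like in practice. The endpoint, payload fields, and token below are hypothetical placeholders rather than any particular vendor's API.

```python
# Hypothetical "software defined" workflow: the network is virtualized, and a
# documented HTTP API is the only interface needed to provision and inspect it.
import requests

API = "https://sdn.example.com/api/v1"          # placeholder endpoint
HEADERS = {"X-Auth-Token": "REPLACE_ME"}        # placeholder credential

def provision_network(name, cidr):
    """Provision a virtual network through the API."""
    resp = requests.post(f"{API}/networks",
                         json={"name": name, "cidr": cidr},
                         headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["id"]

def get_network(network_id):
    """Operate and inspect the same component through the same API."""
    resp = requests.get(f"{API}/networks/{network_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    net_id = provision_network("web-tier", "10.0.1.0/24")
    print(get_network(net_id))
```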

This trend started once Amazon Web Services came onto the scene and convinced the world that the data center could be abstracted into much smaller units that could be treated as disposable pieces of technology and, in turn, priced as a utility. Vendors watched Amazon closely and saw how this could apply to the data center of the future.

Since compute was already virtualized by VMware and Xen, projects such as Eucalyptus were launched with the intention of being a "cloud controller" that would manage the virtualized servers and provision virtual machines (VMs). Virtualized storage (a.k.a. software defined storage) was a core part of the offering, and projects like OpenStack Swift and Ceph showed the world that storage could be virtualized and accessed programmatically. Today, software defined networking is the new hotness, and companies like Midokura, VMware/Nicira, Big Switch, and Plexxi are changing the way networks are designed and automated.

The Software Defined Data Center

The software defined data center encompasses all the concepts of software defined networking, software defined storage, cloud computing, automation, management, and security. Every low-level infrastructure component in a data center can be provisioned, operated, and managed through an API. Not only are there tenant-facing APIs, but also operator-facing APIs that help the operator automate tasks that were previously manual.
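
As a rough illustration of that tenant/operator split, the sketch below assumes two invented endpoints on a hypothetical cloud: one a tenant would call to provision capacity inside their own project, and one an operator would call to drain a hypervisor before maintenance. Real platforms draw this line with their own APIs and policy layers.

```python
# Hypothetical clients for the two audiences of a software defined data center.
import requests

class TenantClient:
    """Tenant-facing: provision resources inside your own project."""
    def __init__(self, base_url, token):
        self.base_url = base_url
        self.headers = {"X-Auth-Token": token}

    def boot_vm(self, name, flavor, image):
        r = requests.post(f"{self.base_url}/servers",
                          json={"name": name, "flavor": flavor, "image": image},
                          headers=self.headers)
        r.raise_for_status()
        return r.json()["id"]

class OperatorClient:
    """Operator-facing: automate tasks that used to be manual."""
    def __init__(self, base_url, token):
        self.base_url = base_url
        self.headers = {"X-Auth-Token": token}

    def drain_hypervisor(self, host):
        """Ask the platform to live-migrate workloads off a host."""
        r = requests.post(f"{self.base_url}/hosts/{host}/drain",
                          headers=self.headers)
        r.raise_for_status()
        return r.json()
```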

An infrastructure superhero might think, "With great accessibility comes great power." The data center of the future will be the software defined data center, where every component can be accessed and manipulated through an API. The proliferation of APIs will change the way people work. Programmers who have never formatted a hard drive will be able to provision terabytes of storage. A web application developer will be able to set up complex load balancing rules without ever logging into a router. IT organizations will start automating their most mundane tasks. Eventually, beautiful applications will be created that mimic an organization's processes and workflows and automate infrastructure management.
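
For a concrete flavor of "terabytes of storage and load balancing rules without touching a router," here is a sketch using AWS's boto3 SDK, which is just one example of the kind of API the article anticipates. The ARNs are placeholders, and the region, sizes, and paths are arbitrary.

```python
# Sketch: provisioning block storage and a load balancing rule purely via API
# calls (boto3 against AWS). The ARNs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
volume = ec2.create_volume(AvailabilityZone="us-east-1a",
                           Size=2048,              # 2 TiB, no disk formatting involved
                           VolumeType="gp3")
print("created volume:", volume["VolumeId"])

elbv2 = boto3.client("elbv2", region_name="us-east-1")
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...",      # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:..."}],  # placeholder
)
print("routing rule created without logging into a router")
```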

IT Organizations Will Respond and Adapt Accordingly

Of course, this means the IT organization will have to adapt. The new baseline of knowledge in IT will eventually include some sort of programming skill. Scripting languages like Ruby and Python will soar even higher in popularity. Network administrators will become programmers. System administrators will become programmers. During this time, DevOps (development + operations) will make serious inroads in the enterprise, and silos will be refactored, restructured, or flat-out broken down.

Configuration management tools like Chef and Puppet will be the glue for the software defined data center. If done properly, the cost of delivering IT services will come down. "Ghosts in the system" will watch all the components (compute, storage, networking, security, etc.) and adapt to changes in real time to increase utilization, performance, security, and quality of service. Monitoring and analytics will be key to realizing this software defined future.
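
The sketch below is a toy illustration in Python (not Chef or Puppet themselves) of the declarative, idempotent model those tools are built on: you describe the desired state, and the tool only acts when reality has drifted from it.

```python
# Toy convergence loop: declare desired state, change the system only on drift.
import os

def ensure_file(path, content):
    """Converge a single 'file' resource toward its declared state."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current == content:
        return "unchanged"              # already converged, do nothing
    with open(path, "w") as f:          # otherwise bring it into line
        f.write(content)
    return "updated"

DESIRED_STATE = [
    ("/tmp/motd", "Managed by the SDDC demo. Do not edit by hand.\n"),
    ("/tmp/ntp.conf", "server 0.pool.ntp.org iburst\n"),
]

if __name__ == "__main__":
    for path, content in DESIRED_STATE:
        print(path, ensure_file(path, content))
```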

Big Changes in Markets Happen With Very Simple Beginnings

All this amazing innovation comes from two very simple concepts: virtualizing the underlying components and making them accessible through an API.

The IT world might look at the software defined data center and say this is nothing new, that we've been doing this since the '80s. I disagree. What has changed is our universal thinking about accessibility. Ten years ago, we wouldn't have blinked if a networking product came out without an API. Today, an API is part of what we consider a 1.0 release. This thinking now pervades every component of the data center. Web 2.0 thinking shaped cloud computing, and now cloud computing is bleeding into enterprise thinking. We're no longer constrained by the need for deep, specialized knowledge of the low-level components just to get basic access to this technology.

With well-documented APIs, we have turned the entire data center into a set of instruments that can be played by the IT staff (the musicians). I imagine the software defined data center as a Fantasia-like world where Mickey is the IT staff and the brooms are networking, storage, compute, and security. The magic is in the coordination, cadence, and rhythm of how all the pieces work together. Amazing symphonies of IT will be performed in the near future, and this is why the software defined data center is not a trend to overlook. Maybe Whoopi should take a look at this market instead.

Ben Cherian is a serial entrepreneur who loves playing in the intersection of business and technology. He’s currently the Chief Strategy Officer at Midokura, a network virtualization company. Prior to Midokura, he was the GM of Emerging Technologies at DreamHost, where he ran the cloud business unit. Prior to that, Ben ran a cloud-focused managed services company.

Original author: Soulskill

New submitter einar2 writes: "German hosting provider Hetzner has informed customers that login data for its administration interface may have been compromised (Google translation of the German original). At the end of last week, a backdoor was found on a monitoring server. Closer examination led to the discovery of a rootkit residing in memory. The rootkit does not touch files on disk; instead it patches running processes in memory, injecting malicious code directly into them. According to Hetzner, the attack is surprisingly sophisticated."
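
Hetzner's description, code injected into running processes rather than written to disk, suggests one coarse check a Linux operator can run: look for anonymous memory mappings that are both writable and executable. This is only a heuristic sketch, not a rootkit detector; JIT runtimes legitimately create such mappings, and a sophisticated rootkit can avoid leaving this trace.

```python
# Heuristic sketch: flag processes with writable+executable anonymous mappings,
# a common side effect of code injected into a running process.
# Linux-only; run as root to see all processes.
import glob

def suspicious_mappings():
    findings = []
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        pid = maps_path.split("/")[2]
        try:
            with open(maps_path) as f:
                for line in f:
                    fields = line.split()
                    perms = fields[1]
                    backing = fields[5] if len(fields) > 5 else "[anon]"
                    # writable + executable and not backed by a file on disk
                    if "w" in perms and "x" in perms and not backing.startswith("/"):
                        findings.append((pid, perms, backing))
        except (FileNotFoundError, PermissionError):
            continue  # process exited or we lack privileges
    return findings

if __name__ == "__main__":
    for pid, perms, backing in suspicious_mappings():
        print(f"pid {pid}: {perms} {backing}")
```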



Original author: Jon Brodkin



The stable release of Ubuntu 13.04 became available for download today, with Canonical promising performance and graphical improvements to help prepare the operating system for convergence across PCs, phones, and tablets.

"Performance on lightweight systems was a core focus for this cycle, as a prelude to Ubuntu’s release on a range of mobile form factors," Canonical said in an announcement today. "As a result 13.04 delivers significantly faster response times in casual use, and a reduced memory footprint that benefits all users."

Named "Raring Ringtail,"—the prelude to Saucy Salamander—Ubuntu 13.04 is the midway point in the OS' two-year development cycle. Ubuntu 12.04, the more stable, Long Term Support edition that is supported for five years, was released one year ago. Security updates are only promised for 9 months for interim releases like 13.04. Support windows for interim releases were recently cut from 18 months to 9 months to reduce the number of versions Ubuntu developers must support and let them focus on bigger and better things.


Original author: Jon Brodkin

The Linux Foundation has taken control of the open source Xen virtualization platform and enlisted a dozen industry giants in a quest to be the leading software for building cloud networks.

The 10-year-old Xen hypervisor was formerly a community project sponsored by Citrix, much as the Fedora operating system is a community project sponsored by Red Hat. Citrix was looking to place Xen into a vendor-neutral organization, however, and the Linux Foundation move was announced today. The list of companies that will "contribute to and guide the Xen Project" is impressive, including Amazon Web Services, AMD, Bromium, Calxeda, CA Technologies, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon.

Amazon is perhaps the most significant name on that list in regard to Xen. The Amazon Elastic Compute Cloud is likely the most widely used public infrastructure-as-a-service (IaaS) cloud, and it is built on Xen virtualization. Rackspace's public cloud also uses Xen. Linux Foundation Executive Director Jim Zemlin noted in his blog that Xen "is being deployed in public IaaS environments by some of the world's largest companies."
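
As a small practical aside on that point, a Linux guest can usually tell whether it is running under Xen. The sketch below assumes a guest where /sys/hypervisor is populated or systemd-detect-virt is installed; neither check is exhaustive.

```python
# Sketch: identify the hypervisor from inside a Linux guest.
import subprocess

def hypervisor_type():
    try:
        with open("/sys/hypervisor/type") as f:
            return f.read().strip()                  # "xen" on Xen guests
    except FileNotFoundError:
        pass
    try:
        out = subprocess.run(["systemd-detect-virt"],
                             capture_output=True, text=True)
        return out.stdout.strip() or "unknown"       # e.g. "xen", "kvm", "none"
    except FileNotFoundError:
        return "unknown"

if __name__ == "__main__":
    print("hypervisor:", hypervisor_type())
```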




Leaders in Big Data

Google Tech Talk, October 22, 2012.

ABSTRACT: Discussing the evolution, current opportunities, and future trends in big data. Presented by Google and the Fung Institute at UC Berkeley.

SPEAKERS:
Moderator: Hal Varian, an economist specializing in microeconomics and information economics. He is the Chief Economist at Google and holds the title of emeritus professor at the University of California, Berkeley, where he was founding dean of the School of Information.
Panelists:
Theo Vassilakis, Principal Engineer/Engineering Director at Google
Gustav Horn, Senior Global Consulting Engineer, Hadoop, at NetApp
Charles Fan, Senior Vice President at VMware in strategic R&D
From: GoogleTechTalks. Running time: 58:47.


Though virtual machines have become indispensable in the server room over the last few years, desktop virtualization has been less successful. One of the reasons has been performance, and specifically graphics performance—modern virtualization products are generally pretty good at dynamically allocating CPU power, RAM, and drive space as clients need them, but graphics performance just hasn't been as good as it is on an actual desktop.

NVIDIA wants to solve this problem with its VGX virtualization platform, which it unveiled at its GPU Technology Conference in May. As pitched, the technology will allow virtual machines to use a graphics card installed in a server to accelerate applications, games, and video. Through NVIDIA's VGX Hypervisor, compatible virtualization software (primarily from Citrix, though Microsoft's RemoteFX is also partially supported) can use the GPU directly, allowing thin clients, tablets, and other devices to more closely replicate the experience of using actual desktop hardware.

NVIDIA's VGX K1 is designed to bring basic graphics acceleration to a relatively large number of users. (Image: NVIDIA)

When last we heard about the hardware that drives this technology, NVIDIA was talking up a board with four GPUs based on its Kepler architecture. That card, now known as the NVIDIA VGX K1, is built to provide basic 3D and video acceleration to a large number of users—up to 100, according to NVIDIA's marketing materials. Each of this card's four GPUs uses 192 of NVIDIA's graphics cores and 4GB of DDR3 RAM (for a total of 768 cores and 16GB of memory), and has a reasonably modest TDP of 150 watts—for reference, NVIDIA's high-end GTX 680 desktop graphics card has a TDP of 195W, and the dual-GPU version (the GTX 690) steps this up to 300W.



An image displayed on a computer after it was successfully commandeered by Pinkie Pie during the first Pwnium competition in March.

Dan Goodin

A hacker who goes by "Pinkie Pie" has once again subverted the security of Google's Chrome browser, a feat that fetched him a $60,000 prize and resulted in a security update to fix underlying vulnerabilities.

Ars readers may recall Pinkie Pie from earlier this year, when he pierced Chrome's vaunted security defenses at the first installment of Pwnium, a Google-sponsored contest that offered $1 million in prizes to people who successfully hacked the browser. At the time a little-known, 19-year-old reverse engineer, Pinkie Pie stitched together at least six different bug exploits to bypass an elaborate defense perimeter designed by an army of some of the best software engineers in the world.

At the second installment of Pwnium, which wrapped up on Tuesday at the Hack in the Box 2012 security conference in Kuala Lumpur, Pinkie Pie did it again. This time, his attack exploited two vulnerabilities. The first, against Scalable Vector Graphics functions in Chrome's WebKit browser engine, allowed him to compromise the renderer process, according to a synopsis provided by Google software engineer Chris Evans.

