PC world

The sands are shifting under tech industry heavyweights like Facebook and Google.

Users are spending more time with their smartphones and tablets and less time on their desktops and laptops.

This transition is happening so fast that the business models which support the industry are struggling to catch up. The evidence can be seen in Google’s Q3 earnings, where lower mobile ad rates deflated overall CPCs by 15 percent.

At the heart of the problem is a fundamental change in how we use our computing devices. The experience most users have when they are at their desk on their laptop is typically far more active than when they are using their tablet on the couch or their smartphone on the bus. Users are far more likely to plan a vacation or do their holiday shopping on their desktop or laptop than on their smartphone or tablet. As a result, advertisers and merchants have been reluctant to expand their marketing budgets to reach these new post-PC users.

In just a few years, a large percentage of consumers may no longer even own a traditional PC or laptop. This simple fact is sounding the general alarm inside ad-driven businesses like Google, Facebook and Twitter. While this is a frightening prospect for many industry heavyweights, it also represents an enormous opportunity for the companies who manage to crack the code on post-PC advertising.

So, what's the solution?

How do we bring the revenue potential for post-PC apps to parity with their cousins on the desktop and laptop? The first step is to understand the nature of the problem. Today, when users want to research products or make a transaction, they are more likely to put down their smartphone or tablet and open their laptop.

The reason for this is simple: performing detailed work on mobile and touch devices can be cumbersome. Not only is the screen real estate limited, but simple tasks like copy and paste, keyboard typing, app switching and web browsing are more laborious on touch devices than on desktops or laptops with a keyboard and mouse. As a result, users are habitually more passive and less interactive when using their post-PC devices.

We need better front-end apps and better back-end software to facilitate streamlined interaction on post-PC devices. The good news is that this is entirely possible, and it can be achieved by combining machine intelligence and predictive analytics with the abundance of contextual data available from post-PC devices.

Imagine you are having a conversation on your smartphone and your friend suggests that you watch the new James Bond movie tonight. Today, if you are on the go, you would probably wait until you reached your home or office before you got online and bought tickets. A couple of years from now, I suspect that your phone, having understood your conversation and knowing your location, will automatically give you the option to purchase tickets at a local theater in one or two taps as soon as your call ends.

While the engineering required to realize an example like this is not trivial, it is most certainly achievable through clever application of technologies available today.

Specifically, applications will need to do three things. First, they must continuously analyze and better understand a variety of input data signals such as location, audio, and online activity streams. Second, they will need better models in order to glean insights and make targeted recommendations based on this abundance of contextual data. Last, they will need to perform proactive, real-time search and data gathering behind the scenes to intelligently narrow all the available options down to just the few we need.
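
To make this concrete, here is a minimal sketch of such a pipeline, built around the movie-ticket scenario above. Every class, signal name, and helper in it is hypothetical, invented purely for illustration; it is not drawn from any real product or API.

    # Hypothetical contextual-recommendation pipeline (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Showtime:
        theater: str
        start: str                      # e.g. "19:30"

    @dataclass
    class Context:
        location: tuple                 # latest GPS fix (lat, lon)
        transcript: str                 # speech-to-text of a recent call
        activity: list                  # recent app / web actions

    def extract_intent(ctx):
        """Steps 1-2: analyze raw signals and infer a likely intent."""
        text = ctx.transcript.lower()
        if "movie" in text and "tonight" in text:
            return "buy_movie_tickets"
        return None

    def find_showtimes(location, radius_km=5):
        """Stand-in for the proactive, real-time search of step 3."""
        return [Showtime("Downtown 6", "21:15"), Showtime("Riverside Cinema", "19:30")]

    def recommend(ctx):
        """Narrow all the available options down to the few the user needs."""
        if extract_intent(ctx) != "buy_movie_tickets":
            return []
        return sorted(find_showtimes(ctx.location), key=lambda s: s.start)[:3]

    ctx = Context((37.78, -122.42), "Want to catch the new Bond movie tonight?", ["call_ended"])
    print(recommend(ctx))               # -> a couple of nearby showtimes, earliest first

The real engineering lies in replacing the keyword check with learned models and the stub search with live data sources, but the shape of the pipeline – signals in, intent out, a handful of ranked options surfaced – stays the same.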

With capabilities like these, it is not hard to imagine how the monetization potential of post-PC applications can far surpass the revenue models supporting companies like Google and Facebook today. Given the coming explosion in the number and variety of computing devices in our lives, this represents a huge opportunity for the organizations that can crack the code on this new generation of intelligent applications.

Editor’s note: Aaron Levie is CEO of Box. Follow him on Twitter @levie.

In 1997, Larry Ellison had a vision for a new paradigm of computing which he called the Network Computer (NC). The idea was simple: a group of partners would build devices and services that leveraged the power of the Internet to compete against the growing Windows monopoly.

Ellison believed that the computer in the client/server era had evolved into too complex a machine for most tasks. With the NC, the ‘heavy’ computation of software and infrastructure would be abstracted from the actual device and delivered instead to thinner terminals via the web, thus radically simplifying access and enabling all new applications and mobility.

But the NC never made it mainstream. Microsoft and its allies had already amassed considerable power, and the cost of personal computers was dropping rapidly, making them even more attractive and ubiquitous. Furthermore, many of the applications were too immature to compete with the desktop software experience at the time; and few people, as it turned out, wanted to buy a device championed by Oracle.

The NC fell short on execution, but Ellison was right about the vision: “It’s the first step beyond personal computing, to universal computing.” In many ways, he was the first to glimpse a future resembling the post-PC world we are rapidly moving towards today.

Fifteen years later, it is Apple that has brought its version of this vision to life. And Apple’s rising tide – already 172 million devices strong, sold in the last year alone – has in turn given rise to a massive, vibrant ecosystem: companies generating hundreds of millions, even billions, of dollars in value in just a few years, revolutionizing industries like gaming, social networking, entertainment and communications in the process. Then of course there’s Instagram. All of this proves that the value created in this mobile and post-PC world will rival traditional computing categories.

But the post-PC transformation isn’t limited to the consumer landscape. In the enterprise, we’re transitioning to a way of working that is far more fluid, boundary-less and social. And mobile pushes computing to the cloud and rewrites all applications in its wake. Those who saw it coming (Oracle) and those who initially resisted its arrival (Microsoft) have equally been taken by surprise by the power and speed of the post-PC shift within today’s enterprises, and it’s creating one of the biggest opportunities ever.

Why the change is so profound

We recently met with the IT leadership team of a fairly conservative 50,000-person organization where all the participants had iPads. No big surprise there. But the apps they were using were radically different from what you would have found in their organization only a few years back – a mix of apps from a new set of vendors that together supplant the traditional Microsoft Office stack.

Post-PC devices are driving enterprises to rethink their entire IT architecture, thanks to a wildly unpredictable and improbable chain reaction set off by a new consumer device from Apple.  For the first time in decades, CIOs have the opportunity – and necessity – to completely re-imagine and rebuild their technology strategy from the ground up. Catalyzing this change is the fact that the technology switching costs are often less than the price of maintaining existing solutions. A shipment of 1,000 new iPads requires applications to run on these devices – and choosing all-new applications and vendors is generally cheaper than the service fees, infrastructure, and operational costs of legacy software.

And thus, the post-PC era drives the final nail into the coffin of the traditional enterprise software hegemony. Microsoft, in particular, built up a practical monopoly that lasted nearly twenty years and forced an entire industry to conform to its way of seeing the world. Yet this arrangement served its beneficiary far more than the ecosystem, as the Redmond giant built up leadership positions across nearly every application category.

In the Post-PC era, businesses will shift from deploying and managing end-to-end enterprise solutions from a single vendor, to consuming apps a la carte both as individuals and en masse. But which apps and vendors will help define this new world?

What’s coming won’t look like what came before

Change always begins incrementally. Predicting specifically what will happen in the next year or two is a far more realistic undertaking than anticipating where we’ll be in a decade. In shifting from one technology generation to the next, we minimize disruption by porting the old way of doing things to newer mediums or channels. Not until the new model settles in do we see the real results that rise from these foundational shifts.

Mobility is such a foundational shift, and it’s still very, very early. Even when the Microsofts and Oracles of the world relent and build applications for post-PC devices, these apps will carry much of the DNA of their desktop predecessors. We can imagine that each of the enterprise mainstays – ERP, HR management, supply chain, business intelligence, and office productivity – will be painstakingly moved to mobile. But that’s just the first phase.

Emerging CRM startups like Base will challenge longstanding assumptions about where and how you manage customer interactions. Data visualization software like Roambi will make business analytics more valuable by making it available everywhere. Entire industries are already being transformed: mobile healthcare apps will enable cutting-edge health outcomes, and construction sites will eventually be transformed by apps like PlanGrid. Companies like CloudOn and OnLive aim to virtualize applications that we never imagined would be available outside the office walls. Evernote’s 20+ million users already make it one of the most popular independent productivity apps of all time, and its value is dramatically amplified by this revolution. In a mobile and post-PC world, the very definition of the office suite is transformed.

And with this transformation, much of the $288B spent annually on enterprise software is up for grabs. The post-PC era is about no longer being anchored to a handful of solutions in the PC paradigm. Instead, we’re moving to a world where we mix and match best-of-breed solutions. This means more competition and choice, which means new opportunities for startups, which should mean more innovation for customers. As soon as individual workers look to the App Store for an immediate solution to their problem instead of calling IT (who in turn calls a vendor), you can tell things will never be the same.

In many ways, the enterprise software shift mirrors that of the media and cable companies fighting for relevance in a world moving to digital content (HT @hamburger). If users and enterprises can select apps that are decoupled from an entire suite, we might find they’d use a completely different set of technology, just as many consumers would only subscribe to HBO or Showtime if given the option.

Of course, every benefit brings a new and unique challenge. In a world where users bring their own devices into the workplace, connect to any network, and use a mix of apps, managing and securing business information becomes an incredibly important and incredibly challenging undertaking. Similarly, how do we get disparate companies to build apps that work together, instead of spawning more data silos?  And as we move away from large purchases of suites from a single provider, what is the new business model that connects vendors with customers (both end users and IT departments) with minimal friction?

And then there’s the inherent fragmentation of devices and platforms that defines the post-PC era. Android, iOS, and Windows 7 and 8 all have different languages and frameworks, UI patterns, and marketplaces. The fate of mobile HTML5 is still indeterminate. Fragmentation and sprawl of apps and data are now the norm. And while this fragmentation is creating headaches for businesses and vendors alike, it’s also opening a window for the next generation of enterprise software leaders to emerge and redefine markets before the industry settles into this new paradigm.

It would appear that Larry Ellison’s vision for the NC was right all along, just 15 years early. Welcome to the post-PC enterprise.

What was Microsoft's original mission?

In 1975, Gates and Allen form a partnership called Microsoft. Like most startups, Microsoft begins small, but has a huge vision – a computer on every desktop and in every home.

The existential crisis facing Microsoft is that they achieved their mission years ago, at least as far as the developed world is concerned. When was the last time you saw a desktop or a home without a computer? 2001? 2005? We're long since past the point where Microsoft's original BHAG was met, and even exceeded. PCs are absolutely ubiquitous. When you wake up one day to discover that you've completely conquered the world … what comes next?

Apparently, the Post PC era.

Microsoft never seemed to recover from the shock of achieving their original 1975 goal. Or perhaps they thought that they hadn't quite achieved it, that there would always be some new frontier for PCs to conquer. But Steve Jobs certainly saw the Post PC era looming as far back as 1996:

The desktop computer industry is dead. Innovation has virtually ceased. Microsoft dominates with very little innovation. That's over. Apple lost. The desktop market has entered the dark ages, and it's going to be in the dark ages for the next 10 years, or certainly for the rest of this decade.

If I were running Apple, I would milk the Macintosh for all it's worth – and get busy on the next great thing. The PC wars are over. Done. Microsoft won a long time ago.

What's more, Jobs did something about it. Apple is arguably the biggest (and in terms of financials, now literally the biggest) enemy of general purpose computing with the iPhone and iPad. These days, their own general purpose Mac operating system, OS X, largely plays second fiddle to the iOS juggernaut powering the iPhone and iPad.

Here's why:

[Chart: Apple cumulative sales]

The slope of this graph is the whole story. The complicated general purpose computers are at the bottom, and the simpler specialized computers are at the top.

I'm incredibly conflicted, because as much as I love the do-anything computer …

  • I'm not sure that many people in the world truly need a general purpose computer that can do anything and install any kind of software. Simply meeting the core needs of browsing the web and email and maybe a few other basic things covers a lot of people.
  • I believe the kitchen-sink-itis baked into the general purpose computing foundations of PCs, Macs, and Unix makes them fundamentally incompatible with our brave new Post PC world. Updates. Toolbars. Service Packs. Settings. Anti-virus. Filesystems. Control panels. All the stuff you hate when your Mom calls you for tech support? It's deeply embedded in the culture and design of every single general purpose computer. Doing potentially "anything" comes at a steep cost in complexity.
  • Very, very small PCs – the kind you could fit in your pocket – are starting to have the same amount of computing grunt as a high end desktop PC of, say, 5 years ago. And that was plenty, even back then, for a relatively inefficient general purpose operating system.

But the primary wake up call, at least for me, is that the new iPad finally delivered an innovation that general purpose computing has been waiting on for thirty years: a truly high resolution display at a reasonable size and price. In 2007 I asked where all the high resolution displays were. Turns out, they're only on phones and tablets.

[Image: iPad 2 display vs. iPad 3 display]

That's why I didn't just buy the iPad 3 (sorry, The New iPad). I bought two of them. And I reserve the right to buy more!

iPad 3 reviews that complain "all they did was improve the display" are clueless bordering on stupidity. Tablets are pretty much by definition all display; nothing is more fundamental to the tablet experience than the quality of the display. These are the first iPads I've ever owned (and I'd argue, the first worth owning), and the display is as sublime as I always hoped it would be. The resolution and clarity are astounding, a joy to read on, and give me hope that one day we could potentially achieve near print resolution in computing. The new iPad screen is everything I've always wanted on my desktops and laptops for the last 5 years, but I could never get.

Don't take my word for it. Consider what screen-reading pioneer and ClearType co-inventor Bill Hill has to say about it:

The 3rd Generation iPad has a display resolution of 264ppi. And still retains a ten-hour battery life (9 hours with wireless on). Make no mistake. That much resolution is stunning. To see it on a mainstream device like the iPad - rather than a $13,000 exotic monitor - is truly amazing, and something I've been waiting more than a decade to see.

It will set a bar for future resolution that every other manufacturer of devices and PCs will have to jump.
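
For reference, that 264 ppi figure falls straight out of the panel's resolution and size; a quick check, assuming the new iPad's published 2048×1536 resolution and 9.7-inch diagonal:

    import math

    # New iPad (3rd generation) panel: 2048 x 1536 pixels on a 9.7-inch diagonal
    width_px, height_px, diagonal_in = 2048, 1536, 9.7

    diagonal_px = math.hypot(width_px, height_px)    # 2560 pixels corner to corner
    print(round(diagonal_px / diagonal_in))          # -> 264 pixels per inch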

And the display calibration experts at DisplayMate have the measurements and metrics to back these claims up, too:

… the new iPad’s picture quality, color accuracy, and gray scale are not only much better than any other Tablet or Smartphone, it’s also much better than most HDTVs, laptops, and monitors. In fact with some minor calibration tweaks the new iPad would qualify as a studio reference monitor.

Granted, this is happening on tiny 4" and 10" screens first due to sheer economics. It will take time for it to trickle up. I shudder to think what a 24 or 27 inch display using the same technology as the current iPad would cost right now. But until the iPhone and iPad, near as I can tell, nobody else was even trying to improve resolution on computer displays – even though all the existing HCI research tells us that higher resolution displays are a deep fundamental improvement in computing.

At the point where these simple, fixed function Post-PC era computing devices are not just "enough" computer for most folks, but also fundamentally innovating in computing as a whole … well, all I can say is bring on the post-PC era.

The web as we know and build it has primarily been accessed from the desktop. That is about to change. The ITU predicts that in the next 18–24 months, mobile devices will overtake PCs as the most popular way to access the web. If these predictions come true, very soon the web—and its users—will be mostly mobile. Even designers who embrace this change can find it confusing. One problem is that we still consider the mobile web a separate thing. Stephanie Rieger of futurefriend.ly and the W3C presents principles to understand and design for a new normal, in which users are channel agnostic, devices are plentiful, standards are fleeting, mobile use doesn’t necessarily mean “hide the desktop version,” and every byte counts.

In computing, a hypervisor, also called a virtual machine monitor (VMM), is a hardware virtualization technique that allows multiple operating systems, termed guests, to run concurrently on a host computer. It is so named because it is conceptually one level higher than a supervisory program. The hypervisor presents the guest operating systems with a virtual operating platform and manages their execution. Multiple instances of a variety of operating systems may share the virtualized hardware resources. Hypervisors are typically installed on server hardware whose only task is to run guest operating systems.

The term is also used to describe the interface provided by infrastructure as a service (IaaS) cloud-computing offerings.[1][2]

The term "hypervisor" was first used in 1965, referring to software that accompanied an IBM RPQ for the IBM 360/65. It allowed the model IBM 360/65 to share its memory: half acting like a IBM 360; half as an emulated IBM 7080. The software, labeled "hypervisor," did the switching between the 2 modes on split time basis. The term hypervisor was coined as an evolution of the term "supervisor," the software that provided control on earlier hardware.[3][4]

Classification

Robert P. Goldberg classifies two types of hypervisor:[5]

  • Type 1 (or native, bare metal) hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. A guest operating system thus runs on another level above the hypervisor.
This model represents the classic implementation of virtual machine architectures; the original hypervisor was CP/CMS, developed at IBM in the 1960s and the ancestor of IBM's z/VM. Modern equivalents include Citrix XenServer, VMware ESX/ESXi, and Microsoft Hyper-V.
  • Type 2 (or hosted) hypervisors run within a conventional operating system environment. With the hypervisor layer as a distinct second software level, guest operating systems run at the third level above the hardware. KVM and VirtualBox are examples of Type 2 hypervisors.

In other words, a Type 1 hypervisor runs directly on the hardware, while a Type 2 hypervisor runs on top of another operating system, such as FreeBSD[6] or Linux[7].

Note: Microsoft Hyper-V (released in June 2008)[8] exemplifies a Type 1 product that can be mistaken for a Type 2. Both the free stand-alone version and the version that is part of the commercial Windows Server 2008 product use a virtualized Windows Server 2008 parent partition to manage the Type 1 Hyper-V hypervisor. In both cases the Hyper-V hypervisor loads prior to the management operating system, and any virtual environments created run directly on the hypervisor, not via the management operating system.


Mainframe origins

The first hypervisor providing full virtualization, IBM's one-off research CP-40 system, began production use in January 1967, and became the first version of IBM's CP/CMS operating system. CP-40 ran on a S/360-40 that was modified at the IBM Cambridge Scientific Center to support Dynamic Address Translation, a key feature that allowed virtualization. Prior to this time, computer hardware had only been virtualized enough to allow multiple user applications to run concurrently (see CTSS and IBM M44/44X). With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.

Programmers soon re-implemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer-system capable of full virtualization. IBM first shipped this machine in 1966; it included page-translation-table hardware for virtual memory, and other techniques that allowed a full virtualization of all kernel tasks, including I/O and interrupt handling. (Note that its "official" operating system, the ill-fated TSS/360, did not employ full virtualization.) Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968 to 1972, in source code form without support.

CP/CMS formed part of IBM's attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: Even if one operating system crashed, the others would continue working without interruption. Indeed, this even allowed beta or experimental versions of operating systems – or even of new hardware[9] – to be deployed and debugged, without jeopardizing the stable main production system, and without requiring costly additional development systems.

IBM announced its System/370 series in 1970 without any virtualization features, but added them in the August 1972 Advanced Function announcement. Virtualization has been featured in all successor systems. (All modern-day (as of 2009) IBM mainframes, such as the zSeries line, retain backwards-compatibility with the 1960s-era IBM S/360 line.) The 1972 announcement also included VM/370, a reimplementation of CP/CMS for the S/370. Unlike CP/CMS, IBM provided support for this version (though it was still distributed in source code form for several releases). VM stands for Virtual Machine, emphasizing that all, and not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux for zSeries.

The VM control program includes a hypervisor-call handler which intercepts DIAG ("Diagnose") instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations. (DIAG is a model-dependent privileged instruction, not used in normal programming, and thus is not virtualized. It is therefore available for use as a signal to the "host" operating system.) When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating system interface that was analogous to the System/360 SVC ("supervisor call") instruction, but that did not require altering or extending the system's virtualization of SVC.

In 1985 IBM introduced the PR/SM hypervisor to manage logical partitions (LPAR).

UNIX and Linux servers

Several factors led to a resurgence around 2005[10] in the use of virtualization technology among UNIX and Linux server vendors:

  • expanding hardware capabilities, allowing each single machine to do more simultaneous work
  • efforts to control costs and to simplify management through consolidation of servers
  • the need to control large multiprocessor and cluster installations, for example in server farms and render farms
  • the improved security, reliability, and device independence possible from hypervisor architectures
  • the ability to run complex, OS-dependent applications in different hardware or OS environments

Major UNIX vendors, including Sun Microsystems, HP, IBM, and SGI, have been selling virtualized hardware since before 2000. These have generally been large systems with hefty, server-class price tags (in the multi-million dollar range at the high end), although virtualization is also available on some mid-range systems, such as IBM's System p servers, Sun's CoolThreads T1000, T2000 and T5x00 servers, and the HP Superdome series.

Several operating systems have been ported to run as guests on Sun's Logical Domains hypervisor. As of late 2006, Solaris, Linux (Ubuntu and Gentoo), and FreeBSD have been ported to run on top of the hypervisor (and can all run simultaneously on the same processor, as fully virtualized independent guest OSes). Wind River "Carrier Grade Linux" also runs on Sun's hypervisor.[11] Full virtualization on SPARC processors proved straightforward: since its inception in the mid-1980s Sun deliberately kept the SPARC architecture clean of artifacts that would have impeded virtualization. (Compare with virtualization on x86 processors below.)[12]

HP calls its technology for hosting multiple operating systems on its Itanium-powered Integrity systems "Integrity Virtual Machines" (Integrity VM). Itanium can run HP-UX, Linux, Windows and OpenVMS, and these environments are also supported as virtual servers on HP's Integrity VM platform (OpenVMS support is planned for a later release). The HP-UX operating system hosts the Integrity VM hypervisor layer, which lets the platform take advantage of important HP-UX features – such as processor hot-swap, memory hot-swap, and dynamic kernel updates without a system reboot – and differentiates it from other commodity platforms. Although it leverages HP-UX heavily, the Integrity VM hypervisor is really a hybrid that runs on bare metal while guests are executing. Running normal HP-UX applications on an Integrity VM host is discouraged, because Integrity VM implements its own memory-management, scheduling and I/O policies that are tuned for virtual machines and are not as effective for normal applications. HP also provides more rigid partitioning of its Integrity and HP 9000 systems by way of vPar and nPar technology, the former offering shared resource partitioning and the latter offering complete I/O and processing isolation. The flexibility of the Virtual Server Environment (VSE) has led to its more frequent use in newer deployments.

IBM provides virtualization partition technology known as logical partitioning (LPAR) on System/390, zSeries, pSeries and iSeries systems. For IBM's Power Systems, the Power Hypervisor (PowerVM) functions as a native (bare-metal) hypervisor and provides EAL4+ strong isolation between LPARs. Processor capacity is provided to LPARs in either a dedicated fashion or on an entitlement basis where unused capacity is harvested and can be re-allocated to busy workloads. Groups of LPARs can have their processor capacity managed as if they were in a "pool" - IBM refers to this capability as Multiple Shared-Processor Pools (MSPPs) and implements it in servers with the POWER6 processor. LPAR and MSPP capacity allocations can be dynamically changed. Memory is allocated to each LPAR (at LPAR initiation or dynamically) and is address-controlled by the POWER Hypervisor. For real-mode addressing by operating systems (AIX, Linux, IBM i), the POWER processors (POWER4 onwards) have architected virtualization capabilities where a hardware address-offset is evaluated with the OS address-offset to arrive at the physical memory address. Input/Output (I/O) adapters can be exclusively "owned" by LPARs or shared by LPARs through an appliance partition known as the Virtual I/O Server (VIOS). The Power Hypervisor provides for high levels of reliability, availability and serviceability (RAS) by facilitating hot add/replace of many parts (model dependent: processors, memory, I/O adapters, blowers, power units, disks, system controllers, etc.)

Similar trends have occurred with x86/x86_64 server platforms, where open-source projects such as Xen have led virtualization efforts. These include hypervisors built on Linux and Solaris kernels as well as custom kernels. Since these technologies span from large systems down to desktops, they are described in the next section.

PCs and desktop systems

Interest in the high-profit server-hardware market sector has led to the development of hypervisors for machines using the Intel x86 instruction set, including for traditional desktop PCs. One of the early PC hypervisors, the commercial-software VMware, debuted in 1998.

The x86 architecture used in most PC systems poses particular difficulties to virtualization. Full virtualization (presenting the illusion of a complete set of standard hardware) on x86 has significant costs in hypervisor complexity and run-time performance. Starting in 2005, CPU vendors have added hardware virtualization assistance to their products, for example: Intel's Intel VT-x (codenamed Vanderpool) and AMD's AMD-V (codenamed Pacifica). These extensions address the parts of x86 that are difficult or inefficient to virtualize, providing additional support to the hypervisor. This enables simpler virtualization code and a higher performance for full virtualization.
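
As a quick illustration of how these extensions surface to software: on a Linux system the kernel exposes the relevant CPU feature flags in /proc/cpuinfo, so a rough check for VT-x or AMD-V support might look like the sketch below (a Linux-only sketch; firmware can still disable the feature even when the flag is present, so treat it as a hint rather than a guarantee).

    # Minimal sketch, Linux/x86 only: look for the CPU flags that advertise
    # hardware virtualization assistance -- "vmx" for Intel VT-x, "svm" for AMD-V.
    def hw_virt_support():
        flags = set()
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split()[2:])   # skip "flags" and ":"
                    break
        if "vmx" in flags:
            return "Intel VT-x"
        if "svm" in flags:
            return "AMD-V"
        return "no hardware virtualization extensions advertised"

    print(hw_virt_support())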

An alternative approach requires modifying the guest operating system to make system calls to the hypervisor, rather than executing machine I/O instructions that the hypervisor then simulates. This is called paravirtualization in Xen, a "hypercall" in Parallels Workstation, and a "DIAGNOSE code" in IBM's VM. VMware supplements the slowest, roughest corners of virtualization with device drivers for the guest. All are really the same thing: a system call to the hypervisor below. Some microkernels, such as Mach and L4, are flexible enough that "paravirtualization" of guest operating systems is possible.
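
The distinction is easiest to see in a toy model. The sketch below is purely illustrative – a pretend hypervisor with a single emulated disk – and does not correspond to any real VMM interface; it only contrasts the trap-and-emulate path taken by an unmodified guest with the explicit hypercall path taken by a paravirtualized one.

    # Toy model only: contrast trap-and-emulate with a paravirtual hypercall.
    class TinyHypervisor:
        def __init__(self):
            self.disk = {}                            # pretend backing store

        def trap(self, instruction, *operands):
            # Full virtualization: the guest runs a privileged I/O instruction,
            # the CPU traps, and the hypervisor must decode and emulate it.
            if instruction == "OUT_DISK_WRITE":
                sector, data = operands
                self.disk[sector] = data
            else:
                raise NotImplementedError(instruction)

        def hypercall(self, op, **args):
            # Paravirtualization: the modified guest asks for what it wants
            # directly, so no instruction decoding is needed.
            if op == "disk_write":
                self.disk[args["sector"]] = args["data"]

    hv = TinyHypervisor()
    hv.trap("OUT_DISK_WRITE", 7, b"boot sector")       # unmodified guest
    hv.hypercall("disk_write", sector=8, data=b"log")  # paravirtualized guest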

In June 2008 Microsoft delivered a new Type-1 hypervisor called Hyper-V (codenamed "Viridian" and previously referred to as "Windows Server virtualization"); the design features OS integration at the lowest level.[13] Versions of the Windows operating system beginning with Windows Vista include extensions to boost performance when running on top of the Hyper-V hypervisor.

Embedded systems

As of 2009, virtual machines have started to appear in embedded systems, such as mobile phones. This allows a high-level operating system such as Linux or Microsoft Windows to provide the interface for application programming, while traditional real-time operating system (RTOS) APIs are maintained alongside it. The low-level RTOS environments need to be retained for legacy support, and because the real-time capabilities of high-level OSes are insufficient for many embedded applications.

Embedded hypervisors must therefore have real-time capability, a design criterion not present for hypervisors used in other domains. The resource-constrained nature of many embedded systems, especially battery-powered mobile systems, imposes a further requirement for small memory-size and low overhead. Finally, in contrast to the ubiquity of the x86 architecture in the PC world, the embedded world uses a wider variety of architectures. Support for virtualization requires memory protection (in the form of a memory management unit or at least a memory protection unit) and a distinction between user mode and privileged mode, which rules out most microcontrollers. This still leaves x86, MIPS, ARM and PowerPC as widely deployed architectures on medium- to high-end embedded systems.

As manufacturers of embedded systems usually have the source code to their operating systems, they have less need for full virtualization in this space. Instead, the performance advantages of paravirtualization usually make it the virtualization technology of choice. Nevertheless, ARM has recently added full virtualization support as an IP option and has included it in its latest high-end processor core, codenamed Eagle.

Other differences between virtualization in server/desktop and embedded environments include requirements for efficient sharing of resources across virtual machines, high-bandwidth, low-latency inter-VM communication, a global view of scheduling and power management, and fine-grained control of information flows.[14]

Security implications

Malware and rootkits can install themselves as a hypervisor below the operating system, which can make them more difficult to detect: the malware can intercept any operation of the operating system (such as someone entering a password) without antivirus software necessarily detecting it, since the malware runs below the entire operating system. Implementation of the concept has allegedly occurred in the SubVirt laboratory rootkit (developed jointly by Microsoft and University of Michigan researchers[15]) as well as in the Blue Pill malware package. However, such assertions have been disputed by others who claim that it would indeed be possible to detect the presence of a hypervisor-based rootkit.[16]
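
As a small illustration of why detection is contested: a cooperative hypervisor (KVM, VMware, Hyper-V, Xen HVM) advertises its presence through a CPUID bit, which the Linux kernel surfaces as a "hypervisor" flag in /proc/cpuinfo, so a naive check looks like the Linux-only sketch below. A malicious hypervisor can simply withhold that bit, which is why research on hypervisor rootkits has focused on timing and other side channels instead.

    # Minimal sketch, Linux/x86 only: check the CPUID-derived "hypervisor"
    # flag that the kernel exposes. Only cooperative hypervisors set it.
    def running_under_hypervisor():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "hypervisor" in line.split()
        return False

    print("hypervisor detected" if running_under_hypervisor() else "bare metal (or a hiding hypervisor)")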

In 2009, researchers from Microsoft and North Carolina State University demonstrated a hypervisor-layer anti-rootkit called Hooksafe that can provide generic protection against kernel-mode rootkits.[17]
