Windows Server Virtualisation: Hyper-V, an Introduction

For SQL Server and Exchange Server, Windows Server Virtualization will become increasingly important as a way for administrators to allocate hardware resources efficiently, offer more robust services, and deploy new services. Jaap Wesselius starts his new series on Hyper-V by explaining what Hyper-V is, how it relates to Windows Server 2008, and how it compares to ESX, Virtual Server and Virtual PC.

Hyper-V Introduction

Microsoft released Hyper-V, its hypervisor-based virtualization product, in the summer of 2008. But how does it differ from Virtual Server, and why is Hyper-V the better product? And how does it compare to, for example, VMware ESX? In this series of articles I’ll explain what Hyper-V is, how it relates to other products, and offer some best practices for using Hyper-V.

Windows Architecture

Before we take a look at the Hyper-V architecture, let’s look at the architecture of Windows Server 2008 (and, basically, of all Windows NT-based servers). When Windows Server 2008 is installed on appropriate hardware, two modes can be identified:

  • Kernel mode – a protected space where the kernel, the “heart” of Windows Server 2008, runs, together with processes that interact directly with the hardware, for example device drivers using buffers allocated in kernel mode. When such a process crashes, it is very likely that the server will crash as well, resulting in a blue screen of death;
  • User mode – a more restricted space where applications run, for example Microsoft Office, SQL Server or Exchange Server. When an application in user mode crashes, only that application stops and the server continues running.


Figure 1. User and Kernel mode running under Windows Server 2008

When an application needs to access a piece of hardware, for example the hard disk or the network interface, it needs to communicate with the appropriate driver running in kernel mode. Switching from user mode to kernel mode, known as “mode switching”, is a costly process that consumes a considerable number of processor cycles.
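The cost of a mode switch can be made visible with a rough micro-benchmark. This is a hypothetical illustration only (absolute numbers vary per system and operating system): a system call such as `os.stat`, which crosses into kernel mode, is timed against a function that stays entirely in user mode.

```python
import os
import timeit

def user_mode_only():
    # Pure user-mode work: no kernel transition is involved.
    return 1 + 1

def with_mode_switch():
    # os.stat() issues a system call, forcing a switch
    # from user mode to kernel mode and back again.
    return os.stat(".")

n = 100_000
user_time = timeit.timeit(user_mode_only, number=n)
kernel_time = timeit.timeit(with_mode_switch, number=n)

print(f"pure user mode  : {user_time:.4f}s for {n} calls")
print(f"with mode switch: {kernel_time:.4f}s for {n} calls")
```

On a typical system the system-call loop is many times slower than the pure user-mode loop, which is exactly why avoiding unnecessary mode switches matters so much in virtualization.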

Virtual Server and Virtual PC are applications and, as such, run in user mode; the complete environment in which the Virtual Machine runs is emulated. After installing the Virtual Machine Additions, or when using hardware-assisted virtualization, some kernel processes are handled directly by the processor. Still, every piece of hardware the Virtual Machine accesses requires a switch from user mode to kernel mode and vice versa. The overhead in this scenario is large and has a significant performance impact. The same is true for VMware Server and VMware Workstation.

Hyper-V Architecture

Hyper-V is a so-called hypervisor: a layer that sits between the hardware and the operating system. Hyper-V is a role in Windows Server 2008 and can only be installed after Windows Server 2008 itself is installed. When the Hyper-V role is installed, the hypervisor is “slid” between the hardware and the operating system. A little more is installed besides the hypervisor: the VMBus and a Virtual Storage Provider (VSP), both running in kernel mode, and a WMI provider running in user mode. In addition, a VMWorker process is spawned for every Virtual Machine that is started.

Note: Hyper-V is only available in the x64 editions of Windows Server 2008. Besides x64-capable hardware, the server must support hardware-assisted virtualization (Intel VT or AMD-V), and Data Execution Prevention (DEP) must be enabled. The server’s BIOS must support these settings as well.

After installing the Hyper-V role in Windows Server 2008, the server needs to be rebooted, after which it is operational. The original Windows Server 2008 installation is turned into a Virtual Machine as well; this one is called the “root” or “parent partition”. It is a special Virtual Machine, since it controls the other Virtual Machines running on the server. I’ll get back to this later in this article.

Virtual Machines and the parent partition on Hyper-V are running side-by-side as shown in Figure 2. Virtual Machines are called “child partitions”. There are three types of Virtual Machines:

  • Hypervisor-aware Virtual Machines, like Windows Server 2003 and Windows Server 2008;
  • Non-hypervisor-aware Virtual Machines, like Windows 2000 Server and Windows NT4. These Virtual Machines run in an emulated environment;
  • Xen-enabled Linux kernels (which also support the VMBus architecture). The only one available as a standard distribution at this point is SUSE Linux.


Figure 2. The parent partition and Virtual Machines in Hyper-V

Now suppose we install a Virtual Machine running Windows Server 2008. This child partition runs on top of the hypervisor. Once the Integration Components are installed, the new Virtual Machine can fully utilize the power of Hyper-V. The Integration Components are special Hyper-V drivers, the so-called synthetic drivers; a Virtual Storage Client (VSC) is installed in the Virtual Machine as well. These drivers can use the VMBus, a point-to-point, in-memory bus architecture that runs fully in kernel mode. When an application in this Virtual Machine wants to access the network interface or a local disk on the parent partition, it makes a request that goes from user mode to kernel mode and is sent via the VSC, over the VMBus, to the VSP. From there the request is sent to the appropriate device. No additional mode switching is needed, making this a very fast solution.

A non-hypervisor-aware Virtual Machine, for example a Windows NT4 server, does not have the Integration Components or a VSC. Everything is emulated, and the emulation happens in the VMWorker process, which runs in user mode in the parent partition.

When an application in this Virtual Machine makes a request to the local disk, the request is sent to the driver running in kernel mode in the Virtual Machine. It is intercepted and sent to the emulator in the parent partition, which in turn sends it to the local disk. This means three additional mode switches are needed: one inside the Virtual Machine, one from the Virtual Machine to the parent partition, and one on the parent partition itself, from user mode to kernel mode. This additional overhead results in reduced performance for emulated Virtual Machines. Virtual Server also uses a fully emulated environment and thus suffers from the same performance hit.
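The two I/O paths differ only in how often execution has to cross a mode boundary. That difference can be summarized in a purely illustrative sketch; the counts simply restate the text above, and nothing here is measured or taken from Hyper-V internals:

```python
def mode_switches(enlightened: bool) -> int:
    """Count the mode switches for one guest I/O request, as described in the text."""
    switches = 1  # application: user mode -> kernel mode inside the guest
    if not enlightened:
        # The emulated path needs three additional switches:
        switches += 1  # an extra switch inside the Virtual Machine
        switches += 1  # from the Virtual Machine to the parent partition
        switches += 1  # parent partition: user mode -> kernel mode
    return switches

print("synthetic (VMBus) path:", mode_switches(True))   # 1 switch
print("emulated path:         ", mode_switches(False))  # 4 switches
```

Four boundary crossings instead of one, on every single I/O request, is where the performance gap between enlightened and emulated Virtual Machines comes from.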

Virtual Machines running SUSE Linux that have the Linux Integration Components installed can also fully utilize the VMBus architecture, and thus the server’s resources. Other Linux distributions run in a fully emulated Virtual Machine, just like the NT4 example.

Micro-kernelized hypervisor

One difference between ESX and Hyper-V is the type of hypervisor: Microsoft uses a micro-kernelized hypervisor, whereas VMware uses a monolithic hypervisor. So what is the difference between the two?

A micro-kernelized hypervisor is a very thin hypervisor (less than 800 KB) with an absolute minimum of software in the hypervisor itself. Drivers, memory management and so on, needed by the Virtual Machines, live in the parent partition. This means that Windows Server 2008, with the appropriate certified hardware drivers, can be used as a Hyper-V server.

A monolithic hypervisor contains more software and management interfaces. Network drivers and disk drivers, for example, are part of the hypervisor and not of the parent partition. This automatically means that only servers certified by VMware, with certified drivers, can be used for an ESX Server.


Figure 3. Monolithic versus micro-kernelized hypervisor

Both solutions have their pros and cons; time will tell which one offers the best performance and scalability.

Security

After installing the Hyper-V role in Windows Server 2008, the original Windows installation automatically turns into a Virtual Machine, the so-called parent partition or root. After logging in, the parent partition looks just like an ordinary Windows Server 2008 installation. But it controls all the other Virtual Machines running on the server, so special care needs to be taken.

When the parent partition is compromised by a virus or a Trojan horse, not only is the parent partition under somebody else’s control, but potentially all Virtual Machines running on this server are too. The Hyper-V Manager is available on this server, as are all the WMI interfaces that control the Virtual Machines. It is a best practice to install no other software on the parent partition and not to use it, for example, for browsing the Internet. All applications and software should be installed in Virtual Machines, NOT in the parent partition.

A better solution is to use Windows Server 2008 Server Core. This is a very minimalistic installation of Windows Server 2008, with little software and few services installed. Windows Explorer is not present on Server Core, and after logging in only a Command Prompt is shown. Some small GUIs are available though, for example the date-time applet for setting the date and time on the server. Managing Windows Server 2008 Server Core is definitely more difficult than managing a ‘normal’ server with a Graphical User Interface (GUI), but once you’re used to it, it is much safer due to the reduced attack surface.

Microsoft also made a couple of design decisions with security in mind. Not using shared memory is one such decision. With shared memory you can overcommit memory on your host server, that is, assign more memory to Virtual Machines than is physically available in the host, by sharing identical memory pages between Virtual Machines. Although this increases Virtual Machine density, Microsoft decided against the feature for security reasons.
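Overcommitting is just arithmetic on assigned versus physical memory. A hypothetical sketch (the VM names and memory sizes below are invented purely for illustration):

```python
# Hypothetical example: four Virtual Machines on a host with 8 GB of RAM.
host_memory_gb = 8
vm_memory_gb = {"DC01": 2, "SQL01": 4, "EXCH01": 4, "WEB01": 2}

# Total memory assigned to Virtual Machines.
assigned = sum(vm_memory_gb.values())
print(f"Assigned to VMs: {assigned} GB, host has {host_memory_gb} GB")

if assigned > host_memory_gb:
    # Only possible with page sharing / overcommit, which Hyper-V
    # deliberately does not offer for security reasons.
    print("Overcommitted by", assigned - host_memory_gb, "GB")
```

On Hyper-V this situation simply cannot arise: without page sharing, a Virtual Machine that would push the total over the physical limit fails to start rather than borrow memory from its neighbours.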

Virtual Machines can be compromised as well, and this is also a situation you do not want to occur. But even when a Virtual Machine is compromised, it is not possible to reach the hypervisor from it and take over the host server. Nor is it possible to access other Virtual Machines.

This also means that copying data from one Virtual Machine to another works just like it does between physical machines: you have to copy the data across the network, using file shares. The only exception is copying plain text between the parent partition and a Virtual Machine using the “Copy Text” option in the Hyper-V Manager.

Integration Components

When a Virtual Machine is first installed, it runs in an emulated environment. As explained earlier, this is not the most efficient way to run a Virtual Machine, so after the initial installation you should install the Integration Components. Open the Hyper-V Manager, select the Virtual Machine, choose Action and select “Insert Integration Services Setup Disk”. This installs the Integration Components in your Virtual Machine. When the installation finishes, reboot the Virtual Machine and you’re done.

When the Integration Components are installed, the synthetic drivers are installed in the Virtual Machine, making it possible for the Virtual Machine to communicate via the VMBus architecture. This speeds up performance dramatically. You can see the Integration Components in the Virtual Machine’s Device Manager:


Figure 4. After installing the Integration Components the emulated hardware is replaced by Hyper-V specific hardware

Besides the synthetic drivers, the Integration Components offer more services to Virtual Machines, such as time synchronization between the root partition and the Virtual Machine, backup support (volume snapshots) and operating system shutdown from the Hyper-V Manager.

Server Virtualization Validation Program

Microsoft has always been reluctant to support virtualized applications, especially in the days before Hyper-V, when Microsoft only had Virtual Server as virtualization software while VMware was offering ESX Server.

When Hyper-V entered the virtualization market, Microsoft had to support not only its own applications running on Hyper-V, but also those applications running on virtualization software from other vendors. Microsoft has therefore set up a program in which other vendors can have their solutions validated, known as the Server Virtualization Validation Program (SVVP). VMware’s ESX Server, for example, is validated in this program, and all recommendations made for running Microsoft applications under Hyper-V also apply to running these applications under ESX Server. When customers submit issues to Microsoft Product Support Services, Microsoft makes no distinction between ESX Server and Hyper-V when it comes to troubleshooting. You can find more information about the SVVP program on the Microsoft website: http://www.windowsservercatalog.com/svvp.aspx

Conclusion

Microsoft Windows Server 2008 Hyper-V was released in the summer of 2008 and is Microsoft’s first true hypervisor-based virtualization solution. It is not an emulated environment like Virtual Server or Virtual PC; as a hypervisor solution it “sits” between the hardware and the operating system. With the Integration Components installed, you can make full use of the functionality Hyper-V offers. You have to secure the parent partition as much as possible to prevent the complete system from being compromised.

In the next articles I will talk more about Hyper-V best practices, deploying Virtual Machines, using System Center Virtual Machine Manager (VMM) 2008, and the “high availability” options, including why these aren’t really highly available in the current release of Hyper-V.


  • Anonymous

    Driver crashes
    Is it true that if a driver crashes in Windows, it brings down all of the VMs running on it?

  • Jaap Wesselius

    Driver crash
    If a driver crashes in a Virtual Machine, then it affects only that particular machine. If it’s a driver in the parent partition, all kinds of nasty things can occur. If the parent partition blue screens, the complete server will reboot, bringing down all Virtual Machines.
    This is why one shouldn’t install applications and other software in the parent partition.

  • blacksea

    This is an excellent overview
    Thank you

  • Anthony

    Hyper-V and PowerShell
    We have implemented several virtual machines under WS2008 and Hyper-V. It becomes “difficult” to handle all these resources, even using the Hyper-V console. I’ve read about PowerShell for creating an automated environment for setting up/down or refreshing/modifying virtual machines. Is there an already created solution I can use for this purpose (I meant, instead of creating all the stuff from scratch)? Thanks.


  • Jaap Wesselius

    Hyper-V and PowerShell
    Right now PowerShell is only supported through Virtual Machine Manager 2008; maybe you should check that out for management purposes.
    VMM can work together with SCOM 2007 as well; this way you can trigger on alerts and start all kinds of things on the VMM server 🙂

  • nesy

    Hyper-V, Clustering and SQL Server 2008
    What are the pros and cons of combining Clustering and Virtualisation. I know MS has recently supported this but how is it done in practice? Are all Virtual environments on one server clustered to another server, or is clustering used inside a single server between different virtual servers? – i.e. which is Upper level Cluster of Virtual?
    I am reluctantly being pushed down the Virtual route in a SQL Server environment. The article above shows some cons, what are the pros other than rack space?