A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner, and myself. The goal was to talk about the past, present, and future of virtualization. In preparation for the webcast we solicited questions from all of you, but unfortunately we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.

If you'd like to see your question answered here, leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.

Question #1 by Andrew D.

How would you compare the product offerings of VMware to those of its key competitors? What kind of performance hit can I expect running Windows within a virtualized environment? Are there any advantages/disadvantages to leveraging an Intel platform as opposed to an AMD one for a VMware solution?

Answer #1 by Johan de Gelas, AnandTech Senior IT Editor

The performance hit depends of course on your application and your hardware. I am going to assume your server is a recent one, with support for hardware-assisted paging and hardware virtualization. You can get an idea of the performance hit by looking at Perfmon and the Task Manager in Windows. In the Performance tab of Task Manager you can enable "Show kernel times". The more time your application spends in the kernel, the higher the performance hit. The performance hit also depends on the amount of I/O you have going on.
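If you would rather script that check than watch the Task Manager, here is a minimal sketch, assuming the third-party psutil package is installed:

```python
# Minimal sketch: sample the user/kernel CPU split that Task Manager shows
# when "Show kernel times" is enabled. Assumes psutil (pip install psutil);
# on Windows, "system" time approximates kernel time.
import psutil

times = psutil.cpu_times_percent(interval=5)  # average over a 5-second window
print(f"user:   {times.user:.1f}%")
print(f"kernel: {times.system:.1f}%")
print(f"idle:   {times.idle:.1f}%")
```

The higher the kernel share under load, the more virtualization overhead you should budget for.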

If your app spends a lot of time in the kernel and generates high amounts of I/O, the performance hit may be high (15-30%). But that does not mean your application has to suffer this performance hit. If you spend more time on optimization (database buffering, jumbo frames) and use paravirtualized drivers (VMXNET, PVSCSI), the performance hit gets a lot smaller (5-10%). In short, the performance hit can be high if you just throw your native application in a VM, but modern hypervisors are able to keep it very small if you make the right choices and take some time to tune the app and the VM.

If your application is not I/O intensive, but mostly CPU intensive, the performance hit can be unnoticeable (1-2%).

AMD versus Intel: we have numerous articles on that on AnandTech. There are two areas where Intel has an objective advantage. The first one is licensing. The twelve-core AMD Opteron 6100 and the six-core Xeon 5600 perform more or less the same. However, if you would like to buy VMware vSphere Essentials (an interesting option if you can run your services on 3 servers), you get a license for 3 servers, 2 CPUs per server, and 6 cores per CPU. You have to buy additional licenses if you have more cores per CPU.
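To make the core-count math concrete, here is a small sketch; the limits encode the Essentials terms as described above, and the numbers are illustrative, so check VMware's current terms before buying:

```python
# Hypothetical helper: check whether a configuration is covered by a
# vSphere Essentials license as described above (3 servers, 2 CPUs per
# server, 6 cores per CPU). Illustrative only; licensing terms change.
MAX_SERVERS = 3
MAX_SOCKETS_PER_SERVER = 2
MAX_CORES_PER_SOCKET = 6

def fits_essentials(servers, sockets_per_server, cores_per_socket):
    return (servers <= MAX_SERVERS
            and sockets_per_server <= MAX_SOCKETS_PER_SERVER
            and cores_per_socket <= MAX_CORES_PER_SOCKET)

print(fits_essentials(3, 2, 6))   # six-core Xeon 5600: True, fully covered
print(fits_essentials(3, 2, 12))  # twelve-core Opteron 6100: False, needs add-on licenses
```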

If your IT strategy involves buying servers with the best RAS capabilities out there, Intel also has an advantage. Servers based on the Xeon 7500 series have the best RAS features available in the x86 space and can also address the most memory. These servers need more power than typical x86 servers, but you can consolidate more VMs on them.

For all other cases, and that is probably 80-90% of the market, there is only one suggestion: read our comparisons in the IT section of AnandTech :-). The situation can change quickly.

Question #2 by Colin R.

How is the performance of virtualization of high throughput devices like networking and storage developing?

Answer #2 by Rich Uhlig, Intel Fellow

One trend is that new standards are being developed to make I/O devices more “virtualization friendly”. For example, the PCI-SIG has developed a specification for PCI Express devices to make their resources more easily shareable among VMs. The specification – called “Single Root I/O Virtualization” (or SR-IOV for short) – defines a way for devices to expose multiple “virtual functions” (VFs) that can be independently and directly assigned to guest OSes running within VMs, removing some of the overhead of virtualization in the process. As an example, Intel supports SR-IOV in our recent network adapters. A big challenge with direct assignment of I/O devices is that it can complicate other important virtualization capabilities like VM migration, since exposing a physical I/O resource directly to a guest OS can make it harder to detach from the resource when moving VM state to another physical platform. We’ve been working with VMM vendors to tackle these issues so that we can get the performance benefits of direct I/O assignment through SR-IOV while preserving the ability to do VM migration.
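As a rough illustration of how those virtual functions surface to software, here is a sketch for a Linux host whose NIC driver exposes the standard SR-IOV sysfs entries (the sriov_totalvfs/sriov_numvfs attributes are an assumption about your particular kernel and driver):

```python
# Minimal sketch: enumerate SR-IOV virtual functions on a Linux host.
# Assumes a kernel/driver that exposes the standard sysfs attributes.
from pathlib import Path

for dev in sorted(Path("/sys/class/net").iterdir()):
    total = dev / "device" / "sriov_totalvfs"   # VFs the device supports
    active = dev / "device" / "sriov_numvfs"    # VFs currently enabled
    if total.exists() and active.exists():
        print(f"{dev.name}: {active.read_text().strip()} of "
              f"{total.read_text().strip()} virtual functions enabled")
```

Each enabled VF then appears as its own PCI function that can be handed directly to a VM.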

Question #3 by Bill L.

Are the days of bare-metal OS installs numbered? If so, when should we expect to see ALL NEW servers ship with a hypervisor? Will hypervisors have virtual switches in them in the future, or will network and storage traffic bypass the hypervisor altogether using technologies such as SR-IOV, MR-IOV, VMDirectPath, etc.?

Answer #3 by Rich Brunner, VMware Chief Platform Architect

I do expect that at some point bare-metal hypervisor installs will reach a plateau in enterprise and service provider environments, but I do not expect embedded hypervisors to be the only alternative. There has been some industry buzz about PXE booting hypervisors (this is much more than PXE booting an installer) and a move toward a truly stateless model. I expect to see more of this; stay tuned. SMBs may still want a turn-key solution that has either a hypervisor preinstalled by the server manufacturer or an embedded hypervisor.

I do not expect that network and storage control traffic will ever "bypass" the hypervisor; the hypervisor will always be involved in ensuring QoS, ACLs, and routing for this traffic. Even for SR-IOV, a fair amount of control by the hypervisor is required to make this work. I can see the actual data traffic bypassing the hypervisor to reduce CPU overhead, provided that the hypervisor retains sufficient audit control over this data. VMware and others are working to ensure that this is the case for future SR-IOV devices.

MR-IOV can be transparent to the hypervisor on a single system instance, but the load balancing is a perfect target for control by a centralized management agent across the multiple system images that share the resource (e.g. blades in a chassis sharing a high-performance NIC that the management agent load-balances across the blades).

Comments

  • Candide08 - Thursday, July 22, 2010 - link

    Cisco UCS blade servers have (supposedly) been optimized closely with Intel, VMware and EMC. The goal is to reduce the I/O "performance hit" and the network performance hit by virtualizing an entire router and virtualizing the I/O controller to an EMC SAN.

    Can you please comment on a performance comparison between a "standard" VMware ESX implementation and Cisco UCS?
  • DukeN - Thursday, July 22, 2010 - link

    What is VMware's official stance regarding MS SQL and Exchange deployments in a virtualized environment - and are we able to find detailed documentation on these two platforms being run on VMware?

    Also, it seems Windows Terminal Services could greatly benefit from VMware, but in the past we hit some performance snags (the only application that did not work out for us in the hypervisor world).

    Has this improved in recent years, and does VMware officially endorse such usage?

    Thanks!
  • Stuka87 - Thursday, July 22, 2010 - link

    We run fairly large databases (MS SQL) in VMware (Enterprise Plus), and overall the performance is decent. But it is a big hit compared to running on bare hardware. We did some tests with a Dell R910 with Xeon 7550s, and they showed how much slower the database is in VMware. The actual percentage varied per test, and we ran up against some limits of VMware itself when the database got to a certain size. But for our application, it still works out better for us to use VMs than dedicated servers.

    Can't comment on using terminal services from inside a VM though.
  • solgae1784 - Thursday, July 22, 2010 - link

    Have you made sure your storage is configured correctly, and that you are doing an apples-to-apples comparison (e.g. number of CPUs, memory, storage, OS architecture, etc.)? With databases, storage configuration can make quite a difference. One common mistake made when virtualizing is storing all your database/log disks on a single RAIDed disk array (a VMFS datastore in VMware terms) and ending up saturating the I/O capability of that array - notably its IOps capability. Make sure to follow the best practices outlined by Microsoft, such as storing database and log on separate disk arrays (in this case, separate datastores) and using the correct type of RAID - usually RAID 5 for database and RAID 10 for log. And make sure your disk array is capable of handling all the IOps required by your database (a rough sizing sketch follows at the end of this comment).

    More memory assigned to the VM also helps, since it can reduce I/O to the disk. Just make sure you configure SQL correctly to take advantage of it.
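    A quick back-of-the-envelope sketch of the IOps math, using the commonly quoted RAID write penalties (2 for RAID 10, 4 for RAID 5); the workload mix and per-spindle figure are illustrative assumptions, so measure your own workload before sizing anything:

    ```python
    # Rough sketch: translate front-end IOps into the IOps the spindles must
    # deliver, using commonly quoted RAID write penalties. The numbers are
    # illustrative assumptions, not measurements.
    WRITE_PENALTY = {"RAID10": 2, "RAID5": 4}

    def backend_iops(read_iops, write_iops, raid):
        return read_iops + write_iops * WRITE_PENALTY[raid]

    # Example: 1,000 front-end IOps at a 60/40 read/write mix.
    for raid in ("RAID10", "RAID5"):
        need = backend_iops(600, 400, raid)
        spindles = need / 180  # assume ~180 IOps per 15K RPM spindle
        print(f"{raid}: {need:.0f} backend IOps, ~{spindles:.0f} spindles")
    ```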
  • solgae1784 - Thursday, July 22, 2010 - link

    @DukeN - Some of the bigger databases may or may not have trouble in a virtual machine, but you need to make sure to follow the best practices guidelines for SQL. With the correct configuration, you should be able to get close to native performance. Just keep in mind that a VM will almost never outperform a physical machine configured the same way (it can happen, but it is very rare). The question is whether the trade-off is worth it - VMs have many advantages over physical machines, like the high availability offered by VMware HA, the maintenance flexibility offered by features like VMware VMotion, and hardware independence.
  • DukeN - Friday, July 23, 2010 - link

    Well, the really interesting thing is that for Exchange 2007, ESX performance seemed to exceed native for a couple of measurements.

    I was wondering if there is any new research or guidance for Exchange 2010, given the massive increase in virtualization's popularity between the two releases.

    SQL seems to be a highly variable product, however, and TS seems even more unpredictable.
  • solgae1784 - Friday, July 23, 2010 - link

    With TS, one of the studies published by the Virtual Reality Check team indicates that a lot of "small" VMs (i.e. scale out) can perform just as well as or even outperform a single big physical server (i.e. scale up) when hosting terminal services. For example, a group of 4 VM servers with 2 vCPUs each can serve just as many or more users than a single physical server with 8 CPUs. In general, the scale-up approach is NOT encouraged in a virtual environment due to the additional overhead generated with multiple vCPUs compared to the physical counterpart.

    The scale-out approach applies to SQL and web servers as well, especially web servers, because they don't scale up very well: http://vpivot.com/2010/03/22/optimal-web-servers-v...

    If you're worried about licensing, at least Microsoft has changed its licensing terms to be more virtualization friendly. For SQL Server Enterprise, for instance, you only pay for the physical sockets of your host server, with the ability to host an unlimited number of SQL instance VMs. The same goes for Windows Server Datacenter edition - you only pay for physical sockets in the host server and can host an unlimited number of any edition (e.g. Standard, Enterprise) of Windows Server VMs. So licensing savings can be another big incentive to virtualize your systems.
  • spddemon - Friday, July 23, 2010 - link

    Very good point, solgae1784,

    The storage configuration is just as important as the CPU/RAM configuration. If your goal is to virtualize large SQL databases you are going to need a lot of spindles and either 10GbE or 4Gb Fibre Channel (8Gb would be better). You also need to make sure you have installed the necessary multipathing drivers or utility kits to ensure the host and SAN controllers can communicate. Often poor performance is blamed on the physical resources when that just isn't the problem; it is something far simpler, like a multipathing conflict, being spindle bound, improper VMFS partition alignment, or just a lot of fragmentation.

    With all the technology we have today, many people believe everything works out of the box. Unfortunately, with enterprise solutions that is rarely the case.

    I really wish VMware would make the CPU wait and I/O wait performance indicators more easily available. These metrics are valuable for finding choke points!
  • Zibi - Tuesday, July 27, 2010 - link

    Hello everyone

    Sorry for digging this up, but I've just stumbled on this and, being fresh on the subject, I'd like to add my $0.02.

    I've been comparing VMs to physical machines in apples-to-apples tests for a couple of weeks now.
    I'm using 2 IBM x3650 M2s with a pair of Xeon E5540s each.
    Our test DB is small (4GB) but the queries running on it are very CPU intensive (more OLAP than OLTP).
    First thing - MS SQL Enterprise does not seem to be using all 8 vCPUs. Results for 4 vCPU and 8 vCPU VMs are the same.
    Second thing - in our case a 4 vCPU VM compared to a physical machine throttled to 4 cores is more than 15% slower. From my digging through Perfmon data it looks like the main suspect is the number of context switches per second (a quick way to sample this is sketched at the end of this post).
    For example, the VM hit a max of 73,000 CS/s versus 95,000 CS/s on the physical machine.
    I've performed some tests on our earlier machines and the differences were even bigger.

    Considering that CS/s matter even more in a TS environment, I'd not virtualize our Citrix farm without upgrading to the newest machines possible and adding another 10% of headroom just to be sure.
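    For reference, a minimal way to sample CS/s from inside the guest, assuming the psutil package rather than Perfmon:

    ```python
    # Minimal sketch: approximate the Perfmon counter
    # "\System\Context Switches/sec" by sampling psutil's cumulative count.
    import time
    import psutil

    before = psutil.cpu_stats().ctx_switches  # cumulative since boot
    time.sleep(1)
    after = psutil.cpu_stats().ctx_switches
    print(f"context switches/sec: {after - before}")
    ```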
  • duploxxx - Wednesday, July 28, 2010 - link

    What is your storage design/configuration in VMware, and did you use the same storage specs for physical vs. virtual?
