
The Code Project infrastructure: Virtualization basics

20 Apr 2012 · CPOL
A continuation of our series on The Code Project infrastructure

Virtualization Basics

Everyone has heard the term “virtualization”. The basic premise, for most admins, is that you take a bunch of servers and run them all off a single piece of hardware. What are the benefits of this? Why not just leave everything running on its own machine, since hardware is so cheap? The reality is that the cheaper hardware gets, the more money you can save by virtualizing. With the speed of the new hardware coming out, a good portion of your servers will rarely, if ever, make full use of even a single core, let alone the multicore CPUs you have available. Disks keep getting faster, RAM is faster, and CPUs are bigger and faster.

I would equate this to owning a big fleet of school buses and only picking up one kid at a time. It’s time to pull that big bad yellow diesel sucker over to the side of the road, open the doors, and take everyone for a ride. When you do that, you end up with a bunch of spare school buses that you can play around with… oh, I dunno, throw on some monster truck tires, a supercharger, nitrous tanks, and have it shoot flames… In computer terms, you will hopefully end up with a bunch of unneeded servers doing not much more than holding up that wobbly table in the lunch room… can anyone say Quake server?

In this article I hope to cover the basic terms and concepts so we are all on the same track, then walk you through the process we took and the deployment we have running here at The Code Project. There are many terms I won’t include here, as the concepts behind them could warrant a full article of their own. The following should be enough to get you up and running.

The Basic Terminology

Virtual Host
This is a reference to the physical server that is running everything for you. This is generally a big beefy multi core machine with lots of memory and access to oodles of disk space.
Virtual Guest
This is the actual OS install; previously this would have been a single installation of Windows XP, Windows 2008 R2, Ubuntu, Red Hat, etc. on a single physical machine. It is often referred to as a VM, or a Virtual Machine.
Hypervisor
This is the software that runs on the Virtual Host and allows you to install more than one Virtual Guest on that machine. E.g. VMware Workstation is a software package installed on an already-running OS such as Ubuntu; it lets you run, for example, Windows from within your Ubuntu install.
Bare Metal Hypervisor
This is essentially the same as the Hypervisor term above, and the two are often interchanged; however, this specific term means that the Hypervisor ships with its own customized Operating System along with the software. VMware ESXi, Citrix XenServer, and Microsoft Hyper-V are three such examples of Hypervisors. ESXi and XenServer run off a customized Linux kernel, and Hyper-V runs off a customized Windows Server install. I typically just call it a Hypervisor.
Hardware Assist
AMD and Intel both have their own versions of this: AMD-V and Intel VT. You will need to enable it in your BIOS before installing your Hypervisor; some Hypervisors will not work without some form of hardware assist in place. The motherboard and CPU determine whether hardware assist is available.
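On Linux, you can check for hardware assist before you even reboot into the BIOS, since the kernel reports the CPU feature flags in /proc/cpuinfo. The flag names ("vmx" for Intel VT-x, "svm" for AMD-V) are the real kernel-reported names; the helper function itself is just an illustrative sketch.

```python
def has_hardware_assist(cpuinfo_text: str) -> bool:
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm).

    Expects the text of /proc/cpuinfo; scans the first "flags" line.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False
```

To use it on a real machine you would pass in `open("/proc/cpuinfo").read()`. Note that a missing flag can also mean the feature exists but is disabled in the BIOS, so the BIOS check is still worth doing.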
Virtual Disk
You can think of a Virtual Disk as you would a hard drive in a normal computer or server. You create a Virtual Disk on the Virtual Host and then assign it to a Virtual Guest; inside the Guest you then create your partitions (e.g. your C: drive). The Virtual Disk is typically stored by the Hypervisor as a single file on local or shared storage accessible to the Virtual Host. Another way to think of it: if you took your entire C: drive and put it in a Zip file, that is what it would look like on the Virtual Host, just a single large file. (Typically it includes the white space, though many Hypervisors also support dynamic Virtual Disks that grow in size as your Virtual Guest consumes space.) There are pluses and minuses to each approach.
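The thick-versus-dynamic trade-off above boils down to simple arithmetic about what the backing file costs you on the Host's storage. This little sketch (names and numbers are made up for illustration, not tied to any particular hypervisor) shows the difference:

```python
def host_file_size_gb(disk_size_gb: float, used_gb: float, thin: bool) -> float:
    """Approximate size of the Virtual Disk's backing file on the Host.

    Thick: the file is pre-allocated at the full disk size, white space
    and all (like zipping your whole C: drive including the empty half).
    Thin (dynamic): the file only grows as the Guest actually writes data.
    """
    if used_gb > disk_size_gb:
        raise ValueError("a guest cannot use more than the disk size")
    return disk_size_gb if not thin else used_gb
```

So a 100 GB disk with 20 GB actually in use costs 100 GB of Host storage thick-provisioned, but only about 20 GB thin-provisioned. The minus side of thin: you can promise Guests more space than the Host really has, which hurts when they all try to collect.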
vCPU
Similar to a Virtual Disk, you need to assign vCPUs to your Virtual Guests. These can be mapped on a 1-to-1 basis to actual physical cores, or they can be shared. You can even over-allocate vCPUs compared to the number of actual physical cores. Again, there are pluses and minuses to this approach.
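Over-allocation is usually discussed as a ratio of assigned vCPUs to physical cores. A tiny sketch of that bookkeeping (the guest names and counts here are invented for illustration):

```python
def overcommit_ratio(guest_vcpus: dict, physical_cores: int) -> float:
    """Ratio of total assigned vCPUs to physical cores on the Host.

    A ratio above 1.0 means the Host is overcommitted: the Hypervisor
    time-slices the physical cores amongst the Guests' vCPUs.
    """
    return sum(guest_vcpus.values()) / physical_cores
```

For example, three Guests assigned 4, 8, and 1 vCPUs on a 12-core Host gives a ratio of about 1.08, a mild overcommit that is usually harmless as long as the Guests are not all CPU-hungry at once.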
Virtual Switch
If you’re following along, you can probably guess at this one. This is a software-based switch or network that you create on your Virtual Host so that your Virtual Guests can talk to each other and get out to the world. These are typically mapped to a physical network card. Depending on which Hypervisor you use, this is deployed differently; I’ll go into more detail in a later article covering an actual deployment.
Snapshot
A Snapshot is like your own personal time-travelling phone booth fresh out of Bill and Ted’s Excellent Adventure. All Hypervisors will allow you to take a Snapshot, and it is just what it sounds like: a point-in-time marker that you can revert to. E.g. you have your production web server in place, you take a Snapshot, and you install a software update. You realize the update was bad, and faster than you can pull out your air guitar and say “Excellent!” you can revert back to the point in time when you created that Snapshot. In the different Hypervisors these Snapshots are also used for other things like backups and high availability.
P2V (Physical to Virtual)
This is a conversion process that converts a physical machine to a Virtual Machine. I don’t particularly like to use it unless it’s a last resort, because the conversion often fails and I would rather have a nice clean install. However, there are times when this can save a whole lot of work.
Virtual Appliance
You will see companies make these available for testing or purchase. More common amongst the Linux community, a Virtual Appliance is a preconfigured OS and software installation. You just install the Virtual Appliance and you’re off and running with whatever piece of software and OS the vendor has chosen.

Well, that should get you up and running as far as terminology goes. It’s time to go into a few of the more common deployment strategies.

The first is the Desktop or Local deployment. This is particularly handy if you have a big, fast desktop machine and you want to run a different, or more than one, operating system on your machine without affecting your existing installation. For example, if you are a developer and you want a web server and a SQL server at your fingertips, but you don’t want to use (or don’t have access to) a separate machine for this, you can install a Hypervisor on your machine and run multiple Virtual Guests. Combined with Snapshots, this is a very powerful option for developers. Sysadmins can also benefit by being able to try out different OSs without messing up their machines. I tend to use this to run Windows, or to demo software I am not entirely confident is safe.

Another common use of virtualization is to virtualize desktops for people. I have used this in the past for people who are mobile and need access to local resources or software, when we don’t want to trust their home PCs or laptops on our network. Devs will sometimes use this where the software is fairly pricey and can then be shared amongst many people, but I have typically seen it in the first instance, where remote people need a desktop.

However, the reason we are here is to virtualize servers, so I will get into that now. I will lump these into three categories or types of deployments: 1 to 1, Local, and Shared.

1 to 1
This is when you virtualize a physical machine and it’s the only Guest on the Host. Now why would you do this? Perhaps you have an older, slower physical machine but you want to take advantage of the Snapshots, portability, and ease of backing up a full machine. I have done this a number of times for servers that are very complicated to set up, with many software bits that I would dread to reinstall after a hardware failure.
Local
This is when you have a good amount of local storage, CPU, and memory on a single physical machine and you run a number of Virtual Guests on it. How many Guests you can run on a single machine depends on your hardware and the needs of your Virtual Guests. This is typically the first step in virtualizing your environment. There are a number of calculators out there, but as long as your servers are not particularly high in CPU usage, you can typically run one Virtual Guest for each physical CPU core you have on your Virtual Host. You should also have enough RAM to allocate it on a 1-to-1 basis to each of the Guests you have. Of course, depending on the hardware you currently have, you may notice performance issues at this ratio.
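The rule of thumb above (roughly one Guest per physical core, RAM allocated 1-to-1) can be sketched as a tiny capacity calculator. The example figures are illustrative, not a recommendation:

```python
def max_guests(physical_cores: int, host_ram_gb: int, ram_per_guest_gb: int) -> int:
    """Rough upper bound on Guests for a Local deployment.

    One Guest per physical core, RAM allocated 1-to-1: the limit is
    whichever resource runs out first.
    """
    return min(physical_cores, host_ram_gb // ram_per_guest_gb)
```

For instance, a 12-core Host with 28 GB of RAM handing each Guest 4 GB tops out at 7 Guests: RAM runs out before cores do. Real calculators weigh in CPU load, storage IOPS, and headroom for the Hypervisor itself, so treat this as a first-pass estimate only.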
Shared
This is what you do when you have access to some sort of shared storage, like a SAN, and you have more than one physical Virtual Host. This deployment opens up a whole new level of possibilities with high availability: most Hypervisors will allow you to live-migrate a running Virtual Guest from one Virtual Host to another with little or no perceived downtime. Although more expensive than the first two options, once you get into running more than a single Virtual Host you can save money by not having to buy extra disks for overhead.

At The Code Project we started off with a single local installation of Hyper-V. This quickly got nuked, as there were a number of things we couldn’t get working the way we wanted; Windows Network Load Balancing in particular was very difficult to get working with the other web servers that were not virtualized. We then moved everything over to ESXi. The single machine we were using was a dual six-core AMD with 28 GB of RAM.

We had initially planned to run one of each of our web server clusters off of it, as well as some of our other servers like DNS. While our LakeQuincy web servers seemed happy to be virtualized, we encountered performance issues with our CodeProject web servers. Although equal in specs to a physical server, we just couldn’t get the performance out of them. We tried a number of different things to narrow down the problem, and even tried a setup with two identical machines: one installed as a normal server with Windows directly on it, and one with ESXi and Windows running on top. The virtualized cousin served pages about 20 to 30 percent slower under equal load. We tried this under both ESXi and XenServer, and both were similar in terms of performance. Given enough time I’m sure we could have found the issues, but as we continued on our virtualization path we ended up with a number of spare physical machines that made good web servers, so we didn’t bother pursuing the performance issues further.

It was about this time that we bought a SAN for our SQL servers so that we could cluster them. This freed up one of our SQL servers, as we were able to consolidate the SQL databases onto the newer hardware. We reused that old SQL server (a quad four-core with 32 GB of RAM), which gave us 16 cores and 32 GB of RAM for more virtualization. Woo hoo! I set this one up under XenServer, the primary reason being that its free version offered the ability to run 8 vCPUs, versus just 4 vCPUs under the free version of ESXi. As well, I liked the free offerings for shared setups better under XenServer than under VMware.

We once again tried the CodeProject web servers and were still unable to get the performance we wanted, even with 8 vCPUs. During this transition we made a number of hard disk swaps, trying to get all the disks we could onto the SAN. We settled on 12x148 GB 15k disks (RAID 10) to run the SQL cluster, and 8x300 GB 10k disks (RAID 5, mostly to maximize space) to house the Virtual Machines.
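The usable-capacity math behind those two array choices is worth spelling out, since it is the whole reason RAID 5 "maximizes space". A quick sketch using the disk counts and sizes quoted above:

```python
def raid10_usable_gb(disks: int, disk_size_gb: int) -> int:
    """RAID 10 mirrors every disk, so half the raw capacity is usable."""
    return disks * disk_size_gb // 2

def raid5_usable_gb(disks: int, disk_size_gb: int) -> int:
    """RAID 5 spends one disk's worth of capacity on parity."""
    return (disks - 1) * disk_size_gb
```

So the 12x148 GB RAID 10 array yields roughly 888 GB of fast, write-friendly space for SQL, while the 8x300 GB RAID 5 array yields roughly 2100 GB for the Virtual Machines, at the cost of slower writes and a longer rebuild window when a disk fails.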

For those interested, the SAN we bought was an HP P2000. It has 24x2.5” slots with eight 6 Gbps external SAS ports across two controllers. This has proved to be a powerful setup for us and has been very reliable. We have had a number of hard drives fail, but I attribute this to the fact that we didn’t buy a single new disk: all the disks in use were pulled from service on other machines, including servers that had failed due to overheating at our previous hosting centre.

We connected the newest VM host to the SAN and created the first node in the VM pool. I then converted all of the existing Virtual Guests from ESXi to XenServer. Once they were all converted, I rebuilt the original virtual server with XenServer and connected it to the SAN, then moved a few of the VMs around to balance the load. Over the last few months we have added more and more VMs, and we are pretty much at capacity both in terms of available RAM and disk space on the SAN.

The next few articles will cover the actual setup of XenServer, connecting it to a SAN, and creating Virtual Guests.


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
