Archive for April, 2011

Citrix XenConvert

During the course of our recent migration of some servers from physical to virtual, we needed to migrate our existing Microsoft Exchange server. As I’ve mentioned before, this was an ancient Pentium D-era box that was really struggling. To help with the move, Citrix offers a free tool called XenConvert that can convert most Windows-based computers to a virtual machine on the fly.

Before moving the Exchange server, we had to have a trial run. I installed XenConvert on one of the staff room computers, a nice Core i3 machine with 4GB of RAM. XenConvert installs easily enough, and the process to migrate the host isn’t difficult: it’s really a case of a couple of clicks, combined with some thought on how you want the hard drive sized. With all of those choices made, I clicked continue and waited for the process to finish. The host was only on a 100Mb/s network connection, but the overall hard drive size was quite small.

That being said, the process took hours. XenConvert needs a working area to store its temporary file, which in our case was a mapped drive off our main server. Once the conversion is done, XenConvert then feeds that file back through the host and into your XenServer. This second pass also takes hours, with the network card pretty much maxed out the whole time. Despite all the waiting, the host converted successfully, which gave us the confidence to tackle the Exchange server. That’s when the nightmare began.
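
To put some numbers to that double handling: the data effectively crosses the wire twice, once out to the temporary file and once back into XenServer, so the transfer time roughly doubles. Here is a back-of-the-envelope sketch; the disk size and link efficiency figures are illustrative assumptions, not measurements from our actual machines:

```python
# Rough estimate of a two-pass XenConvert run.
# The disk size and efficiency below are assumptions for illustration.

def transfer_hours(size_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move size_gb over a link_mbps link at the given efficiency."""
    size_megabits = size_gb * 1024 * 8      # GB -> megabits (rough conversion)
    effective_mbps = link_mbps * efficiency # real throughput is never line rate
    return size_megabits / effective_mbps / 3600

disk_gb = 120   # assumed used space on the host
link = 100     # 100Mb/s network, as on our trial host

# Pass 1: host -> temporary file on the mapped drive.
# Pass 2: temporary file -> XenServer, back over the same wire.
total = transfer_hours(disk_gb, link) * 2
print(f"Estimated conversion time: {total:.1f} hours")
```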

On the last day of the first term, we shut down the Exchange services mid-afternoon so that no one could access their mail and interrupt the process. All the relevant boxes were ticked, the temporary drive mapped and so forth. It was then just a case of waiting. Unfortunately, the process was so excruciatingly slow that it made the Core i3 conversion seem like a speed demon. Granted, we were on a much slower box with more space used, but we did have gigabit Ethernet to make up for that. No such luck. The process ran from about 14:30 through till 23:30 before it even started the import procedure. By that time my colleague and I had long since gone home, so monitoring the procedure was not simple. A few hours later, the server crashed due to the extremely high load XenConvert was placing on it during the import. While nothing was damaged, the downside was that the Exchange services had restarted, allowing new mail in.

With heavy hearts, we had to start the procedure again. We had promised that no mail would be lost, so our hands were tied. I restarted the process over that weekend, but this time I chose not to import the host into Xen automatically after conversion; we would rather do that ourselves while we were on site. This second attempt failed partway through the conversion, for reasons we never pinned down.

For our third attempt, we decided to create an OVF package rather than have any import take place, as we could import the OVF package into Xen ourselves. Once again, the procedure took hours, but this time it finally succeeded. With the massive package sitting on our main server, my colleague started importing the OVF package into Xen. It was a harrowing wait, as the procedure carried a warning that it was experimental code! About 4 hours after he started, I remoted into work and breathed a sigh of relief that the import was done. My colleague had already shut down the physical Exchange server, so with a deep breath I started up the VM.
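
Given how long each attempt took, it would have been worth sanity-checking the package before kicking off an experimental import. OVF packages ship with a .mf manifest listing a SHA1 digest for each file, so something like the following could confirm nothing was corrupted in transit. This is just a hedged sketch with a hypothetical path; it is not part of XenConvert or XenCenter:

```python
# Verify an OVF package's manifest before a long import.
# Assumes an OVF 1.x-style .mf manifest with lines like:
#   SHA1(disk1.vmdk)= 3a5f...

import hashlib
import os
import re

def verify_ovf_manifest(manifest_path: str) -> bool:
    base = os.path.dirname(manifest_path)
    pattern = re.compile(r"^SHA1\((?P<name>[^)]+)\)\s*=\s*(?P<digest>[0-9a-fA-F]+)")
    ok = True
    with open(manifest_path) as mf:
        for line in mf:
            match = pattern.match(line.strip())
            if not match:
                continue
            sha1 = hashlib.sha1()
            # Hash the referenced file in 1MB chunks to keep memory flat.
            with open(os.path.join(base, match.group("name")), "rb") as f:
                for chunk in iter(lambda: f.read(1024 * 1024), b""):
                    sha1.update(chunk)
            if sha1.hexdigest().lower() != match.group("digest").lower():
                print(f"MISMATCH: {match.group('name')}")
                ok = False
    return ok

# Hypothetical path to the package on the main server's share.
if verify_ovf_manifest(r"\\server\share\exchange\exchange.mf"):
    print("All checksums match; safe to start the import.")
```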

Much to my relief, it worked. From there it was just a matter of letting the VM pick up new drivers, installing the Xen tools for proper storage and network card drivers, and reactivating Windows. After all that and numerous reboots, I left the server to settle overnight. It ran beautifully, and has continued to do so for almost 3 weeks now.

The overall impression I took away from this experience is that doing a physical to virtual migration on any host is a delicate, time-consuming procedure. The more data on the host, the longer it’s going to take. It’s also going to be problematic on older hardware that cannot handle the sustained high load of a straight import into Xen. Definitely a learning experience, but not something I want to repeat any time soon.

South Africa becoming awash in bandwidth

In the past, South Africa’s internet connectivity was limited to the few undersea cables landing on our shores: SAT2, SAT3 and SAFE. As time went on, newer cables were discussed, but little seemed to actually happen. The people behind the SEACOM cable were the first to get the ball rolling, getting their cable up and running during 2009. They brought massive capacity and price drops to the table. Other cables had been planned and spoken of, but SEACOM was the first to arrive.

Being first, SEACOM has had the most impact to date. It took a while to filter down, but suddenly “uncapped” ADSL started to show up all over the show. Price wars started and the overall cost per GB of bandwidth fell. Our school is a perfect example: we went from 512Kb/s uncapped to 1Mb/s, then 2Mb/s and now 3Mb/s, for less than what the 512Kb/s originally cost. The only downside to SEACOM is that it has had a number of outages and reliability issues, which makes things a bit painful when such an event occurs.

In the course of last year, the EASSy cable landed as well, bringing yet more capacity. However, this cable has not quite brought about the same revolution as SEACOM, as I am not aware of any ISPs making use of it in South Africa yet. The old SAT3 cable also got upgraded, but prices on it remain very high in comparison.

This past week, the WACS cable landed in South Africa, promising to bring an absolute glut of bandwidth to the country. Though the cable has landed, it will only go into operation next year sometime. Here is just one of many articles about the landing: http://mybroadband.co.za/news/telecoms/19792-WACS-lands-South-Africa.html

From what I’ve read, there are more cables on the way as well, not to mention others still being planned. While this is fantastic in and of itself, the problem now lies within South Africa’s domestic telecoms network. Fibre is being laid at an astonishing rate, but it never seems to be enough. Business customers are reaping the rewards of all this growth, but home users are still stuck, thanks to Telkom.

We are in a much better situation than we have ever been before, but we still have a way to go until we can truly take advantage of all the bandwidth available to us. Things like video conferencing, video streaming and cloud services will only make an impact once broadband truly becomes cheap, reliable and easily accessible.

Run WSUS with SQL Server

We’ve been using Microsoft’s free WSUS solution at work for almost 2 years now. It provides a wonderful way of updating all our computers whilst only needing to download each update once. In the past, we ran it off our main server, which also happened to be our domain controller, DNS, DHCP, file server and so on. At the time, I set up WSUS to use the Windows Internal Database, as we were in a hurry and didn’t want to tax that server with SQL Server on top of everything else. The server is more than capable otherwise, but it is constrained by having only 5GB of RAM.

While the configuration worked, one thing became clear quite soon: the WSUS console was painfully slow to use, locally or remotely. Updates to the client computers ran very well and at full speed, as these were simply being pushed from a file share over HTTP. The console, however, relies on a database to keep track of all the updates, client computers, update status and metadata, and it was being severely hampered by the sluggish Windows Internal Database.

When we set up our virtual server a few weeks ago, one of the first servers we created was a new Windows 2008 R2 server to act as both a WSUS and NOD32 anti-virus server. With decent resources available, I installed Microsoft’s free SQL Server 2008 R2 Express onto the new server. The Express edition may lack many features compared to the full-blown SQL Server, but it’s free and is absolutely perfect for the job of handling WSUS in a small to medium-sized environment.
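
As an aside, if you ever need to confirm which database instance an existing WSUS installation is pointing at, WSUS records it in the registry. A minimal sketch, assuming WSUS 3.x keeps the value at the path below; treat the key path as an assumption and double-check on your own server:

```python
# Quick sanity check (Windows-only): which SQL instance is WSUS using?
# The registry path is an assumption based on WSUS 3.x; verify locally.

import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Update Services\Server\Setup"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    server, _ = winreg.QueryValueEx(key, "SqlServerName")
    print(f"WSUS database instance: {server}")
    # The Windows Internal Database tends to show up as something like
    # MICROSOFT##SSEE, while SQL Server Express appears as HOST\SQLEXPRESS.
```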

When we set up WSUS on the new server, I noticed a big performance improvement in the console, both locally and from my computer for remote administration. The WSUS database itself is currently about 1.2GB in size, which is pretty big I guess. The updates themselves now total close to 29GB. Luckily, we were smart and first mirrored our old WSUS server so that we could reuse all the updates without needing to download them again from scratch.
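
To keep an eye on how the database grows over time, you can ask SQL Server itself for the SUSDB file sizes. A small sketch using the third-party pyodbc module; the instance name is an assumption, so adjust it to your setup:

```python
# Report the on-disk size of the WSUS database (SUSDB) files.
# Assumes a local SQL Server Express instance named SQLEXPRESS.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=.\\SQLEXPRESS;"
    "DATABASE=master;Trusted_Connection=yes"
)

# sys.master_files reports size in 8KB pages; convert to MB.
rows = conn.execute(
    "SELECT name, size * 8 / 1024 AS size_mb "
    "FROM sys.master_files WHERE database_id = DB_ID('SUSDB')"
).fetchall()

for name, size_mb in rows:
    print(f"{name}: {size_mb} MB")
```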

In short, I highly recommend running WSUS in combination with SQL Server for vastly improved performance in the management console. Apart from it being one extra program to install on your server, I cannot think of a reason to run WSUS on the Windows Internal Database when SQL Server Express is available for free. The performance increase makes it worthwhile in every way.

Adventures in virtualization

A few weeks back our school took ownership of a pretty powerful new server. We had budgeted for it last year, with the intent of getting it in the first term this year. After some issues with our supplier struggling to source all the quoted parts, we eventually got the machine fully working. It consists of two Intel Xeon 2.4GHz chips, 48GB of RAM and 12TB of storage on a dedicated Intel RAID controller. All of this is mounted in an Intel Pilot Point chassis, and a rack-mount kit was purchased to complete the conversion.

The idea behind getting such a big server was to virtualize a number of secondary servers we run for services such as FOG, BackupPC and FreeNAS, while also hosting our Exchange 2007 server. The existing Exchange server is a Pentium D-era piece of kit, and while it has run very stably over the last 15 months, performance is terrible. The server takes forever to shut down or restart, and its limited RAM hampers performance when lots of clients are connected at the same time.

We were left with three software choices for the host system: Citrix XenServer, VMware ESXi and Microsoft’s Hyper-V implementation. All three are free, but they differ in performance, supported guest operating systems and management capabilities. XenServer has a wonderful management console, but is pretty limited in which guest OSes it fully supports. While you can run just about anything, there’s no guarantee that the XenServer tools will run on the system, or how good performance will be without a Xen-aware kernel. My colleague and I spent two or three days fighting to get X running on a Debian guest, to no avail. We had similar issues with Ubuntu. Most people would not run X on a server anyway, but in our case we need it for one application to configure internet sharing.
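
For what it’s worth, one quick way to tell whether a Linux guest’s kernel even sees the Xen hypervisor is to check the sysfs node it exposes. A minimal sketch, assuming a standard sysfs layout, which may vary by distribution and kernel:

```python
# Check whether this Linux guest's kernel detects the Xen hypervisor,
# a rough indicator that paravirtualized drivers have a chance of working.

import os

def hypervisor_type() -> str:
    path = "/sys/hypervisor/type"
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return "none detected"

print(f"Hypervisor: {hypervisor_type()}")
# On a guest with a Xen-aware kernel this prints "xen"; on bare metal
# (or a kernel built without Xen support) the sysfs node is absent.
```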

Hyper-V looked like a really interesting product, but we felt that the bundled management tools were less capable than XenServer’s. I know that you can get far better management tools from Microsoft, but you need to purchase them, something our school doesn’t have the budget for. Since they are not part of the Microsoft Schools Agreement, we can’t get hold of the software that way either. That being said, I still want to look into this system further at some point, and perhaps get a trial of the advanced management tools available on Microsoft’s website.

Last but not least, we looked into VMware. It offers the widest range of supported guest OSes by far and has plenty of other compelling features. That being said, I found the Windows management console to be a slow affair, lagging when switching tabs. There’s also a storage limit in the product: you can’t have a local datastore larger than 2TB. I’m sure there are ways to work around this, but we were running out of time to experiment, and we decided we did not want to end up breaking up our 9TB RAID array just for VMware. I know VMware is a great product, but I feel it comes with a pretty steep learning curve for newcomers.

In the end, we went back to XenServer, having made the decision to switch to CentOS for most of our Linux needs. Ubuntu is currently in experimental status in XenServer, and I hope that in time it becomes a supported platform.

On Monday the server will be moved into our rack, and additional VMs will populate it. I’m excited to say the least, as it’s going to be an interesting learning curve managing so many VMs.