The school I work at is 57 years old this year. This means that over the life of the school buildings, lots and lots of cables have been installed for various systems. Discounting electrical cable, there are cables for the computer network, the burglar alarm system, the classroom and corridor intercom system, the old analogue and later PABX phone systems and even some analogue CCTV cables lying around somewhere in the roof.
After a recent venue reshuffle, my colleague and I have had the fairly rare chance to really go wild and rip out as much of the useless cable as we can find in the areas that were reshuffled. We decided not to just cut off the cable at the most convenient spot, but to follow it as much as is practically possible all the way back to its source. This method takes a lot more time but is a much more thorough cleansing than if we just snipped when it vanished from view.
As we’ve followed the cables and opened trunking, it’s become abundantly clear that cable was never removed in the past. The phone system in particular is a good example. From what I can figure out, and from talking to long-time members of staff, it appears we once had an analogue phone system from our national phone company. There were no internal extensions, only direct lines all over the show. This meant that the phone company ran a lot of thick multi-core cabling throughout the buildings before using only one or two pairs for the end jacks. We’ve discovered that many of these thick multi-core cables simply taper off to dead ends.
When the school got a Samsung analogue/digital PABX, the telecoms company that installed it simply ran their floor cables everywhere without removing the older cables first. Hundreds of metres of cable were run to support the Samsung PABX. Cables were often glued to walls, door frames or wherever else was convenient, leading to a very messy appearance.
Two years ago, we moved to an Avaya VoIP system, which runs on the existing network cables. Although the telecoms company responsible for that system did rip out quite a bit, they didn’t bother going after the floor cables or anything like that. So for the last two years, we’ve been sitting with a lot of dead cable all over the show. When we eventually get to ripping it out, we end up with a pile like this:
That whole pile is either multi-core cable from the original phone system, or from the Samsung PABX. We had two other piles of a similar size when we cleaned up other venues, though those did include network cables and some power cable as well.
With the removal of so much cable, it becomes possible to install smaller and more compact trunking, which not only looks neater but also makes cable management easier. Getting cables to stay in place while you hammer the cover back onto a 100 mm × 40 mm piece of trunking is not an easy feat.
When I head back to work after my break, we’ll no doubt continue the hunt for cables and rip out as much of the dead stuff as we can find. I just wish it were easier to recycle these cables than it currently is.
Generally speaking, the Windows Update mechanism just works. Updates are downloaded and installed without much fuss. In the home environment, it’s pretty rare for things to go wrong, since computers at home are less likely to come under heavy use and abuse. Most of the time at work I have no issues with Windows Update either, despite the pounding the computers take in the school environment.
Sometimes, however, Windows Update breaks. Malware infection, powering a computer off during the update process and hard disk corruption are some of the most likely culprits. In the last week at work, I’ve found myself fixing a few computers that have developed faulty update mechanisms.
To fix the problem, there are two tools I’ve used. System File Checker is built into Windows, while the System Update Readiness (SUR) tool can be downloaded from Microsoft’s website. The first port of call is simply to run sfc /scannow from an elevated command prompt and let it scan the system. I’ve found this fixes some problems on its own, and it serves as a good stepping stone for step 2.
Step 2 involves running the SUR tool. The SUR tool looks like a stand-alone Windows Update, though it is actually scanning your computer’s Component-Based Servicing (CBS) store for corruption and either fixing the issues, or logging the issues it can’t fix to a very useful log file. Depending on the speed of your computer and the number of faults, the process could take up to 20 minutes to complete.
If the computer still refuses to install updates after step 2, it’s time to check the SUR log to find out exactly what is wrong. Open C:\Windows\Logs\CBS\CheckSUR.log to see which files or packages are causing the problem.
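If you’d rather not eyeball the whole log, the failure lines can be filtered out programmatically. This is just a sketch of my own (not part of the SUR tool), assuming the log convention I saw, where unfixed problems are prefixed with (f) and items the tool repaired itself with (fix):

```python
# Sketch: pull the failure entries out of a CheckSUR.log.
# Assumes the log marks unfixed problems with a leading "(f)" and
# self-repaired items with "(fix)".

def failed_entries(log_text):
    """Return the lines describing problems the SUR tool could not fix."""
    failures = []
    for line in log_text.splitlines():
        stripped = line.strip()
        # "(fix)" also starts with "(f)", so exclude it explicitly.
        if stripped.startswith("(f)") and not stripped.startswith("(fix)"):
            failures.append(stripped)
    return failures
```

Feed it the contents of CheckSUR.log and you get a short list of files or packages to chase, rather than scrolling through the whole log.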
In the case of the computers at work, all of them were missing certain manifest files from the C:\Windows\WinSxS directory. To fix the issue, I copied the same manifest files from a working PC, placed them into the C:\Windows\Temp\CheckSur\WinSxS\Manifests folder and reran the SUR tool. Checking the log file after the tool had run indicated that all the remaining problems had been fixed. After that, the problematic updates installed without issue.
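If there are many manifests to move, the copy step itself can be scripted. Here’s a minimal Python sketch; the function name and directory arguments are illustrative, not anything the SUR tool provides:

```python
import shutil
from pathlib import Path

def copy_manifests(names, donor_dir, staging_dir):
    """Copy the named manifest files from a healthy PC's manifests
    folder into the CheckSUR staging folder, creating it if needed.
    Returns the list of destination paths written."""
    donor = Path(donor_dir)
    staging = Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    written = []
    for name in names:
        src = donor / name
        if not src.is_file():
            raise FileNotFoundError(f"donor is missing {name}")
        dest = staging / name
        shutil.copy2(src, dest)  # copy2 preserves timestamps as well
        written.append(dest)
    return written
```

In my scenario, donor_dir would point at a share to a working PC’s C:\Windows\WinSxS\Manifests folder and staging_dir at C:\Windows\Temp\CheckSur\WinSxS\Manifests, after which the SUR tool gets rerun.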
Research on the internet indicates that things can be a whole lot more corrupt than what I experienced, but thankfully I had it easy. There’s a nice long article covering the SUR tool, as well as how to analyse its logs, here.
Although it is frustrating to deal with Windows Update issues, the mechanism is largely robust enough that with a little time and effort, you can fix just about any problem. Certainly a far cry from Windows Update errors and troubleshooting in Windows XP!
With the seemingly slow decline of optical drives in computers, it’s becoming more and more common to install the OS via a bootable USB flash drive. I outlined a method of doing so using built-in Windows tools way back in 2010. However, that method is a little tedious and doesn’t make the flash drive capable of a UEFI-based install, only legacy BIOS.
Enter a better way of doing things: Rufus.
With an easy-to-use graphical interface, you can select all the options you’ll need to make a bootable flash drive. In particular, under “Partition scheme and target system type” you can select GPT as the partition type for a UEFI-based install. At work, our brand new server doesn’t have a DVD drive, so this was the only way to install Windows Server onto it in UEFI mode. No other tool I tried could do that.
Make sure you have an ISO image of the disc you want to put onto the flash drive – Rufus doesn’t do a live capture from a physical disc, unfortunately. You can even make a bootable MS-DOS based flash drive if you have the MS-DOS files – useful if you need to flash an older computer’s BIOS or RAID card, for example.
Add Rufus to the list of essential tools any administrator or technician should have in their toolkit.
My last two posts have spoken about my former XenServer setup and the fact that I replaced XenServer with Hyper-V. What I didn’t explain was how to actually migrate the existing virtual machines off XenServer and into Hyper-V.
If you are lucky enough to have both XenServer and Hyper-V servers running concurrently, the process goes a lot quicker than the way I had to do it. In my case I had to migrate the VMs off the server, install and configure Server 2012 R2, install Hyper-V and then set up my VMs again. Not a difficult task, just time consuming. With two servers, you can move the exported VM hard drive directly to Hyper-V and get going a lot quicker. Moving 120+GB VHD files does take some time, even on gigabit Ethernet.
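As a rough sanity check on that, the best-case copy time is easy to estimate. This little Python sketch is my own illustration (the 80% efficiency figure is an assumed allowance for protocol overhead, not a measured number):

```python
def transfer_minutes(size_gb, link_gbps=1.0, efficiency=0.8):
    """Best-case minutes to move size_gb gigabytes over a link of
    link_gbps gigabits per second, derated by a rough efficiency
    factor for protocol overhead. Illustrative numbers only."""
    gigabits = size_gb * 8  # gigabytes -> gigabits
    seconds = gigabits / (link_gbps * efficiency)
    return seconds / 60
```

For a 120GB VHD over gigabit Ethernet, that works out to roughly 20 minutes in the best case, before any disk bottlenecks – so an evening spent moving several VMs adds up quickly.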
On XenServer, I had five VMs. I determined that I would migrate three of them, while rebuilding the other two from scratch once Hyper-V was up and running. I searched the net for information on how to migrate the existing VMs, but there wasn’t much out there. I certainly couldn’t find any tools to do the job automatically, which is a pity, as there are plenty of tools that will convert from Hyper-V to VMware and vice versa, or do a Physical to Virtual (P2V) migration.
I did come across information suggesting that once I had exported a VM, it would be able to boot in Hyper-V without problems. I tried exporting a VM directly from XenCenter, but it wouldn’t export VHD files. Exporting an OVF file and then converting that was possible, but it would be even more time consuming while waiting for the conversion.
Eventually the solution I found was to use Microsoft’s Disk2vhd program. It’s a small program that runs inside the guest, snapshots its drives and writes them to a VHD or VHDX file, the hard drive format used by Hyper-V. The program creates one file per physical drive, not per partition, so a VM with one hard drive holding three partitions will produce a single file.
Store the VHD file anywhere except on the machine being captured. Depending on the size of the drives, the process could take a few minutes to a few hours. Having a gigabit network helps in this regard, as the program will max out your network connection if you save the file via the network.
If you are capturing a server running Exchange or SQL or acting as a TMG firewall, stop all those services before capturing so that you don’t have any inconsistencies after the machine is booted up again.
Once you have your VHD file, copy it onto your Hyper-V server’s local storage, cluster storage or whatever you are using. Set up your VM in Hyper-V using Generation 1 hardware (unless you captured Windows 8/Server 2012) and boot off the VHD file. Hyper-V will boot the system and once you reach the desktop, Windows will install some drivers. Before you reboot, go to Programs and Features and remove all the XenServer tools and drivers. Reboot the VM, then install the Hyper-V tools, reboot and so on.
By the time you are done, the VM should be running stably inside Hyper-V. You will need to reactivate your copy of Windows due to the massive hardware changes, as well as set up any static IP addresses again, as the virtual network card from XenServer is obviously not carried over to Hyper-V. Watch out for your anti-virus product (if you had one installed previously). After I migrated my Exchange server over, the VM started locking up and rebooting. It turned out that the specific version of NOD32 on the server was conflicting with the Hyper-V tools.
Speaking of my Exchange server, that particular system has gone from a Pentium 4-class server into XenServer, through 3-4 XenServer OS upgrades and now into Hyper-V. Pretty amazing when you think about how it’s managed to survive all those jumps. Even more surprising when you consider it’s running on Server 2008 (not R2), which is basically the Vista code base.
I hope this helps someone out there who is contemplating making the move themselves, or is simply doing research. Always helps to have someone experiment before you and take those nervous steps first.
In my last post I mentioned that I did a server migration, and that the hub around which everything turned was the server running Citrix XenServer. As also mentioned, I replaced XenServer with Microsoft’s Hyper-V. Let me explain some of my thinking around why I made that choice.
Xen has been around since 2003, making it a pretty mature hypervisor. When Citrix got involved, the project only grew and became more powerful. In fact, it often seemed like XenServer was the only real competition to VMware. When we decided to make use of virtualization at the school, we had the choice of VMware, XenServer and the original version of Hyper-V. VMware was fussy about hardware, Hyper-V didn’t have decent management tools or Linux support at the time, while XenServer just worked out of the box.
Fast forward a couple of years though and the scene has changed quite a bit. Hyper-V has made huge strides and is fighting it out with VMware for top dog in the virtualization world. Linux KVM has come along in leaps and bounds and is pretty popular for running Linux VMs. XenServer, unfortunately, began to look like a deer in headlights, not knowing where its place was.
The free edition of XenServer was a good product, but it often felt like Citrix was holding back the one or two juicy features that would have made it so much more excellent. A business generally wouldn’t be averse to purchasing a more advanced version, but it’s not so easy in a school, where resources are often a lot tighter.
I decided to move the server over to Hyper-V because a) it comes free with Server 2012 R2 and b) all of the servers I was virtualizing are Windows servers. It just made better sense to me to run Windows servers on the platform made by the same people who make Windows. There were other benefits as well, namely that I could run Windows applications for setting up or monitoring the RAID array, easier network management and better support for Windows guests. Upgrading the Xen integration tools after an OS update was always a fingers-crossed moment, hoping that it wouldn’t affect the OS. Sometimes the tool updates ran extremely slowly, which was always frustrating.
With XenServer, it often took a long time for newer versions of Windows to be officially supported. Server 2008 R2, for example, was considered experimental in one release and only upgraded to official support in the next. However, the gap between major releases of XenServer could be a year or more. With Hyper-V, all Microsoft have to do is release an update to their integration tools to support a new version of Windows – the whole OS doesn’t need to be upgraded.
Since migrating, Hyper-V has behaved itself. My Exchange server initially didn’t like the move, tending to lock up at random intervals, while all the other servers ran without a hitch. It turned out that the Hyper-V tools conflicted with the (admittedly very old) version of Eset NOD32 installed on that server. Removing NOD32 solved the freezing problem, and now all servers are behaving nicely. Best of all, the management tools are all built into Windows, or are a simple download away. The overall server gets managed with Server Manager, while Hyper-V gets managed using its own console. XenCenter was a great management console, but it felt like precious little had changed between versions over the years.
To Citrix, I say thanks for giving away XenServer all these years. There were quirks and minor issues, but XenServer pretty much worked as promised. Good luck with the transition back to a more open source model, I hope it helps to keep Xen relevant in the cutting edge market of virtualization.
About a month ago I did a fairly large server migration project at our school. In addition to installing a brand new server, I also did a lot of logical migrations of virtual servers or server roles. The main hub around which my plans revolved was our generic Intel server.
Originally purchased and installed in March/April 2011, the server has been humming away quietly for the last 3 years, doing what it was meant for – virtualization. It hasn’t been all smooth sailing however, as there have been some quirks. The original 16GB SSD died one day, and by died I mean a complete and utter death – not even visible in the BIOS. That took XenServer along with it, giving me a miserable 2 days getting the server back up to scratch. Funnily enough, the mechanical 160GB drive I used to run XenServer after the crash was still running fine up to when I removed it and moved the server over to Microsoft Hyper-V.
Another issue that came up early in the machine’s life was XenServer hanging at random intervals, usually at night when the network was quiet. No info in any logs, just a random hard lock up that only a power cycle could clear. My ex colleague and I eventually discovered that it was due to XenServer not liking the lower power C states on the then new Intel “Nehalem” Xeon chips. Disabling the lower C states in the BIOS solved that issue and XenServer then chugged away for the rest of its life.
This server had never had its firmware updated, so when I did the server migration I took the opportunity to flash the latest firmware available. This would also help get the server ready for Server 2012 R2 and Hyper-V. I prepared my flash drive with the update, no problem there. Updating modern Intel servers is pretty easy, so I didn’t expect any issues. Of course, that’s when the issues started…
During the update, the tool was unable to determine the chassis type, which I found odd. Still, I manually entered the model when given the choice. A few seconds later, all the fans in the chassis ramped up to full speed and stayed there, and the system health LED started blinking an ominous amber. Prior to this, the server was near whisper quiet, only ramping up the fans during POST. I ran the firmware update again, thinking maybe something hadn’t stuck. No joy, the system still sounded like a jet engine. Although the server was perfectly usable, I couldn’t imagine it running like that 24/7 in the server room. That noise level would drive anyone mad after a while! I left defeated that night and decided to hit the problem head on the next morning. I did some research when I got home and, based on what the server’s built-in event log could tell me, I was able to narrow down the likely problem – fans not being plugged into the correct headers for that specific chassis.
The next morning, my colleague and I took the server out of the rack to look at the internals. With the help of some guides from Intel’s web site, we confirmed that the issue causing the jet-like noise was what I had suspected from my research. When our hardware IT provider built the server, he didn’t connect the AUX cable from the power supply to the motherboard, and 2 of the 3 on-board fans were plugged into the wrong fan headers. On a desktop PC this wouldn’t be an issue, but server systems are picky about what is present and what is missing. If the firmware expects a fan to be plugged into header X and it’s not there, it’s going to make a fuss.
After sorting out the cabling the server went back to being whisper quiet after the next boot. The system health LED went back to green, the way it’s meant to be. Just for safety, I ran the firmware update again, which worked perfectly this time. The chassis was detected correctly and the system behaved normally after the update. With the hardware issues resolved, we moved onto installing Windows and getting Hyper-V up and running.
Since the migration, the server has been behaving well. It is showing its age a little, as the Nehalem chips are quite old now. Surprisingly, the system is still doing OK with 48GB RAM, though we may bump that up to 64GB to enable it to host more VMs. In the end, it just goes to show that it’s sometimes the smallest of things that give us the most grief.
Here in South Africa, one doesn’t have too many options when it comes to TV channels. The public broadcaster has 3 free-to-air channels, while a fourth free-to-air channel, e-tv, is a private business. On the pay TV side of things, there is either DStv or StarSat (previously known as Top TV), both of which are satellite broadcasters.
We’ve had DStv since 2008, when we purchased what was then the top-of-the-line SD PVR decoder. The device could display 2 different TV channels at the same time, while also recording a 3rd channel in the background. The resolution was standard definition, which wasn’t a problem when all the TVs in the house were small CRT-based things. However, since I got the large TV in the lounge a few years ago, putting up with SD quality on that screen has been slowly driving me nuts. Throw in the PVR’s occasional instability and I found myself itching to upgrade.
DStv introduced some HD decoders a few years back, but apart from one device that offered the same features as the SD decoder, they were limited to 1 view, 1 record. Add in the fact that these decoders were even more unstable and I decided to wait a little longer.
Last weekend, I finally ended up purchasing the new DStv Explora. The Explora is a new and modern HD decoder, although still sadly limited to 1 view, 1 record. The interface on the decoder is a lot more modern than any other decoder DStv has ever produced, and it has a 2TB hard drive inside, which ensures much more space for recordings. With the SD decoder I found myself often butting up against the recording limit.
The Explora is securely packaged in the box, wrapped in a nice layer of bubble wrap. The device isn’t too heavy, but feels solidly built despite being mainly plastic. There were no creaks or other defects out of the box. Unfortunately, for whatever reason, the power supply has migrated from being internal to being an external power brick. I suppose it makes sense: if there is a power surge or something, it’s much easier to replace a power brick than the whole decoder. Still, power bricks are often unsightly and contribute to cable clutter.
The old SD decoder is quite noisy, with a very distinct fan drone emanating from the machine at all times. The Explora is a lot quieter, and seems to run cooler as well, despite its vastly upgraded internals. Hard drive noise is also far less evident, thanks to modern drives being a lot quieter than the 250GB model in the SD decoder.
I chose to install the Explora myself, without making use of an installer. There was no need to pay someone to do the job, since we already have a large enough dish and have a twin cable feed coming in from the dish. From there, the process is simple:
- Screw cables from the dish into the top inputs on the included multi-switch.
- Connect one output cable on the side of the multi-switch to the Explora.
- Connect two cables from the bottom of the multi-switch into the inputs of the existing SD decoder.
- Use an F-connector splitter to split the feed from the RF output of the SD decoder. One cable goes to the RF input port of the Explora, the other runs to the secondary TV that was always hooked up.
- Use an HDMI cable to hook up the Explora to my amp, which in turn feeds the TV.
The reason to interconnect the 2 decoders is to enable DStv’s Extraview feature. With this feature enabled, you are able to use 2 interlinked decoders on the same subscription for a nominal amount every month. With my particular setup, we can theoretically watch 3 completely separate TV channels, whilst recording 2 different programs at once.
The installation really isn’t difficult if you already have a previous DStv installation in your house and it meets the requirements for the Explora. The rest is just an exercise in patience as you connect multiple cables. Depending on whether you are using Extraview to interlink 2 decoders, you may need to purchase 3 extra co-axial cables and an F-connector splitter.
So far, so good. The Explora has been running a week with no problems that I’ve detected. Most of the channels are still SD resolution, but they are being upscaled better than the old SD decoder could ever do. HD content on the other hand looks lovely, if not quite Blu-ray lovely. Still makes a huge difference in things like live sport though.
Overall, the Explora is a worthwhile upgrade. From any SD decoder it’s a big leap, while the increased space and stability puts it above the older HD decoders. Time will ultimately tell how stable the Explora will be, but I am strangely optimistic the device will hold up well over the coming years. Although the device is quite pricey, it has been on special a few times already.