Archive for the ‘Computer Hardware’ Category

Salvaging old equipment a.k.a. dumpster diving

Last week I watched a couple of videos on YouTube where old computers were rescued from the kerb or dumpsite and refurbished for use. This saves on e-waste and also provides cheap computers to those who cannot afford a new machine. This got me thinking about all the equipment I have discarded, sold or donated while at my school, as well as the actual value of refurbishing old equipment.

As time has gone on, I estimate I’ve gotten rid of over 100 old computers, ± 40 projectors, ± 20 printers and countless individual parts such as dead hard drives, power supplies, motherboards etc. Some of this went into the trash, while the rest was donated or sold off to raise some funds for the school. In fact, we cleaned up 6 computers for sale over the first week of the holidays. However, the process is time consuming, especially with equipment that old. The process goes something like this:

  • Physically inspect the chassis to look for loose panels, missing screws, worn/sticky buttons etc.
  • Open the chassis and blow out all the dust using our air compressor, then perform a visual inspection of the motherboard, looking for swollen/blown capacitors, loose cable connections etc.
  • Power on the PC and listen for fans that need lubrication. Most often this is the power supply, graphics card or the chassis fan. A fan that grinds is a sure sign that it will seize up completely in the not too distant future.
  • Perform lubrication on the fans that require it, which means removing the part from the PC to get to the fan lubrication cover.
  • Install as much RAM as possible as well as a working DVD drive if required.
  • Wipe the hard drive and install Linux Mint/FreeDOS as a free operating system, as we cannot sell the computers with Windows on them (see the sketch after this list).
  • Leave the PC running for a while to determine minimum stability.
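
For the wipe and reinstall step above, here is a minimal sketch of how the wipe can be done from a Linux Mint live USB. The device name /dev/sda is only an example and the commands are an illustration rather than our exact procedure – always confirm the target disk first.

    # Confirm which disk is the target before doing anything destructive
    lsblk -o NAME,SIZE,MODEL

    # A single pass of zeros is enough for a machine being donated or sold on
    sudo dd if=/dev/zero of=/dev/sda bs=4M status=progress

    # Or let shred do one random pass followed by a final pass of zeros
    sudo shred -v -n 1 -z /dev/sda

Once the drive is blank, the Linux Mint installer can partition it as normal.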

This leaves us with a working PC, but it is time consuming, even if it only needs minimal checks and a dust blow out.

It made me think about how far back one can and should go when refurbishing old PCs. While there are plenty of Pentium 4 and Pentium D based computers out there, they have the disadvantage of running very hot, using a lot of electricity and, in the P4’s case, being single threaded chips. Couple that with IDE or SATA 1 speed hard drives and the computer is unpleasant to use, even with a freshly installed operating system. While such a machine will still provide a computer to a charity or needy person who has never had one before, the economics of running such an old machine weigh heavily against it.

Printers are easier, in the sense that they generally just need a new toner or ink cartridge(s). The problem with older devices, though, is that they may use the now defunct parallel port, or, as HP loves to do, the manufacturer may not provide drivers for modern versions of Windows. I had to replace all our old HP LaserJet 1018s in the school because they flat out refused to run stably under Windows 7. I’ve got a 4 colour laser MFP in the office that I have to discard, as the device will not behave properly under anything newer than Windows Vista at best. HP has not put out modern usable drivers for this machine, instead recommending that you buy a modern, supported printer. This to me is a tragedy, as the device has fewer than 8000 pages on the counter. There is nothing physically wrong with the machine, but unless we run it on an old version of Windows, it’s become little more than a glorified door stop.

Projectors tend to fail in one of a few ways: the lamp needs replacing, the colour wheel dies (DLP models only), or the internal LCD panels develop problems (LCD models). When you ask for a quote on a repair or a new lamp, it often turns out to be more cost effective to buy a new projector than to repair the existing one. Not to mention, most older projectors don’t have modern ports like HDMI or network ports, so they are less useful in today’s changing world.

In the end, this is all part of the vicious cycle of technological progress. Unless we can somehow convince manufacturers to better support their products, we are going to be locked into producing tons of e-waste. Reusing old computers is a good start, but there comes a point where it is no longer viable to use older equipment. One thing that could definitely be improved is visibility for e-waste recyclers. Equipment can be properly stripped and salvaged by these firms, who get the components properly recycled and also avoid polluting areas with the toxic chemicals that leach out of electronics as they decompose. It would also help if more people took an interest in repairing their own stuff when it breaks, rather than just throwing it away. There’s a thrill that comes from fixing something with your own hands, a thrill that more people should want to experience.

UEFI booting observations

It’s exam time at my school, which means that things quieten down a bit. This leaves me with some free time during the day to experiment and learn new things, or to attempt things I have long wanted to do but have not had the time for. I’ve used the time this past week to play around with deploying Windows 7 and 10 to PCs in order to test their UEFI capabilities. While Windows 7 can and does UEFI boot, it really doesn’t gain any benefits over BIOS booting, unlike Windows 8 and above. I was more interested in testing out the capabilities of these motherboards, so I could get a clearer idea of hardware issues we may have when we move to Windows 10.

Our network comprises only Intel based PCs, so all my observations are based on that particular platform. What I’ve found so far boils down to this:

  • Older generation Intel 5 & 6 chipset series motherboards from Intel themselves are UEFI based, but present interfaces that look very much like the traditional console type BIOS. The only real clue is that under the Boot section, there is an option to turn on UEFI booting.
  • These older motherboards don’t support Secure Boot or the ability to toggle on and off the Compatibility Support Module (CSM) – the UEFI version on these boards predates these functions.
  • I have been unable to UEFI PXE network boot the 6 series motherboard; I haven’t yet tried the 5 series boards. While I can UEFI boot the 6 series from a flash drive/DVD/hard drive, I cannot do so over the network. Selecting the network boot option boots the PC into what is essentially BIOS compatibility mode (see the sketch after this list for why that distinction matters).
  • The Intel DB75EN motherboard has a graphical UEFI, supports Secure Boot and can toggle the CSM on and off. Interestingly enough though, when the CSM is on, you cannot UEFI PXE boot – the system boots into BIOS compatibility mode. You can only UEFI PXE boot when the CSM is off. This is easy to tell as the network booting interface looks quite different between CSM and UEFI modes.
  • Windows 7 needs CSM mode turned on for the DB75EN motherboards if you deploy in UEFI mode, at least from what I’ve found using PXE boot. If you don’t turn CSM on, the boot will either hang at the Windows logo or will moan about being unable to access the correct boot files. I have yet to try installing Windows 7 on these boards from a flash drive in UEFI mode to see what happens in that particular scenario.
  • I haven’t yet had a chance to play with the few Gigabyte branded Intel 8 series motherboards we have. These use Realtek network cards instead of Intel NICs. I’m not a huge fan of Gigabyte’s graphical UEFI, as I find it cluttered and there’s a lot of mouse lag. I haven’t tested a very modern Gigabyte board though, so perhaps they’ve improved their UEFI by now.
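
A quick aside on the UEFI PXE behaviour in the list above: the boot server decides which boot file to hand out based on the client architecture value the firmware puts in its DHCP request (option 93), so a legacy/CSM PXE ROM and a true UEFI PXE client are served completely different files. Our Microsoft based deployment setup takes care of this for us, but purely as an illustration of the mechanism, here is a minimal dnsmasq sketch – the file names, paths and network range are hypothetical:

    # Answer PXE requests alongside the existing DHCP server (proxy mode)
    dhcp-range=192.168.0.0,proxy
    # DHCP option 93 tells us what kind of client is asking
    dhcp-match=set:bios,option:client-arch,0
    dhcp-match=set:efi64,option:client-arch,7
    # Legacy/CSM clients get a BIOS boot file, UEFI clients get an EFI one
    dhcp-boot=tag:bios,pxelinux.0
    dhcp-boot=tag:efi64,bootx64.efi
    enable-tftp
    tftp-root=/srv/tftp

A board whose network boot ROM only ever announces itself as architecture 0 will always end up on the legacy path, which would explain what I am seeing on the 6 series boards.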

UEFI Secure Boot requires that all hardware supports pure UEFI mode and that the CSM be turned off. I can do this on the boards where I’m using the built in Intel graphics, as these fully support both CSM mode and pure UEFI. Other PCs with GeForce 610 adapters in them don’t support pure UEFI boot, so I am unable to use Secure Boot on them, which is somewhat annoying, as Secure Boot is good for security purposes. I will probably need to start making use of low end GeForce 700 series cards, as these support full UEFI mode and so will support Secure Boot as well.

It’s been a while since we bought brand new computers, but I will have to be more picky when choosing the motherboards. Intel is out of the motherboard game and I am not a fan of Realtek network cards either – this does narrow my choices quite a bit, especially as I also have to be budget conscious. At least I know that future boards will be a lot better behaved with their UEFI, as all vendors have had many years now to adjust to the new and modern way of doing things.

Low end laptop pain

In the course of my job, I’ve been asked on occasion to give feedback or a recommendation to staff members regarding the purchase of a personal or family laptop. Unfortunately, due to the ever changing nature of the IT field, the answers I give aren’t always what the person wants to hear.

I have three questions I ask the person before I make any recommendations:

  1. What do you intend to use the laptop for?
  2. What is your approximate budget for the laptop?
  3. How long do you intend to keep the laptop for?

The answer to the first question is usually pretty generic: typing documents, surfing the internet, preparing lessons, doing research, checking email. Using SMART Notebook also comes up now and then. Question 2 usually results in a range of about R4000–R6000 (roughly $380–$550; exchange rates make this number fluctuate). Question 3 results in a range of 3 years up to 5 or longer.

I often specify a laptop that is slightly over the asker’s budget, with the justification that spending slightly more results in a better quality laptop that lasts longer and is less likely to drive the person up the wall in the long run. Bottom of the line laptops have to cut so many corners that the experience is often highly frustrating. Low amounts of RAM, the lowest end processors, slow mechanical hard drives, low resolution low quality screens, creaky plastic shells, poor trackpads and more leave a bad taste in the mouth, and that’s just on the hardware side of things. Software wise, the lowest end version of Windows is installed, including the Starter edition in the Windows 7 era. Bundled anti-virus applications, trialware and lots of often bloated, unneeded software is pre-installed by the manufacturer in order to try and recoup costs and eke out some sort of profit.

Over the last few years, I’ve come to be a firm believer in the power of the SSD. With the right drive, it can often seem like you are supercharging a laptop that would otherwise need to be replaced due to age. It won’t completely mask age or low specs, but it comes close. Windows starts faster, applications load quicker, battery life is extended, noise is reduced and the user experience is often improved because there is less of that freezing/lockup sensation after boot. I don’t know if the drives will ever get as cheap as mechanical hard drives, but I believe that even a SATA 3 based drive in most consumer laptops would go a long way to increasing user satisfaction across the board. Unfortunately, marketing still spreads the word that 1TB and larger drives are a good thing to have, when in reality not that many people are going to use all that space on a laptop.

As much as I’ve moaned about low quality laptops in this piece, I am reminded that it’s due to the flexibility of Windows that there is such a wide range of devices available at all price points, from the most painful low end devices that are affordable to most people, all the way up to high end ultrabooks that are extremely pricey but have all the bells and whistles. Competition plus attrition has also helped to weed out some of the smaller or less interested players, and has led to a growing awareness that quality needs to increase in order to stand out from the crowd. I can only hope that as time goes on, this trend continues and that the creaky, poor machines of the past become nothing more than a bad memory.

SMART Board and USB port fun

February 1, 2015

Over the last two weeks, we’ve slowly been ramping up our classroom computer swap program at work. 6 year old Core 2 based computers with horrid chassis and power supplies are coming out, being replaced with first gen Core i3 boxes that are quieter, smaller and faster. However, a recent event almost threatened to derail the project.

I placed one of the replacement computers in a class, had it set up as per usual and all was going well. After rebooting, however, I noticed that the SMART Board (model SB-680) was not behaving properly. The board was either vanishing just before the computer was fully booted into Windows, or it would constantly reset and be basically unusable. Changing USB ports did not help at all; it only gave a temporary fix that lasted until the next boot.

I got the reseller of the board involved to do deeper technical diagnostics, though in honesty it was more a case of handing the problem over to someone else. This past Friday afternoon they arrived and we started a long troubleshooting process. The board was hooked up to the technician’s laptop and after some time, it settled down and behaved normally. We then swapped out the controller card, swapped out the board itself and tested on the replacement PC. All to no avail; the problem kept coming back. We even tried a new USB booster cable and USB cable, with the same result.

In desperation, I went into the BIOS to change the USB settings for the newer classroom motherboards. The Intel DQ57TM motherboards had been running completely fine for the last 4 years without issues in both our computer labs, so I couldn’t understand why it would give issues now. They are all flashed to the latest firmware Intel offers, so there would be no fix that way. It turns out that one simple BIOS setting may have caused the issue.

When I set up the computer via network boot and install using Microsoft’s Deployment Toolkit, I had to set the USB Backward Compatibility option to Disabled in the BIOS, as the keyboard and mouse were non functional in Windows PE. After the whole install process was over, I didn’t bother to change the setting back, since I didn’t believe it would affect anything. Suffice it to say, enabling the option caused Windows to install a whole bunch of extra USB root hubs and related devices after the reboot. In turn, this let the SMART Board behave properly. Our reseller’s technician learned something new, as did I. Now I know that I must make sure the setting goes back to Enabled after deployment, before the computer goes into a classroom, so that headaches can be avoided.

The truly bizarre thing, however, is that the problem only seems to be triggered if the SMART Board is hooked up to the computer via a USB booster extension cable. If the board is close enough to the computer desk and doesn’t use the booster extension, the board seems to work fine with the setting at Disabled. I have 2 classrooms where that is the case, and neither of those rooms has reported issues with their boards since the school year started.

Another quirky problem to add to the knowledge base of fun when it comes to SMART Boards.

Ddrescue to the rescue

September 20, 2014

A few weeks back, thanks to the blue screens caused by Microsoft’s batch of faulty updates, I formatted a teacher’s class computer and redid it from scratch – this was before I managed to find the workaround for the blue screen issues. The computer had been running fine since then, until this past week, when the teacher started complaining bitterly about how slow the PC had become. I checked for malware, as well as for any other crappy software that may have been causing the slowdown. I found nothing. I asked the teacher to monitor the PC while I investigated further.

A few days later, the teacher was even more frustrated with the machine. Now it was taking forever to start up, shut down and was hanging on applications. I looked through Event Viewer, only to discover ATAPI errors were being logged. Not just one either, there were dozens of errors. The moment I saw this, I knew that the hard drive was on the way out. While the SATA port could be faulty or even the cable, the odds of those being the culprits were rather low. Too many bad experiences in the past have taught me that it is almost always the drive at fault.

I procured a spare drive and decided the quickest fix was to simply clone one drive to the other. Using Clonezilla, I tried to do the clone. On my first pass, about 75% of the way through, the PC looked like it had gone to sleep and I couldn’t see any output on the monitor. I couldn’t revive the PC, so I rebooted and tried the procedure again. This time it got to about 97.5% before crashing out. Based on what I saw, Clonezilla was hitting bad sectors, corrupt files or a mechanical weakness in the drive. Now I was getting worried, because any more cloning attempts could hasten the end of the faulty drive. Not only that, it was wasting time. Setting up the PC from scratch again was my last resort, since it would take hours. Before I gave up and did that, I remembered Ddrescue.

I had tried to use Ddrescue on my home computer more than a year ago when the hard drive holding my Windows 8 install died. Sadly, that drive was too damaged even for Ddrescue to be able to save. I was hoping that the teacher’s hard drive hadn’t yet reached that stage.

I ran Ddrescue and then waited as the drive literally copied itself sector by sector over to the new drive. What I wasn’t aware of is that Ddrescue doesn’t understand file systems – it just copies raw data from one drive to another. This means it will copy any file system, but in order to do so, it must copy every block on the disk. A tool like Clonezilla will understand a file system and only copy used data blocks, therefore saving lots of time by not copying essentially blank space.
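
For reference, a typical two pass Ddrescue run looks something like the sketch below. The device names are placeholders for the failing and the new drive, and the options shown are a reasonable default rather than necessarily the exact ones I used:

    # Booted from a live Linux environment, running as root
    # Pass 1: copy everything that reads cleanly, skip scraping the bad areas for now
    ddrescue -f -n /dev/sda /dev/sdb rescue.map

    # Pass 2: go back to the bad areas and retry them a few times with direct disc access
    ddrescue -f -d -r3 /dev/sda /dev/sdb rescue.map

The rescue.map file is what lets Ddrescue resume and return to just the problem areas, rather than starting the whole copy from scratch.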

Ddrescue did hit one patch of bad data, but it was able to keep going and then came back at the end to try and pull out what it could. Thankfully, whatever bad data there was wasn’t too major, and Ddrescue completed successfully. Booting from the new drive was a success, and best of all, the speed was back again. I did run an sfc /scannow at the command prompt to check for any potentially corrupt system files. SFC said it fixed some errors, and I rebooted. Apart from that, it looks like I managed to save this system in the nick of time. The old hard drive was still under warranty and has gone back to the supplier, who can send it in for a replacement. That replacement will become a new hot spare for some other classroom.

Firmware update fun

A couple of years ago, flashing any device’s firmware was often a difficult, frustrating and sometimes downright dangerous task, always with the hope that the device wouldn’t get bricked due to some unknown bug in the firmware, or worse still, a power failure right in the middle of the flash.

These days, for things like motherboards, it can be as easy as flashing from inside Windows, or using the built in feature on the motherboard. Generally speaking, you no longer have to use MS-DOS and hunt for floppy disks or an alternative; it just works. Intel motherboards in particular are usually very straightforward when it comes to this: run the Express Update inside Windows, Windows reboots, the motherboard flashes itself, reboots again and you are back into Windows. No other intervention required.

Thus it was a bit irritating a few weeks ago when I decided to flash some of our Z68 motherboards to their latest (and last) BIOS version. I ran the Express Update inside Windows as I’ve done countless times before. The computer reboots, fails to flash the firmware and then goes back into Windows. No matter what I tried, the firmware would not update. My next step was to download the *.bio file from Intel’s website, place it on a flash drive and press F7 during boot to update the BIOS. This didn’t work either:

(Photo: the failed F7 BIOS update attempt)

That left me with only one option – use Intel’s Iflash tool. I don’t have a copy of MS-DOS lying around, and I didn’t feel like jumping through hoops just to get a flash drive set up correctly. I discovered that Iflash works with FreeDOS, so I simply placed the files on a flash drive I have set up with Ultimate Boot CD, which includes FreeDOS. I ran Iflash, the computer rebooted, but then sat for a while doing nothing. I was about to reset the computer when I noticed the power LED doing a slow pulse. I remembered that Intel motherboards generally do this when updating the firmware or when in sleep mode, so I let the process carry on. Sure enough, after about 3 minutes, the computer rebooted by itself. The latest BIOS was now installed and working correctly.
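
For anyone attempting the same thing, the FreeDOS part boils down to a single command. I am going from memory here, so treat the switch and the file name below as placeholders and check the readme bundled with the .BIO download from Intel:

    REM At the FreeDOS prompt on the UBCD flash drive; BOARD123.BIO is a placeholder name
    IFLASH2 /PF BOARD123.BIO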

Thankfully there were only about 5 computers to do this on. I’m not sure why this model of motherboard was so fussy, but it’s done now.

Goodbye XenServer

April 28, 2014

In my last post I mentioned that I did a server migration, and that the hub around which everything turned was the server running Citrix XenServer. As also mentioned, I replaced XenServer with Microsoft’s Hyper-V. Let me explain some of my thinking around why I made that choice.

Xen has been around since 2003, making it a pretty mature hypervisor. When Citrix got involved, the project only grew and became more powerful. In fact, it often seemed like XenServer was the only real competition to VMware. When we decided to make use of virtualization at the school, we had the choice of VMware, XenServer and the original version of Hyper-V. VMware was fussy about hardware, Hyper-V didn’t have decent management tools or Linux support at the time, while XenServer just worked out of the box.

Fast forward a couple of years, though, and the scene has changed quite a bit. Hyper-V has made huge strides and is fighting it out with VMware for top dog in the virtualization world. Linux KVM has come along in leaps and bounds and is pretty popular for running Linux VMs. XenServer, unfortunately, began to look like a deer in headlights, not knowing where its place was.

The free edition of XenServer was a good product, but it often felt like Citrix was holding back the one or two juicy features that would have made it so much more excellent. A business generally wouldn’t be averse to purchasing a more advanced version, but it’s not so easy in a school, where resources are often a lot tighter.

I decided to move the server over to Hyper-V because a) it comes free with Server 2012 R2 and b) all of the servers I was virtualizing are Windows servers. It just seemed to make better sense to run Windows servers on the platform made by the same people who make Windows. There were other benefits as well, namely that I could run Windows applications for setting up and monitoring the RAID array, easier network management and better support for Windows guests. Upgrading the Xen integration tools after an OS update was always a fingers crossed moment, hoping that it wouldn’t affect the OS. Sometimes the tool updates ran extremely slowly, which was always frustrating.

With XenServer, it often took a long time for later versions of Windows to be officially supported. Server 2008 R2 for example would be considered experimental for a release and then upgraded to official support in the next release. However, the gap between major releases of XenServer could be a year or more. With Hyper-V, all Microsoft have to do is release an update of their integration tools to support a new version of Windows – the whole OS doesn’t need to be upgraded.

Since migrating, Hyper-V has behaved itself. My Exchange server initially didn’t like the migration, tending to lock up at random intervals, while all the other servers were fine. It turns out that the Hyper-V tools conflicted with the ESET NOD32 version I had installed on the server, which, to be honest, is a very old version. Removing NOD32 solved the freezing problem, and now all the servers are running nicely. Best of all, the management tools are either built into Windows or are a simple download away. The overall server gets managed with Server Manager, while Hyper-V gets managed using its own console. XenCenter was a great management console, but it felt like precious little had changed between versions over the years.

To Citrix, I say thanks for giving XenServer away all these years. There were quirks and minor issues, but XenServer pretty much worked as promised. Good luck with the transition back to a more open source model; I hope it helps to keep Xen relevant in the cutting edge world of virtualization.