Salvaging old equipment, a.k.a. dumpster diving

Last week I watched a couple of videos on YouTube where old computers were rescued from the kerb or dumpsite and refurbished for use. This saves on e-waste and also provides cheap computers to those who cannot afford a new machine. This got me thinking about all the equipment I have discarded, sold or donated while at my school, as well as the actual value of refurbishing old equipment.

As time has gone on, I estimate I’ve gotten rid of over 100 old computers, ± 40 projectors, ± 20 printers and countless individual parts such as dead hard drives, power supplies, motherboards etc. Some of this went into the trash, while the rest was donated or sold off to raise some funds for the school. In fact, we cleaned up 6 computers for sale over the first week of holidays. However, the process is time consuming, especially with old equipment like that. The process goes something like this:

  • Physically inspect the chassis to look for loose panels, missing screws, worn/sticky buttons etc.
  • Open the chassis and blow out all the dust using our air compressor, then perform a visual inspection of the motherboard, looking for swollen/blown capacitors, loose cable connections etc.
  • Power on the PC and listen for fans that need lubrication. Most often this is the power supply, graphics card or chassis fan. Fans that grind are a sure sign that the fan will seize up completely in the not too distant future.
  • Perform lubrication on the fans that require it, which means removing the part from the PC to get to the fan lubrication cover.
  • Install as much RAM as possible as well as a working DVD drive if required.
  • Wipe the hard drive and install Linux Mint/FreeDOS as a free operating system, as we cannot sell the computers with Windows on them.
  • Leave the PC running for a while to determine minimum stability.

This leaves us with a working PC, but it is time consuming, even if it only needs minimal checks and a dust blow out.

It made me think about how far back one can and should go with refurbishing old PCs. While there are plenty of Pentium 4 and Pentium D based computers out there, they have the disadvantage of running very hot, using a lot of electricity and, in the P4’s case, being single threaded chips. Couple that with IDE or SATA 1 speed hard drives and the computer is unpleasant to use, even with a freshly installed operating system. Again, while this will provide a computer to a charity or needy person who has never had one before, the economics of using such an old machine weigh heavily against it.

Printers are easier, in the sense that they generally just need a new toner or ink cartridge(s). The problem with older devices, though, is that they may use the now defunct parallel port, or, as HP loves to do, come without drivers for modern versions of Windows. I had to replace all our old HP LaserJet 1018s in the school because they flat out refused to run stably under Windows 7. I’ve got a 4 colour laser MFP in the office that I have to discard, as the device will not behave properly under anything newer than Windows Vista at best. HP have not put out modern usable drivers for this machine, instead recommending that you buy a modern, supported printer. This to me is a tragedy, as the device has less than 8000 pages on the counter. There is nothing physically wrong with the machine, but unless we run it on an old version of Windows, it’s become little more than a glorified door stop.

Projectors have the problem of lamps that need replacing, colour wheels that die (on DLP projectors) or internal LCD panels that fail (on LCD models). When you ask for a quote on a repair or a new lamp, it actually becomes more cost effective to buy a new projector rather than repair the existing one. Not to mention, most older projectors won’t have modern ports like HDMI or network ports on them, so they are less useful in today’s changing world.

In the end, this is all part of the vicious cycle of technological progress. Unless we can somehow convince manufacturers to better support their products, we are going to be locked into producing tons of e-waste. Reusing old computers is a good start, but there also comes a point where it is no longer viable to use older equipment. One thing that could definitely be improved is much more visibility for e-waste recyclers. Equipment can be properly stripped and salvaged by these firms, who get the components properly recycled and also avoid polluting areas with the toxic chemicals that leach out of electronics as they decompose. It would also help if more people took an interest in repairing their own stuff when it breaks, rather than just throwing it away. There’s a thrill that comes from fixing something with your own hands, a thrill that more people should want to experience.

UEFI booting observations

It’s exam time at my school, which means that things quieten down a bit. This leaves me with some free time during the day to experiment and learn new things, or attempt to do things I have long wanted to do but have not had the time for. I’ve used the time during this last week to play around with deploying Windows 7 and 10 to PCs for the purpose of testing their UEFI capabilities. While Windows 7 can and does UEFI boot, it really doesn’t gain any benefits over BIOS booting, unlike Windows 8 and above. I was more interested in testing out the capabilities of these motherboards, so I could get a clearer idea of the hardware issues we may have when we move to Windows 10.

Our network comprises only Intel based PCs, so all my experiences so far are based on that particular point. What I’ve found so far boils down to this:

  • Older generation Intel 5 & 6 chipset series motherboards from Intel themselves are UEFI based, but present interfaces that look very much like the traditional console type BIOS. The only real clue is that under the Boot section, there is an option to turn on UEFI booting.
  • These older motherboards don’t support Secure Boot or the ability to toggle the Compatibility Support Module (CSM) on and off – the UEFI version on these boards predates these functions.
  • I have been unable to UEFI PXE network boot the 6 series motherboard, and haven’t yet tried the 5 series boards. While I can UEFI boot the 6 series from a flash drive/DVD/hard drive, I cannot do so over the network. Selecting the network boot option boots the PC into what is essentially BIOS compatibility mode.
  • The Intel DB75EN motherboard has a graphical UEFI, supports Secure Boot and can toggle the CSM on and off. Interestingly enough though, when the CSM is on, you cannot UEFI PXE boot – the system boots into BIOS compatibility mode. You can only UEFI PXE boot when the CSM is off. This is easy to tell as the network booting interface looks quite different between CSM and UEFI modes.
  • Windows 7 needs CSM mode turned on for the DB75EN motherboards if you deploy in UEFI mode, so that it can boot – at least from what I’ve found using PXE boot. If you don’t turn CSM on, the boot will either hang at the Windows logo or moan about being unable to access the correct boot files. I have yet to try installing Windows 7 on these boards from a flash drive in UEFI mode to see what happens in that particular scenario.
  • I haven’t yet had a chance to play with the few Gigabyte branded Intel 8 series motherboards we have. These use Realtek network cards instead of Intel NICs. I’m not a huge fan of Gigabyte’s graphical UEFI, as I find it cluttered and there’s a lot of mouse lag. I haven’t tested a very modern Gigabyte board though, so perhaps they’ve improved their UEFI by now.

UEFI Secure Boot requires that all hardware supports pure UEFI mode and that the CSM be turned off. I can do this with the boards where I’m using the built in Intel graphics, as these fully support both CSM mode and pure UEFI. Other PCs with GeForce 610 adapters in them don’t support pure UEFI boot, so I am unable to use Secure Boot on them, which is somewhat annoying, as Secure Boot is good for security purposes. I am probably going to need to start making use of low end GeForce 700 series cards, as these support full UEFI mode and so will support Secure Boot as well.
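
A quick way to check whether a given machine has actually come up in pure UEFI mode with Secure Boot active is the Confirm-SecureBootUEFI cmdlet that ships with Windows 8 and above (a minimal sketch; run it from an elevated PowerShell prompt):

    # Returns True if Secure Boot is on, False if it is off,
    # and throws an error if the machine booted in BIOS/CSM compatibility mode
    Confirm-SecureBootUEFI

On Windows 8 and up you can also glance at the “BIOS Mode” field in msinfo32 to confirm UEFI versus legacy booting.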

It’s been a while since we bought brand new computers, but I will have to be more picky when choosing the motherboards. Intel is out of the motherboard game and I am not a fan of Realtek network cards either – this does narrow my choices quite a bit, especially as I also have to be budget conscious. At least I know that future boards will be a lot better behaved with their UEFI, as all vendors have had many years now to adjust to the new and modern way of doing things.

Keeping Adobe Flash Player updated on a network

The Adobe Flash Player plugin is a pain in the arse. It’s a security nightmare, with more holes in the codebase than Swiss cheese. It seems every other week Flash makes the headlines when some or other security vulnerability is discovered and exploited. Cue the groans from network admins and users around the world as Flash has to be updated *yet* again. Unfortunately, one can’t quite get rid of it permanently just yet, as too many websites still rely on it. While you could get away with not using it at home, in a school where multiple people use a computer and visit different websites, one doesn’t really have much choice but to make sure Flash is installed.

On Windows 7 and below, the situation with Flash is a bit crazy. There’s a version for Internet Explorer (ActiveX plugin), a version for Firefox that is installed separately and Google Chrome bundles its own version – I’m not sure about smaller or niche browsers, but I think modern Opera inherits Flash via its relationship with Chrome’s engine. Thankfully with Windows 8 and above, Flash for Internet Explorer is distributed via Windows Update. It’s automatic and contains no 3rd party advertisements, anti-virus offers, browser bundling etc – all things Adobe have done in the past with their Flash installers. Trying to install Flash from Adobe’s website on Windows 8 and above will fail, which at least may help to kill off the fake Flash installer routine used by malware authors to trick unsuspecting users.

The usual method of installing Flash is highly cumbersome if you run a large network – not to mention that EXE files are much less flexible than MSI files for deployment and silent install options. Thankfully Adobe do make Flash Player in MSI format, but it’s not easy to get hold of directly. You have to sign a free enterprise deployment license to be able to legally distribute Flash and Reader in your organisation. The problem then becomes how to distribute the updates, especially if you aren’t running System Center or another product like that. Enter WSUS Package Publisher, indispensable if you make use of WSUS on your network.
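
For reference, once you have the MSI, the silent install itself is just a standard msiexec command from an elevated prompt (the filename here is a placeholder for whatever the current enterprise MSI is called):

    # /qn = no UI at all, /norestart = don't reboot afterwards
    msiexec /i install_flash_player_active_x.msi /qn /norestart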

WPP allows you to use the enterprise update catalogs that Adobe and some other vendors offer. Using this, you essentially push the updates into your existing WSUS infrastructure, where they are delivered to the client computers like any other update. One thing you need to do is tweak each update as you publish it, so that it isn’t applicable to computers running Windows 8 upwards – if you don’t do this, the update will download on newer Windows versions, but will fail to install repeatedly and will need to be hidden. The other thing I’ve discovered that needs to be fixed is that the silent install command line switch needs to be deleted. When an MSI file is delivered via WSUS, it is automatically installed silently. I discovered this the hard way, since one of the Flash updates I imported was failing to install on every computer. Turning on MSI logging and searching for the error code eventually led me to what was wrong, after which I corrected the problem and now know what to do with every new update that comes out for Flash.
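
For anyone needing to do the same kind of digging, Windows Installer verbose logging can be switched on machine-wide via a policy registry value (a sketch; voicewarmupx is the standard “log everything” set of flags, and the resulting MSI*.log files land in the Temp folder of the installing account):

    # Enable verbose Windows Installer logging until the value is removed again
    reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmupx /f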

Since using WPP, I’ve felt happier about the safety of my network, as I can usually get Flash pushed out within 2-3 days of the initial download. This is far better than having to visit each computer manually and keeping Flash up to date that way!

Curating digital photos

The rise of digital photography over the last 15 or so years has had many side effects. The most obvious is that analog film cameras have largely, though not completely, vanished. No longer did you have to pay and wait for processing, hoping that your photos came out or that you loaded the correct speed film. Digital gave you instant feedback, either on the camera or once the files were transferred onto your computer.

My school is 59 years old as of this post. We have an archive of digital photos reaching back to the year 2000. The previous 43 years of school history are on film, but sadly most of what was taken was either lost or destroyed as the years went by. It seems there wasn’t much effort to archive and protect the slides and negatives at the time. While there are some slides and negatives, a large portion of the school’s history may be irredeemably lost. This is a great pity, as what I have found, scanned and posted online has brought many happy memories back to people, most of whom may never have seen those photos.

Digital photos are far easier to store, back up and replicate to more than one location, which gives a huge amount of protection compared to analog slides and negatives. However, the increase in megapixels and sensor quality over the years, combined with larger memory cards, has led to an unforeseen consequence: we have exponentially more photos now than we have ever had before. It’s so easy to take hundreds of shots of an event now compared to the days of film, when you were limited by how many spools you had and how much you were prepared to pay for developing and printing. Not only that, but since digital is now so ubiquitous, more people can contribute photos than ever before.

To give an example of this point: imagine a school sports day. Lots of activities all over the show that one photographer can’t cover on their own. Now imagine that there’s 5-10 students taking photos as well, covering all areas. Say each person takes 250 photos and suddenly you can end up with a total of 1500-2750 photos from one event – and that’s using a conservative figure. Obviously not all of these photos are going to be useful, which is where the time consuming art of weeding out the bad photos comes in. Most amateur student photographers I’ve spoken to never take the time to actually curate their photos. In fact, most staff members who have taken photos of school events haven’t done so either. It’s too easy to simply dump the whole contents of a memory card into a folder on the server and leave it there. This is what has happened with our digital archives over the years, to the point where we have something like 138000 files taking up over 480GB of space on our photos network share.

That number was a lot higher before I decided to take on the task of curating and cleaning up the mess the share had become. Not all of the files on the drive were photos, as I’ve deleted a number of Photoshop PSDs, PowerPoint presentations, AVI and MP4 movie clips and other odds and ends. I’ve also deleted a huge amount of duplicates. Last week I brought home a fresh copy of all the files on the drive and imported it into Adobe Lightroom. It took a long time, but Lightroom calculated there were something like 128000-odd photos. I’m not sure about the discrepancy between that figure and what Windows Explorer tells me, but I think there may have been more duplicates that Lightroom ignored on import.
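
If you want to hunt down exact duplicates before importing everything into Lightroom, a rough PowerShell sketch like this does the job (D:\Photos is a placeholder for your own photo share; it needs PowerShell 4.0 or newer for Get-FileHash):

    # Hash every file, then group by hash; any group with more than one member is a set of exact duplicates
    Get-ChildItem -Path D:\Photos -Recurse -File |
        ForEach-Object { Get-FileHash -LiteralPath $_.FullName -Algorithm SHA1 } |
        Group-Object -Property Hash |
        Where-Object { $_.Count -gt 1 } |
        ForEach-Object { $_.Group.Path }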

Now with the power of Lightroom, I’ve been able to really start going through some of the albums. I’ve curated 5 sub folders now, rejecting and deleting up to half of the photos in each case. Factors I look for when deleting photos include the following:

  • Focus. My most important metric really. 99% of the time, I’m going to delete out of focus photos.
  • Resolution. Photos of 640×480 or smaller are of no real use to us, even as an archive (there’s a snippet for finding these automatically after this list). I made the call to delete these, even if they are the only record of an event.
  • Motion blur. Too much of this ruins the photo. It usually occurs because the shutter speed was too slow, and it leads to a strange looking photo.
  • Framing. Things like cut off heads, people too distant, people partially in the edges of photos and so forth usually end up being binned.
  • Damaged files. Caused by bit rot or by the camera/memory card being faulty, these are tossed.
  • Noise. Too much digital noise due to high ISO speeds or older sensors leads to very unpleasant, soft and noisy photos. I rescue what I can, but otherwise these too are binned.
  • RAW files. RAW files are fantastic for many things, but as part of an archive they are problematic. Every camera manufacturer has their own RAW format, which doesn’t always open well in 3rd party software. The alternative DNG format as created by Adobe is an option, but unless you take extra steps, they aren’t easily viewable. By contrast, JPEG files are universal and can be opened on just about any platform in existence.
  • Severe over or under exposure. Files that are badly exposed in either direction are usually useless, especially if they are JPEGs straight out of the camera.
  • Too similar photos. When you take photos in burst mode, you’ll often end up with many photos that are near identical, often only with small variations between frames. I usually pick the best 1 or 2 of the lot and delete the rest. This is especially true in sports/action shots.
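
As an example of automating the resolution check mentioned in the list above, here is a quick PowerShell sketch using the built-in System.Drawing classes (it assumes the photos live under D:\Photos and are JPEGs; adjust the path and extensions to suit):

    # List JPEGs that are 640x480 or smaller so they can be reviewed and deleted
    Add-Type -AssemblyName System.Drawing
    Get-ChildItem -Path D:\Photos -Recurse -File -Include *.jpg, *.jpeg | ForEach-Object {
        $img = [System.Drawing.Image]::FromFile($_.FullName)
        if ($img.Width -le 640 -and $img.Height -le 480) { $_.FullName }
        $img.Dispose()
    }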

I still have an incredibly long way to go. I’ve deleted well over 20000 files by now, but a mountain is still in front of me. Of course, as 2016 goes on and more photos get added to the 2016 archive, that mountain is only going to grow. Still, I’ve made a start and I am happy with what I’ve done so far. Thanks to the process, I’ve been able to upload many albums of decent, usable photos to our Facebook pages so that pupils, parents and staff can view, tag, share and download them.

In closing, I would suggest that anyone who enjoys their photo collection takes the time to curate it properly. It isn’t always easy to delete photos, especially if they are the only record of a special event or person. However, unless one learns to be decisive, the collection is eventually going to grow to the point of overwhelming you. Take time to savour the quality, not the quantity.

Updating Windows at the source

Since the release of Windows Vista, Windows has been installed using a compressed image file, known as a WIM file. This is what allows Microsoft to ship one disk containing the Home and other editions of Windows, unlike the multiple disks of the XP era. What makes a WIM file even more useful is that it can be mounted inside a running copy of Windows and have patches and drivers injected directly into the image. This is extremely handy when you realise that Windows 7 has been out for almost 6 years now and has a couple of hundred patches out there. Anything that cuts down the wait for updates to install is a good thing, as is having a more secure system out of the box.

There are a couple of limitations however:

  1. You can’t inject all the update patches offline. Certain updates can only be installed when Windows is running.
  2. .NET Framework versions cannot be injected offline. These will need to be installed and patched after Windows is up and running.
  3. You can only inject patches if they are in CAB or MSU format. EXE files are not usable here.

To update Windows 7 (or 8 or Server editions for that matter) you will need the following:

  • Windows 7 media or ISO file. I don’t have access to OEM disks so cannot say if those can be updated. What you really need is the install.wim file, found in the \Sources directory on the disk. It’s the single biggest file on the disk.
  • Windows 7 Automated Installation Kit or the later Windows 8.1 Assessment and Deployment Kit. You need this for the DISM tool, which services the WIM file.
  • Access to the updates for Windows 7. There are many ways to get these, but I have found that looking in the C:\Windows\SoftwareDistribution\Download folder on a patched machine is one of the better ways to get the updates (see the example after this list). Other tools have had mixed success for me.
  • Hard drive space and patience. Injecting updates, committing the changes to the WIM file and optionally recreating the ISO file will take time.
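
As an example of the update gathering step mentioned above, you can sweep the CAB and MSU files out of a patched machine’s download cache into the folder used later in this guide with something like the following PowerShell sketch (not every CAB in that cache is a valid standalone package, so expect to weed a few out later on):

    # Copy all update packages from the Windows Update download cache into C:\Updates
    Get-ChildItem C:\Windows\SoftwareDistribution\Download -Recurse -Include *.cab, *.msu |
        Copy-Item -Destination C:\Updates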

Here’s my step by step guide on how to do this update procedure. A note before I begin however. My guide is a little longer than strictly speaking necessary. If you have access to ISO editing software, you could just replace the install.wim file and be done. However, I am going to include the steps to rebuild an ISO image, including the option to make it UEFI boot compatible.

Updating Windows

  1. Make 3 folders on a hard drive. For example C:\Win7, C:\Updates and C:\Mount.
  2. Copy the install.wim file from your ISO or DVD to C:\Win7.
  3. Install the Windows 7 AIK or Windows 8.1 ADK. Specifically, we are looking for the Deployment Tools option. We don’t need the rest for this process.
  4. Place all the updates for Windows 7 into the C:\Updates folder.
  5. Open up the “Deployment and Imaging Tools Environment” shortcut as an Administrator. The DISM commands will only run with Admin approval.
  6. Run the command dism /get-wiminfo /wimfile:C:\Win7\install.wim
    This will tell us about the various Windows editions present in the WIM file. Depending on the disk, it may include multiple editions or only one. Take note of the index number which corresponds to the edition of Windows you want to update, as we will use it in the next command.
  7. dism /mount-wim /wimfile:C:\Win7\install.wim /index:X /mountdir:C:\Mount (replace X with the index number you noted in step 6). DISM will mount that edition of the image at the C:\Mount folder.
  8. dism /image:C:\Mount /add-package /packagepath:C:\Updates
    DISM will now start to add all the MSU and CAB files it finds in the C:\Updates directory and apply them to the mounted image. This will take some time, so feel free to take a break. Some updates may cause an error; these updates are only meant to be installed when Windows is running. You will need to find out what updates caused the error and remove them. Type dism /unmount-wim /mountdir:C:\Mount /discard to discard all the changes and follow steps 7 & 8 again until the process is error free.
  9. dism /unmount-wim /mountdir:C:\Mount /commit
    This will commit the changes, save and unmount the WIM file.
  10.   If you want to update another edition of Windows 7, go back to step 7 and use another index number. Go through steps 7-9 again for all editions you want to update.

Building the new ISO for Windows 7

If you are planning to use the updated WIM file with Microsoft Deployment Toolkit, you are good to go and can use the updated install.wim file in conjunction with the rest of the Windows setup files. Otherwise, you’ll need to create a new ISO image that can be used virtually, burned to DVD or used on a USB flash drive for install purposes.

First, copy all the files from your Windows 7 DVD or ISO to the C:\Win7 directory, but make sure you don’t overwrite your updated install.wim with the original one from the disk, or you will have wasted your time.

Then open up the “Deployment and Imaging Tools Environment” shortcut as an Administrator again. Run the following command to make an ISO file that can boot on traditional BIOS based systems or on UEFI systems. For the most modern UEFI systems, make sure Secure Boot is disabled before you install Windows 7, as it is not Secure Boot capable.

oscdimg.exe -u2 -udfver102 -bootdata:2#p0,bC:\Win7\boot\etfsboot.com#pEF,ebC:\Win7\efi\microsoft\boot\efisys.bin -o -lVOLUME_LABEL C:\Win7 C:\Win7\Win7.iso

Replace VOLUME_LABEL with something of your choice.

You can now burn the ISO file to DVD, use it on a flash drive or as an ISO with any VM software.

I have not tried this procedure with Windows 8.x, but I believe it should work the same way, as the layout of the relevant files and folders is near identical.

The Windows 10 upgrade experience

On Wednesday 29 July 2015, a new chapter opened up in the history of Microsoft’s Windows. Windows 10 was unleashed on the world, Microsoft’s attempt to remedy the largely cool reaction to Windows 8, as well as stay relevant (at least in the eyes of a lot of tech bloggers) in the new app centric world. The return of the Start Menu, an unprecedented public participation process via the Windows Insider program, free upgrades for a year, DirectX 12 for gamers and many more features all combined to build up a hype that has not been seen for a long time in the Windows world.

Like millions of other people, I reserved my copy of the Windows 10 upgrade via the app in the system tray that appeared after a round of Windows Updates a few months back. The idea was that this application would trickle download Windows 10 in the background as soon as the system went RTM, so that on launch day you’d be ready to upgrade immediately. Only problem is that the trickle download process started 1-2 days before the launch of Windows 10, which meant that with my slow ADSL speed, it would be some time before I’d be ready to go, let alone the chance that I’d be in one of the later waves of the rollout. This is probably due to the fact that build 10240 only went RTM 2 weeks before the global rollout. Either way, I was impatient to get going.

Thankfully Microsoft thought ahead and made ISO images available for direct download or via the Windows 10 Media Creation Tool. I snagged a copy of the Media Creation Tool and used it to download a copy of Windows 10 at work, where I have access to a faster internet connection. Once the ISO file was built by the tool, I burned it to DVD for 3 other staff members who were interested. It’s legal to do this, by the way, since each person’s machine gets its own upgrade key during the upgrade process. For myself, I used the excellent Rufus utility to write the image to a flash drive. Although the Media Creation Tool can write the image to a flash drive, I’ve come to trust Rufus, especially thanks to its ability to create properly booting UEFI capable media.

Once at home, I simply inserted the flash drive, double clicked on setup.exe and let the upgrade process run. I had previously been running Windows 8.1 with all updates applied. The installation process ran smoothly and took about half an hour to move all my files, upgrade itself and get to the desktop. All of my software remained installed and I haven’t yet had any compatibility issues software wise. I did have some issues with my PC’s built in Bluetooth adapter, but a couple of hours after the upgrade, a driver had been installed in the background and the adapter was good to go again. After the upgrade, I did manually install Nvidia’s latest graphics driver, since I already had it downloaded and couldn’t wait on Windows Update to deliver the driver.

So far, I mostly like Windows 10. It’s been stable despite the upgrade, with no blue screens or crashes. As mentioned, all my software has remained working without issue. Speed wise it feels a little faster than Windows 8.1, but not by much. The speed difference may be more noticeable for users coming from Windows 7 or earlier. My biggest real gripe at the moment with Windows 10 is the severe regression in the OneDrive client, a very well moaned about topic on the net. Windows 8 and 8.1 spoiled me in that regard with placeholder sync, which let me see the files that were on my OneDrive without actually needing to download them. The Windows 10 version basically takes us back to the Windows 7 version of the client, where you have to choose which folders and files to sync, which will then chew up space on your hard drive. I am not happy at all with this change, but I am holding out hope that the new client that should be here by the end of the year will offer a better experience.

One small note: my copy of Windows 10 wouldn’t activate until a day after the install. While I kept thinking that it was somehow related to my Windows 8.1 key, it was simply that the activation servers were getting hammered into oblivion. Over 14 million people upgraded in the first 24 hours, so I am not surprised that I struggled to activate. I am assuming that now, almost 2 weeks later, activation is happening immediately as per normal again.

It’s been a common refrain in the reviews I’ve seen on the net that if there’s one thing Windows 10 needs, it’s more polish. Lots of little fit and finish issues keep cropping up as older legacy parts of Windows are moved into the modern framework. Different right click menus, a Settings app that isn’t quite Control Panel yet, out of place icons etc. all need some time and attention before Windows 10 becomes its own unique system. With the promise of Windows as a Service, it’s likely that many of these issues will go away with time as the system keeps being updated and improved. One thing is for sure, it’s going to be an interesting ride indeed.

The long hunt for a cure

At the end of March 2014, our school took ownership of a new Intel 2600GZ server to replace our previous HP ML350 G5 server, which was the heart of our network. The HP had done a fantastic job over the years, but was rapidly starting to age and wasn’t officially supported by Windows Server 2012 R2. Our new server has 32GB of RAM, dual Xeon processors, dual power supplies, 4 network ports and a dedicated remote management card. Although a little pricier than what I had originally budgeted for, it matched what the HP had and would earn its keep over the next 5-7 years’ worth of service.

After racking and powering up the server, I installed firmware updates and then Server 2012 R2. The install was quicker than on any other server I’ve done in the past, thanks to the SSD boot volume. After going through all the driver installs, Windows Updates and so on, the server was almost ready to start serving. One of the last things I did was to bond all 4 network ports together to create a network team. My thinking was that having a 4Gb/s team would prevent any bottlenecks to the server when under heavy load, as well as provide redundancy should a cable or switch port go faulty. A good idea in theory, but in reality I’ve never had a cable or port in the server room go bad in 6+ years.
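
For reference, creating that kind of team on Server 2012 R2 is a one-liner in PowerShell (a sketch; the team name and NIC names are placeholders, use whatever Get-NetAdapter reports on your hardware, and the switch ports need a matching LACP configuration):

    # Bond four ports into a single LACP team
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic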

Looking back now, I’m not sure exactly why I bothered creating a team. While the server is heavily used as a domain controller, DHCP, DNS and file server, it never comes close to saturating 1Gb/s, let alone 4. Almost every computer in the school is still connected at 100Mb/s, so the server itself never really comes under too much strain.

Either way, once everything was set up, I proceeded to copy all the files across from the old HP to the new Intel server. I used Robocopy to bulk move the files, and in some cases needed to let the process finish overnight since there were so many files, especially lots of small files. Data deduplication was turned on, shares were shared and everything looked good to go.
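
For anyone curious, the bulk copies were done with Robocopy along these lines (the paths and switches here are illustrative rather than my exact command; /COPYALL keeps the NTFS permissions and /MT is what makes a huge small-file copy bearable):

    # Copy the old share to the new server, retrying once on locked files and logging the run
    robocopy \\OLDSERVER\Share D:\Shares\Share /E /COPYALL /R:1 /W:1 /MT:16 /LOG:C:\Temp\sharecopy.log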

When school resumed after the holidays, the biggest problem came to light on the very first morning: users were unable to simultaneously access Office files. We have a PowerPoint slideshow that is run every morning in the register period and has all the daily notices for meetings, events, reminders, detention etc. Prior to the move, this system worked without fault for many years. After the move, the moment the 2nd or 3rd teacher tried to access the slideshow, they would get this result:

[Image: WP_20140409_001]
The green bar of doom would crawl across the navigation pane, while an odd Downloading box would appear, take forever to do anything and tend to lock Explorer up. Complaints naturally came in thick and fast, and the worst part was that I couldn’t pinpoint what the issue was, aside from my suspicion that the new SMB3 protocol was to blame. I had hoped that the big Update 1 release that shipped for Windows 8.1 and Server would help, but it didn’t. Disabling SMB signing didn’t help either. At one point, my colleague and I even installed Windows 8.1 and Office 2013 on some test machines to try and rule out that possibility, but they ended up doing the same thing. As a stop gap measure, I made a dedicated Notices drive on the old HP, which was still running Server 2008 and which ran fine with concurrent access to the same file. Online forums weren’t any real help and none of the other admins in Cape Town I spoke to had encountered the problem either.

In the school holidays just gone by, we finally had a decent gap between other jobs to experiment on the new server and see if we could correct the problem. I broke the network team, unplugged 3 of the 4 cables and disabled the LACP protocol on the switch. After reassigning the correct IP to the now single network port, we did some tests opening up files on 2 and then 3 computers at the same time. We opened 15MB Word documents, 5MB complicated Excel files, 200MB video files and more. The Downloading box never showed up once. Unfortunately, without heavier real world testing by the staff, I don’t know if the problem has been resolved once and for all. I intend to move the Notices drive across during the next school holiday and we will see what happens after that.
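
For completeness: breaking the team itself was the easy part. On Server 2012 R2 it is a single PowerShell command (a sketch, assuming the team was created with the placeholder name used earlier; the server’s static IP then has to be reassigned to the remaining physical NIC by hand):

    # Remove the LACP team, returning the ports to stand-alone NICs
    Remove-NetLbfoTeam -Name "Team1"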

Chalk one up for strange issues that are almost impossible to hunt down.
