Upgrading Windows 10 via WSUS

November 15, 2016

Windows 10 is supposedly the “last” consumer Windows edition Microsoft will release. While the version number will stay at 10, over time the whole OS will mature, grow and mutate into something that looks and feels very different from the original July 2015 release. One side effect of this is that in a corporate environment using WSUS, new versions of Windows 10 can be deployed as an in-place, fully automatic upgrade, the same way any other patch or service pack is installed. I was curious to see how this worked, so I approved the Anniversary Update (also known as version 1607) for installation at work and let my PC download the update.

Sure enough, the process was the same as what my home PC went through when it upgraded to 1607. A couple of update screens and quite some time later, I was back at my desktop, duly upgraded. Everything was still in place, bar the RSAT pack, which had to be updated to a version compatible with v1607. Overall, an extremely smooth and hands-free process, just time-consuming. I imagine it would easily take two or three times as long on a machine with a mechanical hard drive instead of an SSD.

That being said, there was one major problem: checking for updates from WSUS broke due to a bug in 1607. Windows 10 1607 would start to search for updates from the configured WSUS server, only to have multiple background services crash repeatedly, with no indication to the user. To the end user, it simply looked like the search was stuck at 1% and never moved from there. Apparently, if one left the process running long enough, updates would eventually download. This is obviously an unacceptable bug, and Microsoft were made aware of it. They promised a fix in one of the monthly update rollups, which was subsequently delivered and verified as having fixed the problem. That leaves a chicken-and-egg situation: you deploy the 1607 upgrade via WSUS, but the freshly upgraded machine then struggles to scan against WSUS until it receives the very Cumulative Update that fixes the problem.

You could manually install the update, but this becomes unwieldy in a large organisation. If deploying Windows 10 via deployment tools, you could make sure that the base image has the update injected already, which prevents the issue from cropping up in the first place. Sadly, the 1607 update is delivered from WSUS as an encrypted ESD file. While it is possible to decrypt this and inject the update, I don't know if it's possible to convert the result back into an ESD file. Even if you could, the checksum wouldn't be valid and WSUS would probably refuse to work with the modified file.
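For the deployment-tool route, the general idea is to mount the install image, add the Cumulative Update package and then commit the change back into the image. Below is a minimal sketch of that DISM workflow, driven from Python purely for illustration; the image path, mount directory and update filename are placeholders rather than anything real, and DISM needs to run from an elevated session.

```python
# Sketch: inject a Cumulative Update into a Windows 10 install image with DISM.
# All paths and the update filename are placeholders for illustration only,
# and DISM must be run from an elevated (administrator) session.
import subprocess

IMAGE = r"C:\images\install.wim"                            # base deployment image
MOUNT_DIR = r"C:\mount"                                     # empty mount directory
UPDATE = r"C:\updates\win10-1607-cumulative-update.msu"     # hypothetical CU package

def dism(*args):
    """Run DISM and raise immediately if a step fails."""
    subprocess.run(["dism", *args], check=True)

dism("/Mount-Image", f"/ImageFile:{IMAGE}", "/Index:1", f"/MountDir:{MOUNT_DIR}")
dism(f"/Image:{MOUNT_DIR}", "/Add-Package", f"/PackagePath:{UPDATE}")
dism("/Unmount-Image", f"/MountDir:{MOUNT_DIR}", "/Commit")   # write changes back
```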

There's always a possibility Microsoft could revise the 1607 update in WSUS so that the ESD file comes with the latest Cumulative Update already integrated and works correctly out of the box. I recall something like this happening with the November 1511 update, which I declined as it was another 3-4 GB download. Unfortunately, one doesn't know when this will happen, or whether it will happen at all. With the Creators Update due out in early 2017 (March?), it's possible that Microsoft will use that as the new baseline. If I'm right, once the Creators Update is approved in the WSUS console it will supersede the Anniversary Update, and the problem will be sidestepped by skipping the 1607 update entirely.

I look forward to eventually rolling out Windows versions like this, though I think it would help if every computer had an SSD inside it first. Mechanical hard drives really do slow things down these days. A nice side effect is that Windows shouldn't end up suffering from “Windows rot”, as the Windows directory is replaced with each major upgrade. This should keep performance up compared to something like Windows 7, which gets bogged down after years' worth of updates. Interesting times ahead…


SparkPost

November 6, 2016

About a month ago, I received an email at work from the company that develops our school administration software, advising us that they were planning to migrate their backend email delivery provider from Mandrill to SparkPost, and that if we wanted to keep mail delivery free we would need to sign up for an account with SparkPost. The email was poorly worded, as both my colleague and I assumed the changeover was going to happen in a matter of days. Since our school sends out tons of email via the admin package, I acted quickly and got us signed up for a free account.

After signing up, I got the company to switch the backend provider on our account over to SparkPost, which worked correctly. I was advised to set up SPF and DKIM records in our DNS zone so that mail sent via SparkPost would be far less likely to be rejected as SPAM. It took me a bit of research to work out the correct way to set up these records, especially the SPF record. We have mail leaving our domain via both SparkPost and the servers in our MX records, so both need to be covered. A catch is that an SPF record cannot require more than 10 DNS lookups or it will not be considered valid. It took me a bit of fiddling to find the right balance, but I got it done eventually. As a bonus, the SPF record should help get mail delivered to Gmail recipients quicker – we've often had long delays in mail reaching Gmail in the past, probably due to the missing SPF record.
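For anyone wrestling with the same 10-lookup limit: the terms that cost a DNS lookup are include, a, mx, ptr and exists, plus the redirect modifier, while ip4, ip6 and all are free. Here is a rough sketch of a checker for that rule; the example record at the bottom is made up for illustration and is not our real one.

```python
# Rough SPF sanity check: count the terms that trigger extra DNS lookups
# (RFC 7208 caps these at 10). The example record below is illustrative only.
def count_spf_lookups(spf_record: str) -> int:
    lookup_terms = {"include", "a", "mx", "ptr", "exists", "redirect"}
    count = 0
    for term in spf_record.split()[1:]:                # skip the leading "v=spf1"
        term = term.lstrip("+-~?")                     # drop any qualifier prefix
        name = term.split(":", 1)[0].split("=", 1)[0].split("/", 1)[0]
        if name in lookup_terms:
            count += 1
    return count

record = "v=spf1 mx include:_spf.mailprovider.example ip4:203.0.113.0/24 ~all"
print(f"{count_spf_lookups(record)} DNS lookups used (limit is 10)")
```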

Once term started and users began sending mail, some problems came to light: a lot of mail was simply being rejected as SPAM, and pulling out the list of automatically suppressed email addresses was impossible via the web interface. The SPAM problem comes from the fact that some of the IP addresses in SparkPost's free-tier pool have been tainted by other users. Since we have no control over which server sends the mail, it's a crapshoot as to which mail gets through and which is blocked as SPAM. One solution is to upgrade to a paid tier and buy dedicated IP addresses, but this was not something we had budgeted for, so it isn't a viable option just yet.

I contacted their support to ask for help. I got a reply that apologised, told me they were terminating accounts that were spamming, and said they had made some change that would hopefully help our account. Time will tell if that really is the case. We cannot afford to have 20-odd percent of our mail routinely fail to deliver because it's identified as SPAM due to a tainted IP address.

Getting the suppression list was a challenge. I found a command on their blog which pulls it from the command line using cURL, a Unix tool. This prints a raw blob of JSON to the command line, listing all the suppressed addresses and the reason each one was suppressed. It took me quite some time to figure out that I could redirect this output to a text file using the > operator. Then I needed to process it into something I could use, preferably a CSV file for import into Excel. Thankfully I found an online JSON-to-CSV converter that does just that. Armed with the now useful CSV file, I imported it into Excel and made a spreadsheet for our registrar to follow up with the relevant parents so that we can get correct email addresses.
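In hindsight, the cURL-and-converter dance could be collapsed into one small script that calls the API and writes the CSV directly. A rough sketch follows; the endpoint URL, field names and API key are my own assumptions and placeholders, so check them against SparkPost's current API documentation before relying on it.

```python
# Sketch: pull the SparkPost suppression list and write it out as a CSV for Excel.
# The endpoint, field names and API key below are assumptions/placeholders --
# verify them against SparkPost's current API documentation.
import csv
import requests

API_KEY = "YOUR-SPARKPOST-API-KEY"   # placeholder
URL = "https://api.sparkpost.com/api/v1/suppression-list"

response = requests.get(URL, headers={"Authorization": API_KEY})
response.raise_for_status()
entries = response.json().get("results", [])

with open("suppressions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["recipient", "type", "description", "updated"])
    for entry in entries:
        writer.writerow([
            entry.get("recipient", ""),
            entry.get("type", ""),
            entry.get("description", ""),
            entry.get("updated", ""),
        ])

print(f"Wrote {len(entries)} suppressed addresses to suppressions.csv")
```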

This whole adventure with SparkPost has taught me quite a bit about email on the internet, especially when you send at bulk scale. It's also taught me that spammers have thoroughly ruined email as a communication tool. I struggle to explain to staff in plain English why exactly their email isn't getting delivered, as the concepts are not straightforward for people who don't have the faintest clue how email delivery actually works.

Still, SparkPost should be useful in the long run, especially if they get their tainted-IP problem sorted out. I have more insight into the process now than I did when Mandrill was the backend delivery tool. I get the feeling that, at this point in time, SparkPost is still very much a programmer's tool rather than something geared towards end users. Hopefully in time SparkPost will make their website more user-friendly and capable, which I think will greatly elevate the service, especially for a non-programmer like myself who simply needs to get something done.


Those WTF moments

October 14, 2016

Sometimes in the world of IT, you have moments where all you can do is scratch your head and ask WTF happened. Such was the case on Monday this week, before I even got back to work and before the term started. I received a text from my head of IT saying that he was unable to access one of our (virtual) servers to post the daily PowerPoint notice we show our learners. In between getting dressed and packing my bags for work, I remoted in to take a look.

I couldn't see the server on the network, nor could I Remote Desktop into it. Surprisingly, ping worked, but that seemed to be about it. Hyper-V Manager revealed the server was on, and I could connect via the console. I picked up a clue as to what could be wrong when I noticed that the Heartbeat status wasn't being reported to Hyper-V Manager, which indicated that the heartbeat service had stopped running for some reason.

The previous Friday I had rebooted all our servers to finish their update cycles and to prepare them for the term ahead. This particular server had come up from the reboot fine, so I didn't do an in-depth check. It had never given me an issue like this before, so I made the mistake of assuming all was well. Anyway, after connecting, I could see that none of the Hyper-V services were running inside the VM, and trying to start them manually didn't work. I tried to upgrade the Integration Components, since Hyper-V indicated that my other VMs needed an update for them, but no matter how I ran the setup file, it would not execute on the sick VM. By this time I had to leave for work, so the problem had to wait until I got in.

After arriving at work and settling in, I cloned the VM to my PC so I could experiment more easily. Numerous attempted cures all failed, until I came across a post on the internet describing the same symptoms I was seeing. It linked to a Microsoft KB article with steps on how to fix the problem. The KB dated from a few years back, so I found it bizarre that the problem only hit us now. Still, the sick server runs Server 2008, so I went ahead and made the documented registry change. A reboot later and the cloned server on my PC was suddenly working normally again: all the relevant services started up correctly and the server was back in action.

Since the fix was successful on my local cloned image, I went ahead and made the same change on the sick VM itself. Sure enough, one reboot later and we were back in business. In the aftermath, I spent a lot of time trying to figure out what caused the issue. While I did have IIS installed on that server years ago, I don't recall there ever being an SSL certificate on it. How exactly we ended up in this situation is probably something I'll never fully know. As I said to my colleague, we've both seen random stuff over the years, but this one was a WTF moment in a big way.


Salvaging old equipment, a.k.a. dumpster diving

Last week I watched a couple of videos on YouTube where old computers were rescued from the kerb or dumpsite and refurbished for use. This saves on e-waste and also provides cheap computers to those who cannot afford a new machine. This got me thinking about all the equipment I have discarded, sold or donated while at my school, as well as the actual value of refurbishing old equipment.

As time has gone on, I estimate I've gotten rid of over 100 old computers, ±40 projectors, ±20 printers and countless individual parts such as dead hard drives, power supplies, motherboards and so on. Some of this went into the trash, while the rest was donated or sold off to raise some funds for the school. In fact, we cleaned up six computers for sale over the first week of the holidays. However, refurbishing is time-consuming, especially with old equipment like that. The process goes something like this:

  • Physically inspect the chassis to look for loose panels, missing screws, worn/sticky buttons etc.
  • Open the chassis and blow out all the dust using our air compressor, then perform a visual inspection of the motherboard, looking for swollen/blown capacitors, loose cable connections etc.
  • Power on the PC and listen for fans that need lubrication. Most often this is the power supply, graphics card or chassis fan. A fan that grinds is a sure sign it will seize up completely in the not-too-distant future.
  • Perform lubrication on the fans that require it, which means removing the part from the PC to get to the fan lubrication cover.
  • Install as much RAM as possible as well as a working DVD drive if required.
  • Wipe the hard drive and install Linux Mint/FreeDOS as a free operating system, as we cannot sell the computers with Windows on them.
  • Leave the PC running for a while to confirm basic stability.

This leaves us with a working PC, but it is time-consuming, even when a machine only needs minimal checks and a dust blow-out.

It made me think about how far back one can and should go when refurbishing old PCs. While there are plenty of Pentium 4 and Pentium D-based computers out there, they have the disadvantage of running very hot, using a lot of electricity and, in the Pentium 4's case, being single-threaded chips. Couple that with IDE or SATA 1-speed hard drives and the computer is unpleasant to use, even with a freshly installed operating system. While such a machine will still serve a charity or a needy person who has never had one before, the economics of running something that old weigh heavily against it.

Printers are easier, in the sense that they generally just need new toner or ink cartridges. The problem with older devices, though, is that they may use the now-defunct parallel port or, as HP loves to do, have no drivers for modern versions of Windows. I had to replace all our old HP LaserJet 1018s in the school because they flat out refused to run stably under Windows 7. I've got a four-colour laser MFP in the office that I have to discard, as the device will not behave properly under anything newer than Windows Vista at best. HP have not put out usable modern drivers for this machine, instead recommending that you buy a modern, supported printer. This to me is a tragedy, as the device has fewer than 8000 pages on the counter. There is nothing physically wrong with the machine, but unless we run it on an old version of Windows, it's little more than a glorified doorstop.

Projectors have their own problems: the lamp needs replacing, the colour wheel dies (on DLP models) or the internal LCD panels fail (on LCD models). When you ask for a quote on a repair or a new lamp, it often turns out to be more cost-effective to buy a new projector than to repair the existing one. Not to mention that most older projectors lack modern connections like HDMI or network ports, so they are less useful in today's changing world.

In the end, this is all part of the vicious cycle of technological progress. Unless we can somehow convince manufacturers to support their products for longer, we are going to be locked into producing tons of e-waste. Reusing old computers is a good start, but there comes a point where it is no longer viable to keep older equipment going. One thing that could definitely be improved is visibility for e-waste recyclers. These firms can strip and salvage equipment properly, getting the components recycled and avoiding the pollution caused by toxic chemicals leaching out of electronics as they decompose. It would also help if more people took an interest in repairing their own stuff when it breaks, rather than just throwing it away. There's a thrill that comes from fixing something with your own hands, a thrill that more people should want to experience.

UEFI booting observations

It's exam time at my school, which means things quieten down a bit. This leaves me with some free time during the day to experiment and learn new things, or to attempt things I have long wanted to do but haven't had the time for. I've used this last week to play around with deploying Windows 7 and 10 to PCs for the purpose of testing their UEFI capabilities. While Windows 7 can and does boot via UEFI, it really doesn't gain any benefits over BIOS booting, unlike Windows 8 and above. I was more interested in testing the capabilities of these motherboards, so I could get a clearer idea of the hardware issues we may face when we move to Windows 10.

Our network comprises only Intel-based PCs, so all my experience so far is limited to that platform. What I've found boils down to this:

  • Older-generation Intel 5 and 6 series chipset motherboards from Intel themselves are UEFI-based, but present interfaces that look very much like a traditional console-style BIOS. The only real clue is an option under the Boot section to turn on UEFI booting.
  • These older motherboards don't support Secure Boot or the ability to toggle the Compatibility Support Module (CSM) on and off – the UEFI version on these boards predates those features.
  • I have been unable to UEFI PXE network boot the 6 series motherboard, and haven't yet tried the 5 series boards. While I can UEFI boot the 6 series from a flash drive, DVD or hard drive, I cannot do so over the network. Selecting the network boot option boots the PC into what is essentially BIOS compatibility mode.
  • The Intel DB75EN motherboard has a graphical UEFI, supports Secure Boot and can toggle the CSM on and off. Interestingly, when the CSM is on, you cannot UEFI PXE boot – the system drops into BIOS compatibility mode. You can only UEFI PXE boot when the CSM is off. This is easy to tell, as the network boot interface looks quite different between CSM and UEFI modes.
  • Windows 7 needs the CSM enabled on the DB75EN boards if you deploy it in UEFI mode, at least in my experience with PXE boot. If you don't turn the CSM on, the boot will either hang at the Windows logo or complain that it cannot access the correct boot files. I have yet to try installing Windows 7 on these boards from a flash drive in UEFI mode to see what happens in that scenario.
  • I haven’t yet had a chance to play with the few Gigabyte branded Intel 8 series motherboards we have. These use Realtek network cards instead of Intel NICs. I’m not a huge fan of Gigabyte’s graphical UEFI, as I find it cluttered and there’s a lot of mouse lag. I haven’t tested a very modern Gigabyte board though, so perhaps they’ve improved their UEFI by now.

UEFI Secure Boot requires that all hardware supports pure UEFI mode and that the CSM be turned off. I can do this with the boards where I'm using the built-in Intel graphics, as these fully support both CSM mode and pure UEFI. Other PCs with GeForce 610 adapters in them don't support pure UEFI boot, so I am unable to use Secure Boot on them, which is somewhat annoying, as Secure Boot is good for security. I am probably going to need to start using low-end GeForce 700 series cards, as these support pure UEFI mode and so will work with Secure Boot as well.
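A quick way to confirm from inside Windows whether a machine really came up with Secure Boot enabled is to read the state Windows records in the registry. The sketch below assumes the commonly documented key HKLM\SYSTEM\CurrentControlSet\Control\SecureBoot\State and its UEFISecureBootEnabled value; on machines booted in BIOS/CSM mode the key may simply not exist.

```python
# Sketch: check whether Windows reports Secure Boot as enabled.
# Assumes the commonly documented registry location; machines booted in
# BIOS/CSM mode may not expose this key at all.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\SecureBoot\State"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "UEFISecureBootEnabled")
        print("Secure Boot is", "enabled" if value == 1 else "disabled")
except FileNotFoundError:
    print("No Secure Boot state found - probably booted in BIOS/CSM mode")
```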

It's been a while since we bought brand-new computers, but I will have to be pickier when choosing motherboards. Intel is out of the motherboard game and I am not a fan of Realtek network cards either – this narrows my choices quite a bit, especially as I also have to be budget-conscious. At least I know that future boards will be far better behaved with their UEFI, as all vendors have now had many years to adjust to the new and modern way of doing things.

Keeping Adobe Flash Player updated on a network

The Adobe Flash Player plugin is a pain in the arse. It's a security nightmare, with more holes in the codebase than Swiss cheese. It seems every other week Flash makes the headlines when some security vulnerability or another is discovered and exploited. Cue the groans from network admins and users around the world as Flash has to be updated *yet* again. Unfortunately, one can't get rid of it permanently just yet, as too many websites still rely on it. While you could get away with not using it at home, in a school where multiple people use each computer and visit different websites, one doesn't really have much choice but to make sure Flash is installed.

On Windows 7 and below, the situation with Flash is a bit crazy. There's a version for Internet Explorer (an ActiveX control), a version for Firefox that is installed separately, and Google Chrome bundles its own copy – I'm not sure about smaller or niche browsers, but I think modern Opera inherits Flash via its relationship with Chrome's engine. Thankfully, with Windows 8 and above, Flash for Internet Explorer is distributed via Windows Update. It's automatic and contains no third-party advertisements, antivirus offers, browser bundling and so on – all things Adobe have done in the past with their Flash installers. Trying to install Flash from Adobe's website on Windows 8 and above will fail, which at least may help kill off the fake Flash installer routine malware authors use to trick unsuspecting users.

The usual method of installing Flash is highly cumbersome if you run a large network – not to mention that EXE files are much less flexible than MSI files when it comes to deployment and silent-install options. Thankfully, Adobe do make Flash Player available in MSI format, but it's not easy to get hold of directly: you have to sign a free enterprise deployment license to legally distribute Flash and Reader in your organisation. The problem then becomes how to distribute the updates, especially if you aren't running System Center or a similar product. Enter WSUS Package Publisher, which is indispensable if you make use of WSUS on your network.

WPP lets you use the enterprise update catalogs that Adobe and some other vendors offer. Using this, you essentially push the updates into your existing WSUS infrastructure, where they are delivered to client computers like any other update. One thing you need to do is tweak each update as you publish it so that it isn't applicable to computers running Windows 8 upwards – if you don't, the update will download on newer Windows versions but will repeatedly fail to install and will need to be hidden. The other thing I discovered is that the silent-install command-line switch needs to be deleted: when an MSI file is delivered via WSUS, it is installed silently anyway. I learnt this the hard way, since one of the Flash updates I imported was failing to install on every computer. Turning on MSI logging and searching for the error code eventually led me to what was wrong, after which I corrected the problem and now know what to do with every new Flash update that comes out.

Since using WPP, I've felt happier about the safety of my network, as I can usually get Flash pushed out within 2-3 days of the initial download. This is far better than having to visit each computer manually to keep Flash up to date!

Curating digital photos

The rise of digital photography over the last 15 or so years has had many side effects. The most obvious is that analog film cameras have largely, though not completely, vanished. No longer do you have to pay and wait for processing, hoping that your photos came out or that you loaded the right speed of film. Digital gives you instant feedback, either on the camera or once the files are transferred onto your computer.

My school is 59 years old as of this post. We have an archive of digital photos reaching back to the year 2000. The previous 43 years of school history are on film, but sadly most of whatever was taken was either lost or destroyed as the years went by. It seems there wasn't much effort to archive and protect the slides and negatives at the time. While some slides and negatives survive, a large portion of the school's history may be irredeemably lost. This is a great pity, as what I have found, scanned and posted online has brought back many happy memories to people, most of whom may never have seen those photos before.

Digital photos are far easier to store, back up and replicate to more than one location, which gives them a huge amount of protection compared to analog slides and negatives. However, the increase in megapixels and sensor quality over the years, combined with ever larger memory cards, has led to an unforeseen consequence: we have exponentially more photos now than we have ever had before. It's so easy to take hundreds of shots of an event now, compared to the days of film when you were limited by how many spools you had and how much you were prepared to pay for developing and printing. Not only that, but since digital is now so ubiquitous, more people can contribute photos than ever before.

To give an example: imagine a school sports day, with activities all over the show that one photographer can't cover on their own. Now imagine there are 5-10 students taking photos as well, covering all areas. Say each person takes 250 photos: with 6-11 people shooting, you suddenly end up with 1500-2750 photos from a single event – and that's a conservative figure. Obviously not all of these photos are going to be useful, which is where the time-consuming art of weeding out the bad ones comes in. Most amateur student photographers I've spoken to never take the time to actually curate their photos. In fact, most staff members who have taken photos of school events haven't done so either. It's too easy to simply dump the whole contents of a memory card into a folder on the server and leave it there. This is what has happened with our digital archives over the years, to the point where we have something like 138000 files taking up over 480 GB of space on our photos network share.

That number was a lot higher before I decided to take on the task of curating and cleaning up the mess the share had become. Not all of the files on the drive were photos: I've deleted a number of Photoshop PSDs, PowerPoint presentations, AVI and MP4 movie clips and other odds and ends. I've also deleted a huge number of duplicates. Last week I brought home a fresh copy of all the files on the drive and imported it into Adobe Lightroom. It took a long time, but Lightroom counted something like 128000-odd photos. I'm not sure about the discrepancy between that figure and what Windows Explorer tells me, but I think there may have been further duplicates that Lightroom ignored on import.
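For anyone facing a similar mess, a rough way to size up the duplicate problem before involving Lightroom is to hash the files and group identical ones. Here's a minimal sketch; the share path is a placeholder and the extension list is just a guess at what might be on such a drive.

```python
# Sketch: count image files on a share and group exact duplicates by content hash.
# The share path below is a placeholder; point it at your own photo folder.
import hashlib
from collections import defaultdict
from pathlib import Path

SHARE = Path(r"\\server\photos")                 # placeholder path
EXTENSIONS = {".jpg", ".jpeg", ".png", ".tif", ".tiff", ".nef", ".cr2", ".dng"}

hashes = defaultdict(list)
for path in SHARE.rglob("*"):
    if path.suffix.lower() in EXTENSIONS and path.is_file():
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        hashes[digest].append(path)

total = sum(len(paths) for paths in hashes.values())
dupes = {h: p for h, p in hashes.items() if len(p) > 1}
print(f"{total} image files, {len(dupes)} sets of exact duplicates")
for paths in list(dupes.values())[:10]:          # show a sample of duplicate groups
    print(" - " + ", ".join(str(p) for p in paths))
```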

Now, with the power of Lightroom, I've been able to really start going through some of the albums. I've curated five subfolders so far, rejecting and deleting up to half of the photos in each case. Factors I look for when deleting photos include the following:

  • Focus. My most important metric, really. 99% of the time, I'm going to delete out-of-focus photos.
  • Resolution. Photos of 640×480 or smaller are of no real use to us, even as an archive, so I made the call to delete these, even if they are the only record of an event. (Flagging these can be automated – see the sketch after this list.)
  • Motion blur. Too much of this ruins a shot. It usually happens because the shutter speed was too slow, and it leaves the photo looking strange.
  • Framing. Things like cut off heads, people too distant, people partially in the edges of photos and so forth usually end up being binned.
  • Damaged files. Whether caused by bit rot or by a faulty camera or memory card, these are tossed.
  • Noise. Too much digital noise, due to high ISO settings or older sensors, leads to very unpleasant, soft, noisy photos. I rescue what I can, but otherwise these too are binned.
  • RAW files. RAW files are fantastic for many things, but as part of an archive they are problematic. Every camera manufacturer has its own RAW format, which doesn't always open well in third-party software. Adobe's DNG format is an alternative, but unless you take extra steps, those files aren't easily viewable either. By contrast, JPEG files are universal and can be opened on just about any platform in existence.
  • Severe over- or under-exposure. Files that are badly exposed in either direction are usually useless, especially if they are JPEGs straight out of the camera.
  • Too-similar photos. When you shoot in burst mode, you'll often end up with many near-identical frames with only small variations between them. I usually pick the best one or two of the lot and delete the rest. This is especially true of sports and action shots.
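As mentioned under the resolution point, some of these checks can be automated before the photos ever reach Lightroom. The sketch below uses the Pillow imaging library to flag photos at or below 640×480 and files that can't be opened at all; the folder path is a placeholder, and this is just one way you might wire it up rather than a finished tool.

```python
# Sketch: flag low-resolution photos and unreadable/damaged files before importing
# them into Lightroom. Requires the Pillow library (pip install Pillow).
from pathlib import Path
from PIL import Image, UnidentifiedImageError

FOLDER = Path(r"D:\photo-archive\2016-sports-day")   # placeholder folder
MIN_WIDTH, MIN_HEIGHT = 640, 480                      # threshold from the list above

for path in sorted(FOLDER.glob("*.jpg")):
    try:
        with Image.open(path) as img:
            img.verify()                  # cheap integrity check for broken files
        with Image.open(path) as img:     # reopen: verify() leaves the image unusable
            width, height = img.size
        if width <= MIN_WIDTH and height <= MIN_HEIGHT:
            print(f"LOW RES ({width}x{height}): {path.name}")
    except (UnidentifiedImageError, OSError, SyntaxError):
        print(f"DAMAGED/UNREADABLE: {path.name}")
```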

I still have an incredibly long way to go. I've deleted well over 20000 files by now, but a mountain still lies in front of me. Of course, as the year goes on and more photos get added to the 2016 archive, that mountain is only going to grow. Still, I've made a start and I am happy with what I've done so far. Thanks to the process, I've been able to upload many albums of decent, usable photos to our Facebook pages so that pupils, parents and staff can view, tag, share and download them.

In closing, I would suggest that anyone who enjoys their photo collection take the time to curate it properly. It isn't always easy to delete photos, especially if they are the only record of a special event or person. However, unless you learn to be decisive, the collection will eventually grow to the point of overwhelming you. Take time to savour the quality, not the quantity.