The slow migration to Office 365

When it comes to corporate mail servers, many would argue that Microsoft Exchange is the king of the hill. It’s a behemoth of a product that powers countless offices around the world and provides features they depend on. Set up correctly, Exchange has been one of those products that, in my experience, just hums along quietly, doing its job without demanding a lot of attention.

At the end of 2009, when my first colleague and I migrated my school’s network, Exchange 2007 was our mail server of choice. Not only would it provide everything we needed, it would offer many new features to a school accustomed to a very broken Pegasus Mail/Mercury/Novell NetWare combination. It also helped that we got Exchange free of charge under the national SA Government agreement with Microsoft, which ended about 6 months after we installed it.

Since then, this Exchange server has processed millions of emails and survived moving from a decrepit physical machine to XenServer and finally to Hyper-V. I’ve probably had 10 incidents or fewer with this server over the last 7 years. External internet connectivity or issues with upstream mail servers notwithstanding, our mail server has done its job perfectly.

Like all things in technology, however, there comes a time to move on. The web interface of Exchange 2007 has become really dated and leaves you tied to Internet Explorer for the best results. Those were the days before Microsoft became cross-browser friendly, when the “lite” version of webmail was seriously crippled. While I would have upgraded us internally to a later version, we couldn’t afford to. After the government agreement expired, we were stuck. Quotes I received for updated versions made my eyes water, and no one could quite work out how to price software for schools. I think every reseller I contacted only knew how to deal with the corporate world.

In the intervening years, Microsoft essentially resolved my dilemma by introducing and refining Office 365 for Education. Originally branded Live@Edu, the product provided some nice perks – a 50GB mailbox, huge SkyDrive (as it was then called) storage and so on. The problem was that the product lacked unity and cohesiveness at the time. Live@Edu folded into Office 365 and things have only improved since then. For no cost to us, we get access to the latest version of Exchange (albeit Exchange Online), a 50GB mailbox, superior spam filtering, access to Microsoft Teams and all the other applications available to education users. As long as there is competition with Google’s G Suite for Education, we all stand to benefit from a rivalry that forces Microsoft to up their game.

Towards the end of 2015, I decided to migrate all my student mailboxes over to the cloud, since students had minuscule amounts of mail compared to staff. It got their mailboxes offsite and gave me some valuable experience of how the migration process would work. It took some reading up, but the process goes something like this:

  • Sync your on-premises Active Directory to Office 365 with the new sync tool, which is far better than the old version and what was possible in the earlier days.
  • If you want to simply connect to on-premises Exchange and migrate mailboxes directly, you need a working Outlook Web Access instance secured by an SSL certificate. This lets Office 365 sync the selected users’ mailboxes to the cloud (see the sketch after this list).
  • The process takes a while, especially over slow connections. The faster your internet speeds, especially upload speed, the better.
  • You are limited to either a cutover or a staged migration for Exchange 2007. Cutover means moving everyone at once, then changing your DNS MX records so that mail flows directly to Office 365. Staged is slower: you move a few mailboxes at a time and keep using the onsite server as the engine for routing mail. There’s slightly more work with staged, but it lets you be methodical and careful.
  • You can upload Outlook PST files as another way of moving mailboxes, but it has the same issue as an online migration – you need good upload speed.
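
Since Office 365 will refuse to connect if that OWA endpoint’s certificate is invalid or expired, it’s worth a quick sanity check before kicking off a migration batch. Here’s a minimal Python sketch of such a check – the hostname is a hypothetical placeholder:

```python
# Verify that an Outlook Web Access endpoint presents a valid, unexpired
# SSL certificate - Office 365 needs this before it can pull mailboxes
# from on-premises Exchange. The hostname is a hypothetical placeholder.
import socket
import ssl
from datetime import datetime

HOST = "webmail.example-school.org"   # your OWA hostname here
PORT = 443

context = ssl.create_default_context()  # verifies the chain and hostname
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# A failed handshake above raises ssl.SSLCertVerificationError; if we get
# here, the certificate chain checked out. Show when it expires.
expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
print(f"Certificate for {HOST} is valid, expires {expires:%Y-%m-%d}")
```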

This year I started moving staff mailboxes over for the first time. I had planned to start only once our fibre optic internet connection was in, but the unexpected delays in getting our line installed have pushed me to start now, even over our horrible ADSL connection. I’ve now synced about 10 staff mailboxes over and given staff a manual on how to use the new interface. Some are already familiar with it, having had access via their universities or other institutes. The real challenge is identifying users who can adapt to the new interface and give feedback on the manual. This is easier said than done when you still have some staff who can barely work with the existing system, 7 years after it went online…

Eventually my goal is to have moved all mailboxes over to the cloud, with not one email misplaced during the journey. Once that is done, I intend to decommission my onsite Exchange server, as well as the Windows VM it runs in. It will be good not to have to support Server 2008 any more – one less old OS to worry about.

In short, there’s precious little reason to have an onsite Exchange server any more if your internet connection is fast enough. Microsoft does a better job of server uptime than we can do on our own, they have better spam filtering and they provide a package of products that is not only compelling, but free for education as well. The only real reasons to keep Exchange onsite are privacy or regulatory concerns, or needing some feature that Exchange Online can’t provide.

Upgrading Windows 10 via WSUS

November 15, 2016

Windows 10 is supposedly the “last” consumer Windows edition Microsoft will release. While the version will stay at 10, over time the whole OS will mature, grow and mutate into something that will look and feel very different from the original July 2015 release. One side effect of this is that in a corporate environment using WSUS, new versions of Windows 10 can be deployed as an in-place, fully automatic upgrade, the same way any other patch or service pack is installed. I was curious to see how this worked, so I approved the Anniversary Update (also known as version 1607) for installation at work and let my PC download the update.

Sure enough, the process was the same as what my home PC went through when it upgraded to 1607. A couple of update screens and quite some time later, I was back at my desktop, duly upgraded. Everything was still in place, bar the RSAT pack, which had to be updated to a version compatible with v1607. Overall, an extremely smooth and hands-free process, just time consuming. I imagine it would easily take two or three times as long on a machine with a mechanical hard drive instead of an SSD.

That said, there was one major problem: checking for updates from WSUS broke due to a bug in 1607. Windows 10 1607 would start to search for updates from the configured WSUS server, only to have multiple services crash repeatedly in the background, with no indication to the user. To the end user, it simply looks like the search is stuck at 1% and never moves. Apparently, if one leaves the process running long enough, updates will eventually download. This is obviously an unacceptable bug and Microsoft were made aware of it. They promised a fix in one of the monthly update rollups, which was subsequently delivered and verified as fixing the problem. That leaves a chicken-and-egg situation: you can deploy the 1607 upgrade via WSUS, but the freshly upgraded machines then struggle to fetch the very Cumulative Update that fixes the problem.

You could install the update manually, but this becomes unwieldy in a large organisation. If deploying Windows 10 via deployment tools, you could make sure that the base image has the update injected already, which prevents the issue from cropping up in the first place. Sadly, the 1607 upgrade is delivered from WSUS as an encrypted ESD file. While it is possible to decrypt this and inject the update, I don’t know if it’s possible to convert that back into an ESD file. Even if you could, the checksum wouldn’t be valid and WSUS would probably refuse to work with the modified file.
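
For a deployment image, the injection itself is straightforward: mount the image, add the cumulative update package, then commit the changes. Below is a rough sketch of that approach driving DISM from Python – all the paths and the update filename are hypothetical placeholders, and it needs an elevated prompt:

```python
# Inject a cumulative update into an offline Windows 10 install image with
# DISM, so freshly deployed machines never hit the 1607 WSUS scan bug.
# All paths and the update filename are hypothetical placeholders; run
# from an elevated prompt on a machine with DISM available.
import subprocess

WIM = r"D:\Deploy\install.wim"                    # image to service
MOUNT = r"D:\Mount"                               # empty mount directory
MSU = r"D:\Updates\windows10-kb-cumulative.msu"   # the cumulative update

def dism(*args: str) -> None:
    """Run dism.exe and raise if it reports an error."""
    subprocess.run(["dism", *args], check=True)

dism("/Mount-Wim", f"/WimFile:{WIM}", "/Index:1", f"/MountDir:{MOUNT}")
try:
    dism(f"/Image:{MOUNT}", "/Add-Package", f"/PackagePath:{MSU}")
    dism("/Unmount-Wim", f"/MountDir:{MOUNT}", "/Commit")
except subprocess.CalledProcessError:
    # Throw half-applied changes away rather than commit a broken image.
    dism("/Unmount-Wim", f"/MountDir:{MOUNT}", "/Discard")
    raise
```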

There’s always a possibility Microsoft could revise the 1607 update in WSUS so that the ESD file ships with the latest Cumulative Update installed and works correctly out of the box. I recall something like this happening with the November 1511 update, which I declined as it was another 3-4 GB download. Unfortunately, one doesn’t know when or even if this will happen; it’s also possible the problem will never be fixed. With the Creators Update due out in early 2017 (March?), it’s possible Microsoft uses that as the new baseline. If I’m correct, once the Creators Update is approved in the WSUS console, it will supersede the Anniversary Update, so the problem could be solved by bypassing the 1607 update entirely.

I look forward to eventually rolling out Windows versions like this, though I think it would be beneficial if every computer had an SSD inside it first. Mechanical hard drives really do slow things down these days. A nice side effect of this model is that Windows shouldn’t end up suffering from “Windows rot”, as the Windows directory is replaced with each major upgrade. This should keep performance up compared to something like Windows 7, which gets bogged down after years’ worth of updates. Interesting times ahead…


SparkPost

November 6, 2016

About a month ago, I received an email at work from the company which develops our school administration software, advising us that they were planning to migrate their backend email delivery provider from Mandrill to SparkPost, and that if we wanted to keep mail delivery free we would need to sign up for an account with SparkPost ourselves. The email was poorly worded, as both my colleague and I assumed the changeover was happening in a matter of days. Since our school sends out tons of email via the admin package, I acted quickly and got us signed up for a free account.

After signing up, I got the company to switch the backend provider on our account over to SparkPost, which worked correctly. I was advised to set up SPF and DKIM records in our DNS zone so that mail sent via SparkPost would be far less likely to be rejected as SPAM. It took a bit of research to work out the correct way to set up these records, especially the SPF record. We have mail coming from our domain via both SparkPost and our MX servers, so both need to be covered. A catch is that your SPF record cannot require more than 10 DNS lookups, or it will not be considered valid. It took a bit of fiddling to find the right balance, but I got it done eventually. As a bonus, the SPF record should help get mail delivered to Gmail recipients quicker – we’ve often had long delays in mail getting delivered to Gmail in the past, probably due to the lack of an SPF record.
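
Staying under that limit means actually counting the lookup-generating mechanisms (include, a, mx, ptr, exists and redirect). Here’s a minimal Python sketch using the dnspython library that counts the mechanisms at the top level of a domain’s SPF record – note that a proper validator would also recurse into each include target, since those lookups count too:

```python
# Count the DNS-lookup-generating mechanisms in a domain's SPF record.
# The 10-lookup limit covers include, a, mx, ptr, exists and redirect.
# Requires dnspython (pip install dnspython); dns.resolver.resolve()
# needs dnspython 2.x - older versions call it dns.resolver.query().
import dns.resolver

LOOKUP_MECHANISMS = {"include", "a", "mx", "ptr", "exists", "redirect"}

def get_spf(domain: str) -> str:
    """Return the TXT record starting with v=spf1 for the domain."""
    for rdata in dns.resolver.resolve(domain, "TXT"):
        text = b"".join(rdata.strings).decode()
        if text.lower().startswith("v=spf1"):
            return text
    raise LookupError(f"no SPF record found for {domain}")

def count_lookups(record: str) -> int:
    count = 0
    for term in record.split()[1:]:        # skip the v=spf1 version tag
        term = term.lstrip("+-~?")         # strip qualifiers
        name = term.split(":")[0].split("=")[0].split("/")[0]
        if name.lower() in LOOKUP_MECHANISMS:
            count += 1
    return count

record = get_spf("example.com")            # substitute your own domain
print(record)
print(f"{count_lookups(record)} of 10 permitted lookups used at top level")
```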

Once term started and users started sending mail, some problems came to light: a lot of mail was simply being rejected as SPAM, and pulling the list of automatically suppressed email addresses out of the web interface was impossible. The SPAM problem comes from the fact that some of the IP addresses in SparkPost’s free tier pool have been tainted by other users. Since we have no control over which server sends our mail, it’s a crapshoot as to which mail gets through and which is blocked as SPAM. One solution is to upgrade to a paid tier and buy dedicated IP addresses, but this was not something we had budgeted for, so it isn’t a viable option just yet.

I contacted their support and asked for help. The reply apologised, told me that they were terminating accounts caught spamming, and said they had made some change that would hopefully help our account. Time will tell if that really is the case. We cannot afford to have 20-odd percent of our mail routinely fail to deliver because it’s identified as SPAM coming from a tainted IP address.

Getting the suppression list out was a challenge. I found a command on their blog which pulls it down from the command line using cURL, a Unix tool. This dumps raw JSON to the terminal, including all the suppressed addresses and the reason each was suppressed. It took me quite some time to figure out that I could redirect this output to a text file using the > operator. Then I needed to process it into something usable, preferably a CSV file for import into Excel. Thankfully I found a website that does just that. Armed with the now useful CSV file, I imported it into Excel and made a spreadsheet for our registrar to follow up with the relevant parents so that we can get correct email addresses.
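
In hindsight, the whole cURL-then-convert dance can be collapsed into one short script. Here’s a sketch in Python that pulls the list and writes a CSV directly – the endpoint path, auth header and field names reflect my reading of SparkPost’s v1 API, so verify them against their current documentation:

```python
# Pull SparkPost's suppression list via their API and flatten it straight
# into a CSV for Excel. Endpoint path, auth header and field names reflect
# my reading of SparkPost's v1 API docs - verify before relying on them.
import csv
import requests  # pip install requests

API_KEY = "YOUR_SPARKPOST_API_KEY"   # placeholder - use your own key
URL = "https://api.sparkpost.com/api/v1/suppression-list"

response = requests.get(URL, headers={"Authorization": API_KEY}, timeout=30)
response.raise_for_status()
entries = response.json().get("results", [])

with open("suppressions.csv", "w", newline="") as handle:
    writer = csv.DictWriter(
        handle,
        fieldnames=["recipient", "type", "source", "description", "updated"],
        extrasaction="ignore")       # skip any fields we didn't ask for
    writer.writeheader()
    writer.writerows(entries)

print(f"Wrote {len(entries)} suppressed address(es) to suppressions.csv")
```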

This whole adventure with SparkPost has taught me quite a bit about how email works out there on the internet, especially at bulk scale. It’s also taught me that the spammers have really ruined email as a communication tool. I struggle to explain to staff in plain English exactly why their email isn’t getting delivered, as the concepts are not straightforward for people who don’t have the faintest clue how email delivery actually works.

Still, SparkPost should be useful in the long run, especially if they get their tainted IP problem sorted out. I have more insight into the process now than I did when Mandrill was the backend delivery tool. I get the feeling that at this point in time, SparkPost is still very much a programmer’s tool rather than something geared towards end users. Hopefully in time SparkPost will make their website more user friendly and capable, which I think will greatly elevate the service, especially for a non-programmer like myself who simply needs to get something done.


Those WTF moments

October 14, 2016

Sometimes in the world of IT, you have moments where all you can do is scratch your head and ask WTF happened. Such was the case on Monday this week, before I even got back into work and before the term started. I received a text from my head of IT saying that he was unable to access one of our (virtual) servers to post the PowerPoint daily notice we show our learners. In between getting dressed and packing my bags for work, I remoted in to take a look.

I couldn’t see the server on the network, nor could I Remote Desktop into it. Surprisingly, ping worked, but that seemed to be about it. Hyper-V Manager revealed the server was on, and I could connect via the console. I picked up a clue as to what could be wrong when I noticed that the Heartbeat status wasn’t being reported to Hyper-V Manager, which indicated that the service had stopped running for some reason.

The previous Friday I had rebooted all our servers to finish their update cycles and prepare them for the term ahead. This particular server had come up from the reboot fine, so I didn’t do an in-depth check. It had never given me an issue like this before, so I made the mistake of assuming all was well. Anyway, after connecting, I could see that none of the Hyper-V integration services were running inside the VM, and manually trying to start them didn’t work. I tried to upgrade the Integration Components, since Hyper-V indicated that my other VMs needed an update for them, but no matter how I ran the setup file, it would not execute on the sick VM. By this time I had to leave for work, so the problem had to wait until I got in.

After arriving at work and settling in, I cloned the VM to my PC so I could experiment more easily. Numerous attempted cures failed, until I came across a post on the internet describing the same symptoms I had. It linked to a Microsoft KB article with steps on how to fix the problem. The KB dated from a few years back, so I found it incredibly bizarre that the problem only hit us now. Still, the sick server runs Server 2008, so I went ahead and made the documented change in the registry. A reboot later, the cloned server on my PC was suddenly working normally again: all the relevant services were starting up correctly and the server was back in action.

Since the fix was successful on my local cloned image, I went ahead and made the same change on the sick VM itself. Sure enough, one reboot later and we were back in business. In the aftermath, I spent a lot of time trying to figure out what caused the issue. While I did have IIS installed on the server years ago, I don’t recall there ever being an SSL certificate on it. How exactly we ended up in this situation is probably something I’ll never fully know. As I said to my colleague, we’ve both seen random stuff over the years, but this one was really a WTF moment in a big way.


Salvaging old equipment a.k.a. dumpster diving

Last week I watched a couple of videos on YouTube where old computers were rescued from the kerb or dumpsite and refurbished for use. This saves on e-waste and also provides cheap computers to those who cannot afford a new machine. This got me thinking about all the equipment I have discarded, sold or donated while at my school, as well as the actual value of refurbishing old equipment.

As time has gone on, I estimate I’ve gotten rid of over 100 old computers, ± 40 projectors, ± 20 printers and countless individual parts such as dead hard drives, power supplies, motherboards and so on. Some of this went into the trash, while the rest was donated or sold off to raise funds for the school. In fact, we cleaned up 6 computers for sale over the first week of the holidays. Refurbishing is time consuming though, especially with equipment that old. The process goes something like this:

  • Physically inspect the chassis to look for loose panels, missing screws, worn/sticky buttons etc.
  • Open the chassis and blow out all the dust using our air compressor, then perform a visual inspection of the motherboard, looking for swollen/blown capacitors, loose cable connections etc.
  • Power on the PC and listen for fans that need lubrication. Most often this is the power supply, graphics card or chassis fan. A fan that grinds is a sure sign it will seize up completely in the not too distant future.
  • Lubricate the fans that require it, which means removing the part from the PC to get at the fan’s lubrication cover.
  • Install as much RAM as possible as well as a working DVD drive if required.
  • Wipe the hard drive and install Linux Mint/FreeDOS as a free operating system, as we cannot sell the computers with Windows on them.
  • Leave the PC running for a while to confirm basic stability.

This leaves us with a working PC, but the work is time consuming, even when a machine only needs minimal checks and a dust blow-out.

It made me think about how far back one can and should go when refurbishing old PCs. While there are plenty of Pentium 4 and Pentium D based computers out there, they have the disadvantage of running very hot, using a lot of electricity and, in the P4’s case, being single-core chips. Couple that with IDE or SATA 1 speed hard drives and the computer is unpleasant to use, even with a freshly installed operating system. While such a machine will still provide a computer to a charity or needy person who has never had one before, the economics of using something that old weigh heavily against it.

Printers are easier, in the sense that they generally just need new toner or ink cartridges. The problem with older devices, though, is that they may use the now defunct parallel port or, as HP loves to do, lack drivers for modern versions of Windows. I had to replace all our old HP LaserJet 1018s in the school because they flat out refused to run stably under Windows 7. I’ve got a 4 colour laser MFP in the office that I have to discard, as the device will not behave properly under anything newer than Windows Vista at best. HP have not put out usable modern drivers for this machine, instead recommending that you buy a modern, supported printer. This to me is a tragedy, as the device has fewer than 8000 pages on the counter. There is nothing physically wrong with the machine, but unless we run it on an old version of Windows, it’s become little more than a glorified door stop.

Projectors suffer from lamps that need replacing, colour wheels that die (on DLP models) or internal LCD panels that fail (on LCD models). When you ask for a quote on a repair or a new lamp, it often turns out more cost effective to buy a new projector than to repair the existing one. Not to mention, most older projectors won’t have modern connections like HDMI or network ports, so they are less useful in today’s changing world.

In the end, this is all part of the vicious cycle of technological progress. Unless we can somehow convince manufacturers to support their products for longer, we are going to be locked into producing tons of e-waste. Reusing old computers is a good start, but there comes a point where older equipment is no longer viable. One thing that could definitely be improved is visibility for e-waste recyclers. These firms can properly strip and salvage equipment, getting the components recycled and avoiding the toxic chemicals that leach out of electronics as they decompose. It would also help if more people took an interest in repairing their own stuff when it breaks, rather than just throwing it away. There’s a thrill that comes from fixing something with your own hands, a thrill more people should want to experience.

UEFI booting observations

It’s exam time at my school, which means things quieten down a bit. That leaves me some free time during the day to experiment and learn new things, or to tackle jobs I have long wanted to do but have not had time for. I’ve used this last week to play around with deploying Windows 7 and 10 to PCs to test their UEFI capabilities. While Windows 7 can and does boot via UEFI, it really doesn’t gain any benefit over BIOS booting, unlike Windows 8 and above. I was more interested in testing the capabilities of these motherboards, so I could get a clearer idea of the hardware issues we may face when we move to Windows 10.

Our network comprises only Intel-based PCs, so all my experience so far stems from that platform. What I’ve found so far boils down to this:

  • Older generation Intel 5 and 6 series chipset motherboards from Intel themselves are UEFI based, but present interfaces that look very much like the traditional console-style BIOS. The only real clue is that under the Boot section, there is an option to turn on UEFI booting.
  • These older motherboards don’t support Secure Boot or the ability to toggle the Compatibility Support Module (CSM) on and off – the UEFI version on these boards predates those functions.
  • I have been unable to UEFI PXE network boot the 6 series motherboard, and haven’t yet tried the 5 series boards. While I can UEFI boot the 6 series from a flash drive/DVD/hard drive, I cannot do so over the network. Selecting the network boot option boots the PC into what is essentially BIOS compatibility mode.
  • The Intel DB75EN motherboard has a graphical UEFI, supports Secure Boot and can toggle the CSM on and off. Interestingly enough though, when the CSM is on, you cannot UEFI PXE boot – the system boots into BIOS compatibility mode. You can only UEFI PXE boot when the CSM is off. This is easy to tell as the network booting interface looks quite different between CSM and UEFI modes.
  • Windows 7 needs CSM mode turned on for the DB75EN motherboards if you deploy in UEFI mode, at least from what I’ve found using PXE boot. If you don’t turn CSM on, the boot will either hang at the Windows logo or moan about being unable to access the correct boot files. I have yet to try installing Windows 7 on these boards from a flash drive in UEFI mode to see what happens in that scenario.
  • I haven’t yet had a chance to play with the few Gigabyte branded Intel 8 series motherboards we have. These use Realtek network cards instead of Intel NICs. I’m not a huge fan of Gigabyte’s graphical UEFI, as I find it cluttered and there’s a lot of mouse lag. I haven’t tested a very modern Gigabyte board though, so perhaps they’ve improved their UEFI by now.

UEFI Secure Boot requires that all hardware supports pure UEFI mode and that the CSM be turned off. I can do this on the boards using the built-in Intel graphics, as these fully support both CSM mode and pure UEFI. Other PCs with GeForce 610 adapters in them don’t support pure UEFI boot, so I am unable to use Secure Boot on them, which is somewhat annoying, as Secure Boot is good for security. I will probably need to start using low-end GeForce 700 series cards, as these support full UEFI mode and so will support Secure Boot as well.
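
When auditing machines, it’s handy to be able to confirm from inside a running copy of Windows which mode it actually booted in and whether Secure Boot took effect. Here’s a small Python sketch of one way to do that on Windows 8 or later – the registry location for the Secure Boot state is my assumption, so verify it on your own builds:

```python
# Report whether this Windows machine booted via UEFI or legacy BIOS/CSM,
# and whether Secure Boot is enabled. Needs Windows 8 or later - the
# GetFirmwareType() API does not exist on Windows 7.
import ctypes
import winreg

FIRMWARE_NAMES = {1: "legacy BIOS (or CSM)", 2: "UEFI"}

firmware = ctypes.c_uint(0)
if ctypes.windll.kernel32.GetFirmwareType(ctypes.byref(firmware)):
    print("Boot mode:", FIRMWARE_NAMES.get(firmware.value, "unknown"))

try:
    with winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Control\SecureBoot\State") as key:
        enabled, _ = winreg.QueryValueEx(key, "UEFISecureBootEnabled")
    print("Secure Boot:", "on" if enabled else "off")
except FileNotFoundError:
    # The State key is absent when the machine booted in legacy/CSM mode.
    print("Secure Boot: not applicable (legacy boot)")
```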

It’s been a while since we bought brand new computers, but I will have to be pickier when choosing motherboards. Intel is out of the motherboard game and I am not a fan of Realtek network cards either – this narrows my choices quite a bit, especially as I also have to be budget conscious. At least I know that future boards should be a lot better behaved with their UEFI, as all vendors have had many years now to adjust to the new and modern way of doing things.

Keeping Adobe Flash Player updated on a network

The Adobe Flash Player plugin is a pain in the arse. It’s a security nightmare, with more holes in its codebase than Swiss cheese. It seems every other week Flash makes the headlines when some security vulnerability or another is discovered and exploited. Cue the groans from network admins and users around the world as Flash has to be updated *yet* again. Unfortunately, one can’t get rid of it permanently just yet, as too many websites still rely on it. While you could get away with not using it at home, in a school where multiple people use a computer and visit different websites, one doesn’t really have much choice but to make sure Flash is installed.

On Windows 7 and below, the situation with Flash is a bit crazy. There’s a version for Internet Explorer (an ActiveX plugin), a separately installed version for Firefox, and Google Chrome bundles its own copy – I’m not sure about smaller or niche browsers, but I think modern Opera inherits Flash via its relationship with Chrome’s engine. Thankfully, with Windows 8 and above, Flash for Internet Explorer is distributed via Windows Update. It’s automatic and contains no third-party advertisements, anti-virus offers, browser bundling and so on – all things Adobe have done in the past with their Flash installers. Trying to install Flash from Adobe’s website on Windows 8 and above will fail, which at least may help to kill off the fake Flash installer routine malware authors use to trick unsuspecting users.

The usual method of installing Flash is highly cumbersome if you run a large network – not to mention that EXE files are much less flexible than MSI files when it comes to deployment and silent install options. Thankfully Adobe do make Flash Player in MSI format, but it’s not easy to get hold of directly: you have to sign a free enterprise deployment license to be able to legally distribute Flash and Reader in your organisation. The problem then becomes how to distribute the updates, especially if you aren’t running System Center or a similar product. Enter WSUS Package Publisher, indispensable if you make use of WSUS on your network.

WPP lets you use the enterprise update catalogs Adobe and some other vendors offer. Using this, you essentially push the updates into your existing WSUS infrastructure, where they get delivered to the client computers like any other update. One thing you need to do is tweak each update as you publish it so that it isn’t applicable to computers running Windows 8 upwards – if you don’t, the update will download on newer Windows versions but fail to install repeatedly and need to be hidden. The other fix I’ve discovered is that the silent install command line switch needs to be deleted: when an MSI file is delivered via WSUS, it is automatically installed silently anyway. I discovered this the hard way, when one of the Flash updates I imported failed to install on every computer. Turning on MSI logging and searching for the error code eventually led me to what was wrong, after which I corrected the problem; now I know what to do with every new update that comes out for Flash.
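
For reference, MSI verbose logging can be switched on machine-wide through the documented Windows Installer “Logging” policy value; with it set, every install drops an MSI*.log into %TEMP% that you can search for the error code. A quick Python sketch of setting it (run elevated, and double-check the value against Microsoft’s documentation first):

```python
# Turn on Windows Installer verbose logging machine-wide via the documented
# "Logging" policy value. Every MSI install will then write an MSI*.log
# file to %TEMP%, which you can search for the failing error code.
# Must be run from an elevated (administrator) Python session.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\Installer"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    # "voicewarmupx" enables all logging flags, including verbose output
    # (v) and extra debugging information (x).
    winreg.SetValueEx(key, "Logging", 0, winreg.REG_SZ, "voicewarmupx")

print(r"MSI verbose logging enabled; look for MSI*.log files in %TEMP%")
```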

Since using WPP, I’ve felt happier about the safety of my network, as I can usually get Flash pushed out within 2-3 days of the initial download. This is far better than having to visit each computer manually and keep Flash up to date that way!