Author Archive

Going down the rabbit hole of 802.11r

If there’s one thing that will get under my skin and irritate me to no end, it’s a problem that I can’t figure out or fix, yet feel I should be able to. Case in point: a situation I was in recently at work. Ever since implementing 802.1X authentication on our Wi-Fi this year, staff and students have been able to sign into the Wi-Fi without needing to know a network key, sign into a captive portal or use any other connection method. WPA Enterprise was made for exactly that sort of situation, even though it does take quite a bit of work to get set up.

One staff member however could not connect to Wi-Fi no matter what we tried. We knew it wasn’t a username and password issue, but his Samsung Galaxy J1 Ace simply refused to connect. After entering credentials, the phone would simply sit on the Wi-Fi screen saying “Secured, Saved.” For the record, the J1 Ace is not that old a phone, but it came with Android 4.4.4 and has never been updated here in South Africa. Security updates also ceased quite a while ago. There is a ROM that I’ve seen out there for Android 5, but since I didn’t own the phone I didn’t want to take any chances with flashing firmware via Odin and mucking about with something that clearly was never supported here locally.

My first thought around the connectivity issue was that the phone had a buggy 802.1X implementation and couldn’t support PEAP or MSCHAPv2 properly. Connecting the phone to another SSID that didn’t use 802.1X worked fine, which indicated that the hardware was working at least. I gave up eventually and told the staff member that the phone was just too old to connect. He accepted this with good grace thankfully, but the problem gnawed away at me; I kept wondering what the issue was.

A few weeks ago, a student came to me wanting to connect to the Wi-Fi. Lo and behold, she had the exact same phone and we had the same situation as before. This time, however, I got frustrated and was determined to find out what the problem was. Most of my internet searches came back with useless info, but somewhere along the way I came across an article that talked about how Fast BSS Transition had issues with certain phones and hardware. Fast BSS Transition is technically known as 802.11r and, in a nutshell, it helps devices roam better on a network, especially in a corporate environment where you might be using a VoIP app that is especially sensitive to latency and delays while the device roams between access points. It’s been a standardised add-on to the Wi-Fi standards for a good few years, so a device from 2015 should have been just fine with supporting it!

Borrowing the staff member’s J1 Ace, I disabled the 802.11r and 802.11k options on the network. His phone connected faster than a speeding bullet, it seemed! That adrenaline rush was quite pleasant, as I now finally knew what the issue was. I re-enabled 802.11k and the phone still behaved, which meant that the culprit was 802.11r. The moment that was enabled, the phone dropped off the network.

My solution to this problem was to clone the network in our Ruckus ZoneDirector, hide the SSID so that it’s not immediately visible and disable 802.11r for this specific SSID. Once completed, the teacher was connected and has been incredibly happy that he too could now enjoy the Wi-Fi connection again.

My theory is that some combination of wireless chip, chip drivers, Android version and potentially the KRACK fix on our network caused the J1 Ace to be unable to connect. It could be that while the phone nominally supports 802.11r, it hasn’t been patched since before KRACK was revealed, and so it no longer understands the wireless frames on a network that has had the KRACK fix applied. Since the phone is never going to get any more support, the only answer is to run a network without 802.11r support for these kinds of devices.

It makes me angry that this kind of thing happens with older devices. Part of the problem is Android itself, but that is a topic for another post entirely.


The road to DMARC’s p=reject

DMARC is sadly one of the more underused tools out there on the internet right now. Built to work on top of the DKIM and SPF standards, DMARC can go a very long way to stopping phishing emails stone cold dead. While SPF tells servers if mail has been sent from a server you control or have authorised, and DKIM signs the email using keys only you should have, DMARC tells servers what to do when a mail fails either DKIM, SPF or both checks. Mail can be let through, quarantined to the Spam/Junk folder or outright rejected by the recipient server.

Since moving our school’s email over to Office 365 a year ago, I have had a DMARC record in place. I have had the record set to p=none, so that I could monitor the results over the course of time. I use a free account at DMARC Analyzer to check the results and have been keeping an eye on things over the last year. Confident that all mail flow is now working properly from our domain, I recently modified our DMARC record to read “p=reject;pct=5”. Now mail is being rejected 5% of the time if a destination server does checks on mail coming from our domain and the mail fails the SPF and DKIM checks. 5% is a good low starting point, since according to DMARC Analyzer, I have not had any mails completely fail DKIM or SPF checks in a long time. Some mail is being modified by some servers which does alter the alignment of the headers somewhat, but overall it still passes SPF and DKIM checks.
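For reference, a DMARC policy is published as a DNS TXT record at _dmarc.yourdomain. A sketch of what a record like ours could look like (the domain and report address here are made up) is:

```
_dmarc.school.example. IN TXT "v=DMARC1; p=reject; pct=5; rua=mailto:dmarc-reports@school.example"
```

The rua tag is where aggregate reports get mailed, which is how services like DMARC Analyzer collect their data, while pct applies the policy to only a sample of failing mail, which is what makes a gradual ramp-up possible.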

My next goal is to ramp up that 5% to 20% or 30%, before finally removing the pct variable completely and simply leaving the policy as p=reject. Not only will I be stopping any potential phishing incident arising from our school’s domain, I am also being a good net citizen in the fight against spammers.

Of course, this doesn’t help if a PC on my network gets infected and starts sending mail out via Office 365, as then the mail will pass SPF and DKIM checks and will have to rely on being filtered via the normal methods such as Bayesian filtering, anti-malware scans etc. That is the downside of SPF, DKIM and DMARC: they can’t prevent spam from being sent from inside a domain, so domains still need to be vigilant for malware infections, bots etc. At least with the policies in place, one avenue for spammers gets shut down. As more and more domains come on board, spammers will continue to get squeezed, which is always a good thing.


Installing KB3000850 on Windows Server 2012 R2

I recently had cause to set up a new Windows Server 2012 R2 VM at work. As per usual for an operating system of this age, there were a lot of updates waiting once the server contacted WSUS. However, the process was a bit different to the past, mainly due to 2 huge updates to Windows Server 2012 R2: the first requires a servicing stack update before it will install, and the second huge update depends on the first being installed.

It should be noted that my host server is 2012 R2 and that my VM is being served up by Hyper-V. The guest 2012 R2 VM is running as a Generation 2 VM and Secure Boot is enabled by default.

After the first round of updates, my server didn’t download any more updates. WSUS had expired the relevant servicing stack update that would enable the very important April 2014 update to install, which is needed for all 2012 R2 updates going forward. I installed a later servicing stack update manually, which then let the April 2014 and subsequent updates install – something like 155 of them. After that reboot, there were 5 patches left to go, one of them being KB3000850.

Unfortunately, this is when the problems started. I would install the last batch of updates, only for the server to get to about 98 or 99% before the updates failed, followed by a long wait while they were reverted. It was annoying, and repeated attempts to install the patches kept failing. I left the server and went home, vowing to solve the issue the next day.

After viewing the Microsoft KB article on this patch, I suddenly recalled that I had the exact same problem on my existing VMs a few years ago and that I had gotten around it in the end with an extremely easy, if time-consuming, trick: simply shut down the VM, disable Secure Boot, boot and install the patch, then shut down the VM and re-enable Secure Boot. It takes a while, but eventually the patch installed cleanly and my server was finally up to date.
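If you’d rather not click through Hyper-V Manager each time, the same toggle can be sketched in PowerShell on the host. The VM name below is hypothetical, and Set-VMFirmware only applies to Generation 2 VMs:

```powershell
# Toggle Secure Boot off so the patch can install (run on the Hyper-V host)
Stop-VM -Name "Server2012R2"
Set-VMFirmware -VMName "Server2012R2" -EnableSecureBoot Off
Start-VM -Name "Server2012R2"

# ...install the patch inside the guest and let it reboot, then restore Secure Boot:
Stop-VM -Name "Server2012R2"
Set-VMFirmware -VMName "Server2012R2" -EnableSecureBoot On
Start-VM -Name "Server2012R2"
```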

So far Microsoft’s cumulative patching model seems to be working well enough to cut down on the number of individual patches going forward, but they haven’t yet ingested all the older patches into these cumulative updates going back to the last baseline, which is the April 2014 mega patch. If they did this, the number of patches being installed would drop dramatically and patching speed would perhaps also increase. It would also certainly help clean out my WSUS installation!

Windows 10 on the Intel DBZ68 motherboard

January 27, 2018

Yesterday I needed to install Windows 10 on a now seven-year-old Intel DBZ68 motherboard, as the previous Windows 7 installation in the classroom was acting weirdly. Before I did that, however, I tried to update the board’s firmware using the Windows-based executable available on Intel’s site, as I’ve done with other Intel boards many times before. Unfortunately, much like the experiences I previously mentioned here, the firmware wouldn’t update using the Windows tool. I left it for later and proceeded to install Windows 10 on the machine.

Windows 10 itself runs quite nicely on this older board, helped no doubt by the SSD and Core i5 chip. The biggest snag I had was trying to use the onboard HD graphics to power a 3rd monitor. Windows 10 includes an “inbox” driver for 2nd and 3rd gen Intel Core CPUs with integrated graphics, but Windows wouldn’t recognise and install the adapter for some reason. Trying to install the driver manually simply ended with the PC hard locking and needing a full power cycle to restore it to working order. After futzing around for a while, I decided to do the firmware update using a flash drive, Rufus and the last BIOS file on Intel’s site – you have to flash the firmware using IFLASH from FreeDOS, hence using Rufus to create a bootable USB drive with FreeDOS.

The 0014 BIOS that came on the board simply wouldn’t update via Windows and as linked in the previous article, it doesn’t let you use the F7 key during startup to do an update either. Thankfully the update went quickly and without any hitches. It seems the graphics adapter ROM was updated in BIOS 0027, changes which obviously carried over into the final 0043 firmware. Windows 10 needs the updated firmware for the adapter to be recognised correctly.

Back in Windows I simply had to select Update Driver on the unknown graphics adapter, choose from a list of devices already on the PC and let Windows find the best match. Seconds later the driver was installed and the 3rd monitor sprang to life.

The PC should continue to work for a few more years until eventually it gets replaced. The motherboard was a very nice one for its time, but it lacks many modern useful features such as UEFI-capable network boot, a graphical UEFI, UEFI Secure Boot etc. Still, it goes to show that with firmware updates and a little patience, even older hardware can still be relevant and useful in this day and age and doesn’t need to be carted off for recycling just yet. There’s plenty of power still left in Sandy Bridge-era hardware, depending on your usage scenario of course.

Adding Office365 licenses to new users via PowerShell

December 12, 2017

One of the tasks any school has to do each year is remove old students and add new ones. Using the built-in CSVDE tool, you can bulk import users into Active Directory very easily. Once there, they’ll get synchronised up to Office 365 (provided they are in an OU that is selected for sync) as new users. Good stuff! The only problem is that all those new users do not have licenses assigned to them in Office 365, which means they can’t use anything. You could manually assign a license to each user individually using the Office 365 website, but that will take hours, if not days if you have a huge number of students to license. Thankfully, there is a better way: PowerShell.

A very small script will load usernames from a separate CSV file and assign licenses to users based on that file. Here is the script:

Import-Module MSOnline
Connect-MsolService
$users = Import-Csv "C:\Users\Username\Desktop\2018.csv" -Delimiter ","
foreach ($user in $users)
{
    Set-MsolUser -UserPrincipalName $user.UserPrincipalName -UsageLocation $user.UsageLocation
    Set-MsolUserLicense -UserPrincipalName $user.UserPrincipalName -AddLicenses $user.SKU
}

Essentially, the script connects to Office 365 with your credentials (use an admin-level account to connect). Change the location of the CSV file to your own location. The contents of the CSV file are simple – just 3 columns in total: column 1 is the User Principal Name of each student, column 2 is the 2 digit country code of your country and column 3 is the product license you want to assign to the student. Name the first cell in each column UserPrincipalName, UsageLocation and SKU respectively. You can find out the exact license names for your Office 365 tenancy by connecting to it as follows in PowerShell:
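As an illustration, a CSV file matching that layout might look like this (the usernames and tenant name are made up):

```
UserPrincipalName,UsageLocation,SKU
student1@school.example,ZA,tenantname:STANDARDPACK
student2@school.example,ZA,tenantname:STANDARDPACK
```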

Import-Module MSOnline
Connect-MsolService
Get-MsolAccountSku

You will end up with a list of license options for your tenancy, with a name along the lines of tenantname:STANDARDPACK. Copy and paste your desired license name into your CSV file for each user you want that license for.

Run the above script when you are happy with your CSV import file and if all goes well, a few minutes later all the users in Office 365 will have been correctly licensed.

You could get more complicated, so that each license is configured with certain options disabled etc., but that involves extra complexity in your script. Keep it simple, I reckon.

Flashing all the Firmwares

November 26, 2017

In the not so distant past, updating an electronic device’s firmware was either impossible or carried a great many risks. In the slightly slower-paced world back then, we didn’t complain too much, perhaps because devices shipped with by and large stable firmware that had spent lots of time in development and ended up being quite polished. In today’s breakneck-paced world, nothing is ever done, and devices are often shipped as quickly as possible, with the promise to update the firmware and improve matters as time goes on.

For the manufacturers who adhere to this promise and regularly put out updated firmware, well done! You deserve big kudos for doing so. Sadly this state of affairs is more the exception than the rule. Far too often a device is shipped to market with initial firmware that gets updated maybe once or twice, only to be abandoned by the manufacturer who has moved onto the next bright and shiny gadget. The most obvious example of this is the mess that most Android based phones have gotten themselves into.

Sometimes firmware just operates low level hardware like the control board on a DVD burner for example. Other times it’s both that and a user interface/operating system all rolled into one – think of the web interface you use to control a home router for example. Sometimes the update just fixes bugs and adds stability, other times it does that and adds new features or updates the user interface – think updated PC UEFI or 3rd party router firmwares.

I promise there’s a point to all this rambling. The recent school holidays afforded me a chance to update firmware on a whole range of devices in my school: network switches, ADSL routers, CCTV cameras and the attached NVR, as well as a few other odds and ends. HP deserves a special shout out here for their lengthy firmware support for older model switches. Whilst they had no reason to do so, HP did keep updating the firmware of certain switches for a good number of years, which at least extended the useful life of these devices.

Sadly, on the old 2610 series HP didn’t remove the Java-based web interface, but the last available firmware did at least sign the binaries, so there’s one less warning when you connect. For the 2620 series, HP backported the new UI from their modern Aruba switches, which has led to a nice consistent interface across 3 different switch generations we own. If you’ve ever used HP’s legacy interface on the 2620 and other similar generation models, you’ll know how ugly and painful that interface was to use.

The Dahua CCTV system we use, though, was another story. For one thing, the fixed bullet camera we were sold appears to have been very quickly replaced by Dahua, so there’s no new firmware beyond 2015. The fixed dome cameras did better, however, with a firmware release from only a few months ago. The NVR also had a much later firmware available. I flashed the NVR first, only for all hell to break loose after the reboot. A large portion of the cameras refused to connect to the device after the update. Whilst most of the settings seemed to have been preserved during the update, too many little things seemed to have been disturbed. The next thing I did was to update the dome cameras one by one. When that still didn’t help, I deleted and re-added all the cameras to the NVR. To my relief, this sorted the problem and we were able to go back to using the system.

That being said, there have been some cases of the cameras displaying corrupted green screens, though that hasn’t lasted long and only seemed to affect 1-2 cameras. Those devices might just need to be flashed again for proper stability, but it’s still not how it’s supposed to be. Alternatively, I will check for the next available update and flash that to the cameras with issues, in the hope that it solves the problem.

I still have my main server’s firmware to flash which I plan to do in the next school holidays. Intel has discontinued the S2600GZ system, but at least they also still make firmware available. That system is unlikely to get any more updates in the future, but at least it had a decent lifespan.

My suggestion when it comes to firmware updating is to flash everything you have with the latest available firmware, unless it is a completely critical core device where you cannot risk any downtime or potential problems. Rather safe than sorry, and sometimes an update is the only way to fix things. There’s also the option of 3rd party firmware on some devices, but that’s a whole different post.

DHCP Relay: the basics

November 26, 2017

If you run a small flat network, DHCP just magically works once it is set up. Devices get their addresses, devices communicate, everything works and everyone is happy. The moment you partition the network with VLANs, however, things change. Devices in the additional segment(s) no longer receive DHCP packets. There are 3 options available to rectify this issue:

  1. Manually configure static IP addresses. Painful, but it will work.
  2. Set up a DHCP server per additional VLAN. Lots of duplicated work, and if you aren’t careful, DHCP packets can end up crossing VLANs, causing havoc with devices.
  3. Use DHCP relay to centralise IP address issuing from one central server.

I’ve just recently configured DHCP relay at my school and it’s working well. Getting it set up is a tad tricky, but once you understand how it works, it’s quite straightforward. Here is a guide on how to do it on a network that runs Aruba switches and a Windows 2012 R2 DHCP server.

It should be noted that in order for this to work, you need a core switch that is capable of IP routing. Layer 3 switches will do this, as will some higher-end Layer 2 switches from Aruba – the 2530 and 2540 models spring to mind. If you don’t have a routing-capable switch in your network, you are going to need a router connected to each VLAN to do the job instead. Your VLANs must also be set up correctly with untagged and tagged ports for this to work.

Firstly, decide on the IP ranges you want for your additional VLANS. Try to ensure you have enough space so that you don’t need to redo the scope later on.

Next, create these scopes with all the necessary extra bits in the Windows DHCP management console, but do not activate them when asked at the end of the wizard. Leave them deactivated for the time being.
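For what it’s worth, the same scopes can also be created from PowerShell on the DHCP server. This is just a sketch with a made-up range for a hypothetical VLAN 20 on 10.0.20.0/24 – adjust everything to your own addressing plan:

```powershell
# Create a deactivated scope for VLAN 20 and set its default gateway option
Add-DhcpServerv4Scope -Name "VLAN 20" -StartRange 10.0.20.10 -EndRange 10.0.20.250 `
    -SubnetMask 255.255.255.0 -State Inactive
Set-DhcpServerv4OptionValue -ScopeId 10.0.20.0 -Router 10.0.20.1
```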

On your core Aruba switch, assign an IP address to every VLAN that you want to use DHCP relay on. Make sure that this IP matches the range of your DHCP server scope, but that the address doesn’t conflict with something in the range.

Next, enable IP routing on the core switch:

conf t
ip routing
wr mem

Next, add the IP helper address to each VLAN you want to use DHCP on. On the switch’s command line, type the following:

conf t (if starting from scratch, not needed if you are still carrying on from the above step)
vlan 20 ip helper-address <dhcp-server-ip>
wr mem

Substitute 20 with each additional VLAN ID and <dhcp-server-ip> with the IP address of your DHCP server.
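As a hypothetical example, if VLANs 20 and 30 are in use and the DHCP server sits at 10.0.1.10, the helper-address commands would look like this:

```
conf t
vlan 20 ip helper-address 10.0.1.10
vlan 30 ip helper-address 10.0.1.10
wr mem
```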

On each of your edge switches, do not give the switch an IP in any VLAN except your main or management VLAN that the core switch also resides in. Point each edge switch’s IP default gateway address to the core switch’s IP address.

On your Windows DHCP server, you will need to add some static routes to the server unless its default gateway is pointed to the core switch. Odds are that the server isn’t pointed to the core switch but rather to a firewall for internet access, so the routes will need to be added manually. Open up a command prompt and type the following:

route -p add <network> mask <subnet-mask> <core-switch-ip>

Repeat the above command for each VLAN you want DHCP on. Substitute <network> with your own network address, <subnet-mask> with the correct subnet mask and <core-switch-ip> with your own core switch’s IP address.
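Carrying on with the same made-up addressing (core switch at 10.0.1.1, VLANs 20 and 30 on 10.0.20.0/24 and 10.0.30.0/24), the persistent routes on the DHCP server would be:

```
route -p add 10.0.20.0 mask 255.255.255.0 10.0.1.1
route -p add 10.0.30.0 mask 255.255.255.0 10.0.1.1
```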

Lastly, activate the scope(s) in the Windows DHCP console. You can test things out by using a client PC in each VLAN and releasing and renewing its IP address. You should obtain an address that is correct for each VLAN, and there should be no spillover between the VLANs to cause network chaos. You should be able to see the clients appearing in the Address Leases section of each DHCP scope.