Archive for the ‘My tips and tricks’ Category

Windows 10 on the Intel DBZ68 motherboard

January 27, 2018

Yesterday I needed to install Windows 10 on a now 7-year-old Intel DBZ68 motherboard, as the previous Windows 7 installation in the classroom was acting weirdly. Before I did that however, I tried to update the board’s firmware using the Windows-based executable available on Intel’s site, as I’ve done with other Intel boards many times before. Unfortunately, much like the experiences I previously mentioned here, the firmware wouldn’t update using the Windows tool. I left it for later and proceeded to install Windows 10 on the machine.

Windows 10 itself runs quite nicely on this older board, helped no doubt by the SSD and Core i5 chip. The biggest snag I had was trying to use the onboard HD graphics to power a 3rd monitor. Windows 10 includes an “inbox” driver for 2nd and 3rd gen Intel Core CPUs with integrated graphics, but for some reason Windows wouldn’t recognise and install the adapter. Trying to install the driver simply ended with the PC hard locking and needing a full power cycle to restore to working order. After futzing around for a while, I decided to do the firmware update using a flash drive, Rufus and the last BIOS file on Intel’s site – you have to flash the firmware using IFLASH from FreeDOS, hence using Rufus to create a bootable USB drive with FreeDOS.

The 0014 BIOS that came on the board simply wouldn’t update via Windows and as linked in the previous article, it doesn’t let you use the F7 key during startup to do an update either. Thankfully the update went quickly and without any hitches. It seems the graphics adapter ROM was updated in BIOS 0027, changes which obviously carried over into the final 0043 firmware. Windows 10 needs the updated firmware for the adapter to be recognised correctly.

Back in Windows I simply had to select Update Driver on the unknown graphics adapter, choose from a list of devices already on the PC and let Windows find the best match. Seconds later the driver was installed and the 3rd monitor sprang to life.

The PC should continue to work for a few more years until eventually it gets replaced. The motherboard was a very nice one for its time, but lacks many modern useful features such as UEFI capable network boot, a graphical UEFI, UEFI Secure Boot etc. Still, it goes to show that with firmware updates and a little patience, even older hardware can still be relevant and useful in this day and age and doesn’t need to be carted off for recycling just yet. There’s plenty of power still left in Sandybridge era hardware, depending on your usage scenario of course.


Adding Office365 licenses to new users via PowerShell

December 12, 2017

One of the tasks any school has to do each year is remove old students and add new ones. Using the built-in CSVDE tool, you can bulk import users into Active Directory very easily. Once there, they’ll get synchronised up to Office 365 (provided they are in an OU that is selected for sync) as new users. Good stuff! The only problem is that all those new users do not have licenses assigned to them in Office 365, which means they can’t use anything. You could manually assign a license to each user individually using the Office 365 website, but that will take hours, if not days if you have a huge number of students to license. Thankfully, there is a better way: PowerShell.
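As a rough sketch, a CSVDE import run from an elevated command prompt on a domain-joined machine looks like this (the filename is just an example):

```
:: -i switches CSVDE into import mode, -f names the input file and
:: -k carries on past common errors such as "object already exists"
csvde -i -f newstudents.csv -k
```

The CSV itself needs a header row of the LDAP attribute names you want to populate (DN, objectClass, sAMAccountName and so on), with one user per row beneath it.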

A very small script that is only 11 lines long will load usernames from a separate CSV file and assign licenses to users based on that file. Here is the script:

Import-Module MSOnline
Connect-MsolService
$users = Import-Csv "C:\Users\Username\Desktop\2018.csv" -Delimiter ","
foreach ($user in $users)
{
    $upn = $user.UserPrincipalName
    $usagelocation = $user.UsageLocation
    $SKU = $user.SKU
    Set-MsolUser -UserPrincipalName $upn -UsageLocation $usagelocation
    Set-MsolUserLicense -UserPrincipalName $upn -AddLicenses $SKU
}

Essentially, the script connects to Office 365 with your credentials (use an admin-level account to connect). Change the location of the CSV file to your own location. The contents of the CSV file are simple: just 3 columns in total – column 1 is the User Principal Name of each student, column 2 is the 2-digit country code of your country and column 3 is the product license you want to assign to the student. Name the 1st cell in each column UserPrincipalName, UsageLocation and SKU respectively. You can find out the exact license names for your Office 365 tenancy by connecting to it as follows in PowerShell:

Import-Module MSOnline
Connect-MsolService
Get-MsolAccountSku

You will end up with a list of license options for your tenancy, with a name along the lines of tenantname:STANDARDPACK. Copy and paste your desired license name into your CSV file for each user you want that license for.
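For reference, a CSV file matching the three columns described above would look something like this (the usernames, ZA usage location and license name are placeholders for your own values):

```
UserPrincipalName,UsageLocation,SKU
student1@school.example,ZA,tenantname:STANDARDPACK
student2@school.example,ZA,tenantname:STANDARDPACK
```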

Run the above script when you are happy with your CSV import file and if all goes well, a few minutes later all the users in Office 365 will have been correctly licensed.

You could get more complicated, so that each license is configured with certain options disabled and so on, but that involves extra complexity in your script. Keep it simple I reckon.
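If you do want to go down that road, a minimal sketch looks like the following, assuming the MSOnline module is already loaded and connected as in the script above. The SKU and plan names are placeholders – run Get-MsolAccountSku and inspect the ServiceStatus property to find your tenant’s actual plan names:

```powershell
# Build a license options object with certain service plans switched off.
# "tenantname:STANDARDPACK" and the plan names below are examples only.
$options = New-MsolLicenseOptions -AccountSkuId "tenantname:STANDARDPACK" -DisabledPlans "SWAY", "YAMMER_EDU"

# Assign the license together with the options in one step.
Set-MsolUserLicense -UserPrincipalName $upn -AddLicenses "tenantname:STANDARDPACK" -LicenseOptions $options
```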

DHCP Relay: the basics

November 26, 2017

If you run a small flat network, DHCP just magically works once it is set up. Devices get their addresses, devices communicate, everything works and everyone is happy. The moment you partition the network with VLANs however, things change. Devices in the additional segment(s) no longer receive DHCP packets. There are 3 options available to rectify this issue:

  1. Manually configure static IP addresses. Painful, but it will work.
  2. Set up a DHCP server per additional VLAN. Lots of duplicated work and, if you aren’t careful, DHCP packets can end up crossing VLANs, causing havoc with devices.
  3. Use DHCP relay to centralise IP address issuing from one central server.

I’ve just recently configured DHCP relay at my school and it’s working well. Getting it set up is a tad tricky, but once you understand how it works, it’s quite straightforward. Here is a guide on how to do it on a network that runs Aruba switches and a Windows 2012 R2 DHCP server.

It should be noted that in order for this to work, you need a core switch that is capable of IP routing. Layer 3 switches will do this, as will some higher-end Layer 2 switches from Aruba – the 2530 and 2540 models spring to mind. If you don’t have a routing-capable switch in your network, you are going to need a router connected to each VLAN to do the job instead. Your VLANs must also be set up correctly with untagged and tagged ports for this to work.

Firstly, decide on the IP ranges you want for your additional VLANS. Try to ensure you have enough space so that you don’t need to redo the scope later on.

Next, create these scopes with all the necessary extra bits in the Windows DHCP management console, but do not activate them when asked at the end of the wizard. Leave them deactivated for the time being.

On your core Aruba switch, assign an IP address to every VLAN that you want to use DHCP relay on. Make sure that this IP matches the range of your DHCP server scope, but that the address doesn’t conflict with something in the range.
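As an example, the commands below give VLAN 20 an address on the core switch – the VLAN ID, address and mask are made-up values to substitute with your own:

```
conf t
vlan 20 ip address 172.16.0.1 255.255.254.0
wr mem
```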

Next, enable IP routing on the core switch:

conf t
ip routing
wr mem

Next, add the IP helper address to each VLAN you want to use DHCP on. On the switch’s command line, type the following (conf t is not needed if you are still carrying on from the above step):

conf t
vlan 20 ip helper-address 192.168.0.10
wr mem

Substitute VLAN 20 for each additional VLAN ID and 192.168.0.10 for your DHCP server.

On each of your edge switches, do not give the switch an IP in any VLAN except your main or management VLAN that the core switch also resides in. Point each edge switch’s IP default gateway address to the core switch’s IP address.
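On an Aruba edge switch, pointing the gateway at the core amounts to something like this (192.168.0.75 being the example core switch address used elsewhere in this article):

```
conf t
ip default-gateway 192.168.0.75
wr mem
```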

On your Windows DHCP server, you will need to add some static routes to the server unless its default gateway is pointed to the core switch. Odds are that the server isn’t pointed to the core switch but rather to a firewall for internet access, so the routes will need to be added manually. Open up a command prompt and type the following:

route -p add 172.16.0.0 mask 255.255.254.0 192.168.0.75

Repeat the above command for each VLAN you want DHCP on. Substitute 172.16.0.0 with your own network, mask 255.255.254.0 with the correct subnet mask and 192.168.0.75 with your own core switch IP.

Lastly, activate the scope(s) in the Windows DHCP console. You can test things out by using a client PC in each VLAN and releasing and renewing the IP address. You should obtain an address that is correct for each VLAN, and there should be no spill-over between the VLANs that would cause network chaos. You should be able to see the clients appearing in the Address Leases section of each DHCP scope.
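From a Windows client in each VLAN, the release/renew test looks like this at a command prompt:

```
ipconfig /release
ipconfig /renew
ipconfig /all
```

The output of ipconfig /all should then show an address from the correct scope, with your central server listed as the DHCP server.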

Hyper-V bug in Windows 10 v1703

November 4, 2017

I encountered a nasty little bug in Windows 10 v1703 Hyper-V a.k.a the Creators Update this past week. If you create a Generation 2 virtual machine and try to PXE boot that VM, regardless of whether it’s on an internal switch or bridged to an external network, you will end up with the following screen:

[Image: Hyper-V Generation 2 VM failing to PXE boot]

No matter what you do, the VM will not PXE boot. Disabling Secure Boot and fiddling with other options will not help. I was very confused by this problem, as I’ve PXE booted generation 2 clients before. A few searches later revealed this link which explains the problem in greater detail.

In short, the only answer is to either downgrade to Windows 10 1607 or upgrade to Windows 10 1709, which was released a little over 2 weeks ago. Generation 1 VMs are not affected and you can PXE boot them successfully, but they do have a higher overhead than gen 2 VMs. How this bug crept into Hyper-V is curious to say the least, but at least there’s a definitive fix. I should add that the bug has not been fixed as of the latest cumulative update for 1703 and is probably unlikely to be fixed, given the way Microsoft now releases Windows 10 updates/upgrades.

Being a good net citizen: SPF, DKIM and DMARC records

Spam and Phishing emails are some of the more visible scourges of the modern internet. No one enjoys opening up their mailbox and seeing junk clutter up the place, or seeing a mail that tempts you to enter credentials somewhere because it looks legitimate. The war against Spam and Phishing is an on-going battle, with many tools deployed to try and keep a user’s inbox clean.

If you own or manage a domain on the internet and that domain makes use of email, it’s only right to be a good net citizen and set up SPF, DKIM and DMARC records. Together those 3 make a three-pronged fork that can be stabbed into the heart of junk mail, though they each do a slightly different thing. Let’s take a look at them:

SPF essentially denotes who is allowed to send mail for your domain. Anything that doesn’t match the details in the record is to be considered an attempt to spoof your domain and should ideally be rejected, provided the record is set up as such. If you have a small domain with simple records, SPF is incredibly easy to set up. It becomes harder if you are a giant corporation or have lots of mail being sent from third party bulk mailers, but even those use case scenarios can be brought into line so that you have a valid SPF record. If Microsoft, Google and others can do it, why can’t we?
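As a sketch, an SPF record is just a TXT record in your DNS zone. For a domain that sends mail only via Office 365, it would look something like this (example.com is a placeholder for your own domain):

```
example.com.  IN  TXT  "v=spf1 include:spf.protection.outlook.com -all"
```

The -all at the end tells receivers to reject mail from any source not covered by the record; ~all is the softer variant that asks for failing mail to be treated as suspicious rather than rejected.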

DKIM is a little trickier. DKIM-enabled servers sign outgoing mail with a digital signature, and receiving servers validate the signature using the public key published in the DKIM DNS record. This way, mail can be verified as having been sent from domain abcdefg.com because the signature can be verified by consulting the DKIM record in abcdefg.com’s domain. If the validation fails, it’s either because the mail was forged or the message was modified en route. Since spammers aren’t running your mail server, they can’t validly sign outgoing messages with your private key, so when a destination server checks the signature, the check will fail.
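The published key is again just DNS: a TXT record under a selector name of the sending server’s choosing. The selector, domain and truncated key below are placeholders:

```
selector1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
```

The p= value is the public half of the signing key pair; the private half never leaves the mail server.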

DMARC sits on top of SPF and DKIM. While SPF contains syntax for what to do when mail fails a check, DKIM does not. DMARC essentially tells a recipient mail server what to do with mail that fails the SPF/DKIM checks. Mail can either be allowed through to be processed as the destination sees fit, sent to the Spam/Junk folder or rejected outright. Set to reject mode, and along with a -all mechanism in SPF, this will ensure that spammers cannot spoof mail from your domain (in theory).
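A DMARC record in reject mode looks something like this (the domain and report address are placeholders):

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The optional rua= tag asks receiving servers to email you aggregate reports, which is very handy for spotting legitimate senders you forgot to cover before you tighten the policy.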

It’s not perfect though. In order for the 3 records to be effective, the destination mail server needs to check the records. If the server doesn’t and simply accepts mail as is, junk mail will make it into the inbox from forged senders. The records also don’t help if a spammer compromises a legitimate account in a domain with all 3 records, as when the mail is sent out via that domain, it will pass all checks on the destination end, as it was sent from a domain with valid records. To prevent this, you’ll need to set up rules to detect outgoing spam and block it from being sent. Each mail server will have different instructions on how to do this.

Office 365 and G-Suite both include records for SPF, while DKIM takes a few more steps to set up in Office 365. G-Suite also supports DKIM as far as I know, but since I don’t use the product, I don’t know how hard or easy it is to set up.

While nothing is ever perfect in the war against spammers, a huge amount of junk mail could be stopped cold if more domains published valid SPF, DKIM and DMARC records. Banks and financial institutes that are a favourite target of fraudsters could save themselves a lot of grief by having destination domains reject all mail that isn’t legit. IP block lists and content filtering will remain an important part of the game, but if more junk mail could be stopped at the edge before being accepted and processed, the better off the entire internet will become.


Sage Pastel Xpress/Partner V12 and 64 bit Outlook

EDIT: After migrating the department to V17, there were no issues out of the box. V17 ships with a much newer DLL file, which interfaces with 64 bit Outlook just fine.

The Sage Pastel Xpress and Partner products pretty much rule the South African landscape for accounting packages. Almost everywhere you go, you’ll find some Pastel product keeping the books up to date. Our school is no exception, running Partner for the 3 ladies in our accounts department.

With my recent move to Office 365 for mail, I installed Office 2016 64 bit edition on the PC of the debtors clerk to access her mail in Outlook. No problem there, everything worked as it should. However, a few days later she called me back as she was unable to send statements out of Pastel Partner; the PC now threw up an error message when Partner tried to invoke Outlook. This wasn’t a problem in the past, as we’ve only ever used the 32 bit editions of Office. Office 2016 is the first time we’ve installed the 64 bit edition.

It turns out that out of the box, Partner V12 can’t interface with Outlook 64 bit. I don’t have a 32 bit edition of Office 2016 at work, so I needed to get the functionality restored. Luckily, a bit of internet searching revealed the answer: use a replacement DLL file on the Partner installation disk. The process is as follows:

  • Close Partner and Outlook.
  • Copy the NewMail.dll file from the Pastel disk\Utils\Outlook 64-bit folder to C:\Program Files (x86)\Common Files\Softline Pastel. Overwrite the existing file.
  • Run the Component Setup utility in the Pastel folder in the Start menu. This will briefly re-register files, including the replaced DLL file.
  • Try to mail any statement from inside Partner, it should now invoke Outlook correctly.
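The copy step above can be sketched as commands from an elevated command prompt, assuming D: is the Pastel disc – adjust the drive letter to suit:

```
:: Keep a backup of the original DLL, then copy the 64-bit-aware one over it.
copy "C:\Program Files (x86)\Common Files\Softline Pastel\NewMail.dll" "%TEMP%\NewMail.dll.bak"
copy /Y "D:\Utils\Outlook 64-bit\NewMail.dll" "C:\Program Files (x86)\Common Files\Softline Pastel\NewMail.dll"
```

After the copy, still run the Component Setup utility as described above so the replaced DLL gets re-registered.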

I checked a Partner V11 disk and it didn’t contain the DLL file, so I assume it was only introduced with V12. It’s possible that using the DLL file from the V12 disk would work with previous Partner versions going back a while, but I don’t know how compatible or reliable it would be. I have yet to test Partner V14 or V17 to see if they are compatible out of the box or will also need the DLL file replaced. Since V14 and V17 were digital downloads with no extra folders, it’s going to prove interesting when I migrate the accounts department.

Ddrescue to the rescue

September 20, 2014

A few weeks back, thanks to the blue screen caused by Microsoft’s batch of faulty updates, I formatted a teacher’s class computer and redid it from scratch – this was before I managed to find the workaround to fix the blue screen issues. The computer had been running fine since then, until this past week. The teacher started complaining bitterly about how slow the PC had become. I checked for malware, as well as for any other crappy software that may have been causing the slowdown. I found nothing. I asked the teacher to monitor the PC while I investigated further.

A few days later, the teacher was even more frustrated with the machine. Now it was taking forever to start up, shut down and was hanging on applications. I looked through Event Viewer, only to discover ATAPI errors were being logged. Not just one either, there were dozens of errors. The moment I saw this, I knew that the hard drive was on the way out. While the SATA port could be faulty or even the cable, the odds of those being the culprits were rather low. Too many bad experiences in the past have taught me that it is almost always the drive at fault.

I procured a spare drive and decided the quickest fix was to simply clone one drive to the other. Using Clonezilla, I tried to do the clone. On my first pass, about 75% of the way through, the PC looked like it had gone to sleep and I couldn’t see any output on the monitor. I couldn’t revive the PC, so I rebooted and tried the procedure again. This time, it got up to about 97.5% before it crashed out. Based on what I saw, Clonezilla was hitting bad sectors, corrupt files or a mechanical weakness in the drive. Now I was getting worried, because any more cloning attempts could hasten the end of the faulty drive. Not only that, it was wasting time. Setting up the PC from scratch again was my last resort, since it would take hours. Before I gave up and did that, I remembered Ddrescue.

I had tried to use Ddrescue on my home computer more than a year ago, when the hard drive holding my Windows 8 install died. Sadly, that drive was too damaged even for Ddrescue to save. I was hoping the teacher’s hard drive hadn’t yet reached that stage.

I ran Ddrescue and then waited as the drive literally copied itself sector by sector over to the new drive. What I wasn’t aware of is that Ddrescue doesn’t understand file systems – it just copies raw data from one drive to another. This means it will copy any file system, but in order to do so, it must copy every block on the disk. A tool like Clonezilla will understand a file system and only copy used data blocks, therefore saving lots of time by not copying essentially blank space.
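For reference, a typical two-pass rescue with Ddrescue looks like this – /dev/sda, /dev/sdb and the map filename are examples, and getting the source and destination the right way round is critical:

```
# Pass 1: grab everything readable, skipping troublesome areas quickly.
# -f forces writing to a block device, -n skips the slow scraping phase,
# and the map file records progress so the run can be resumed.
ddrescue -f -n /dev/sda /dev/sdb rescue.map

# Pass 2: go back to the bad areas recorded in the map and retry them.
ddrescue -f -r3 /dev/sda /dev/sdb rescue.map
```

This matches the behaviour described above: one quick pass over the whole disk, then a return visit at the end to pull out what it can from the bad patches.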

Ddrescue did hit one patch of bad data, but was able to keep going, then came back at the end to try and pull out what it could. Thankfully, whatever bad data there was wasn’t too major, and Ddrescue completed successfully. Booting from the new drive was a success and, best of all, the speed was back again. I did run sfc /scannow at the command prompt to check for any potentially corrupt system files. SFC did say it fixed some errors, and I rebooted. Apart from that, it looks like I managed to save this system in the nick of time. The old hard drive was still under warranty and has been returned to the supplier, who can exchange it for a replacement, which will become a new hot spare for some other classroom.