Archive for July, 2010

Tip to remove incorrect tags in Windows Live Photo Gallery

July 24, 2010

If you use Windows Live Photo Gallery like I do to manage your photos, the ability to add tags to photos is a very useful feature. Even better, in Windows Vista and 7, those tags get indexed by the system, so searching for photos becomes even easier. For example, if you tag a photo with “mom” and then search for “mom”, the tagged photos will come up, irrespective of their actual file names. However, there is one small problem with this, a human problem: spelling mistakes! 🙂

Sometimes you tag a photo with the right word but misspell it, with the result that the misspelled tag now sits in the list of predefined tags offered for other photos. These appear in the drop-down menu when adding tags, and there is no way to remove them from this list, at least not within the application itself. However, there is a way to correct this using the Registry Editor. Follow the steps below, but make sure you are comfortable working in the Registry first.

  • Click on Start and type Regedit in the search box (Windows Vista and 7), or Start, Run, Regedit (Windows XP)
  • Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows Live\Photo Gallery\Library\PreviewPane\LabelAssignment\MRU
  • Delete the offending tag from the registry values under this key (a script sketch follows this list if you prefer not to click around)
  • Since this is based on the Current User hive in the Registry, deleting tags here won’t affect the tags of other users on the same computer.
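
If you would rather script the clean-up than click through Regedit, here is a minimal sketch in Python using the standard winreg module. Treat it as a sketch only: the assumption that the misspelled tag shows up as string data in one or more values directly under the MRU key is mine, so inspect the key in Regedit and export a backup (File, Export) before deleting anything. The tag “mmom” below is a hypothetical misspelling used purely for illustration.

# Minimal sketch, Windows only, current user's hive only.
# Assumes the bad tag appears as string data in values under the MRU key.
import winreg

MRU_PATH = (r"Software\Microsoft\Windows Live\Photo Gallery"
            r"\Library\PreviewPane\LabelAssignment\MRU")

def find_tag_values(bad_tag):
    """Return names of values under the MRU key whose data contains bad_tag."""
    matches = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, MRU_PATH) as key:
        index = 0
        while True:
            try:
                name, data, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            if isinstance(data, str) and bad_tag.lower() in data.lower():
                matches.append(name)
            index += 1
    return matches

def delete_values(names):
    """Delete the named values from the MRU key."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, MRU_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        for name in names:
            winreg.DeleteValue(key, name)

if __name__ == "__main__":
    hits = find_tag_values("mmom")  # hypothetical misspelling of "mom"
    print("Values containing the bad tag:", hits)
    # Only delete once you have checked the list above:
    # delete_values(hits)

Run it once to see which values match, then uncomment the delete call if the matches really are the ones you want gone. Like the manual steps, this only touches HKEY_CURRENT_USER, so other users’ tags are unaffected.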

This won’t remove the tag from photos that already have it, but it will remove the tag from the predefined options so that you don’t reuse it in the future. You can then remove the incorrect tag from those photos and apply the correct tag instead.

This tip also works with the standard Windows Photo Gallery in Windows Vista, except that the main key is named Windows Photo Gallery instead (I’m not 100% sure, it’s been a long time since I looked, but you should spot it pretty quickly).

Cloning a whole classroom with FOG – photos

I’ve been sitting with these photos for a long time now. I took them when my colleague and I cloned our main computer lab a few months back, thinking it would be a nice way to show the power of FOG with multicasting. All 38 computers were imaged at the same time, and thanks to a much improved network backbone, the speeds were far more consistent. Enjoy!

[IMG_0050: Computers PXE booting]

[IMG_0051: Other half of the room]

[IMG_0054: A closer look at the info on screen]

[IMG_0057: Almost ready to go…]

[IMG_0060: Away they go…]

One of those days

It’s been a while since I last posted here. I’ve had some ideas floating around in my head for future topics, which will be coming soon. Things have just been relatively quiet the last few weeks, to the point where I haven’t had much new to write about.

Today was one of those days network administrators hate. It seemed everyone needed my colleague and me at the same time. Hardware I attempted to repair refused to respond to what I thought would be a simple fix. Students were hounding us for acceptable use forms, even though they were all given forms at the start of the year; now that they urgently need them to complete a subject choice program, they come rushing to us.

I am just happy it’s the weekend, so that I can catch my breath a bit and recover, as well as plan for the upcoming week. More than likely though, I’ll end up listening to music, finishing off season 1 of Stargate Atlantis or playing more Dragon Age: Origins. A person has to switch off sometimes as well…

Has Windows 7 killed Desktop Linux?

The latest PCFormat magazine here in South Africa has an article asking this very question. While it never gets around to making a firm statement, it did get me thinking about the topic, enough to risk a blog post about it. I know posts like these can easily lead to a flame war, but I decided to share my views anyway.

In my humble opinion, the answer to the question is yes. Linux on the desktop has improved a thousandfold since I first discovered it in 1999. From an ugly, almost unusable desktop experience, it has slowly but surely polished its various components to the point where it is almost unrecognisable as the system from 11 years ago. Driver support has shot up in the last few years, to where most hardware is supported out of the box. Installation is a breeze, almost fully automated. A far cry from the old days indeed.

For all that, however, it has the same issues it has always had: a lack of true gaming ability, users who are unfamiliar with it, users who don’t know where to get help, and a shortage of big commercial software and applications. When Windows Vista came out, Linux supporters were salivating at the chance to convert Windows users, citing performance, security, cost and so on. A small number of people did convert, and the booming netbook market also helped to increase Linux’s market share.

Ultimately, the effort failed to pose any serious threat. Why? Because no matter which distro you choose, it still lacks the cohesion and polish that come with Windows or OSX. Taken another way, that is actually a compliment, as being compared with those systems at all shows how far Linux has come. That last point may sound inflammatory, but the advantage those systems have is that they are written by teams of people coding towards the same goals and targets. Applications are designed to integrate well together, and there are stable, documented programming interfaces underneath them. Linux has this as well, but it is an ever-shifting platform, with new features added almost every day.

Because of its very modular nature, a Linux distro contains code written to different standards. One group writing one application has different goals and skill levels from another group writing something else, and distro maintainers then package all of it together to make a typical Linux distribution. The situation has improved, largely thanks to Ubuntu’s efforts to standardise and polish the user experience, but I still see it fractured elsewhere.

Another problem is the sheer number of distributions out there. Discounting the specialised distros, many simply fork an existing distro, apply some new artwork and a few tweaks, and then try to set themselves apart from the crowd. Instead of contributing code back upstream to improve the source project, time and effort are wasted duplicating something that already exists. How do you explain this to an unclued-up end user? What makes distro X so much better than distro Z?

Application compatibility is another sore point. With Windows, you develop your application and, provided it is coded decently, it can run on anything from XP through to 7, whether it’s a Home or Professional edition. With Linux, you can never be quite sure of that, as each distro ships a slightly different kernel, glibc version and so on. Distributing software as source solves this, but it makes software management painful. If you don’t have internet access, the online software repositories mean nothing, and ironically they are often the easiest way to get new software and updates for the system. What if you are developing a proprietary product that can’t be distributed via the repos? What easy-to-use, universal graphical installer is there?

With all the talk about applications moving to the cloud in the future, people are predicting that the OS you use will become less important. To me that is a post for another day, however. What I want to end with is this: desktop Linux is unlikely ever to grow past a certain point unless someone takes the system and develops everything together as a whole. The FreeBSD project does this: it develops the entire base OS, then adds the GUI and other applications on top. If the FreeBSD people were interested in taking it further, they could build a tightly integrated OS that could eventually challenge anything. The problem is that they don’t have enough programmers to do that, and it would be pointless to duplicate work already done. It does, however, offer a glimpse of what could be…