16 July 2013

Hard Drive to .VHD

Let’s say you have an XP PC that died, and you want to run that installation in a virtual machine (VM).  The first step will be to harvest that installation into a virtual hard drive.  Different virtualization host software supports different file types for virtual hard drives, but I’ll be using the .VHD standard, which should work in Virtual PC, VirtualBox and VMware Player.

Preparing the physical hard drive

The hard drive’s from a failed PC, so step one is to be sure of the hardware and file system.  The drive is taken out of the dead PC and dropped into a known-good PC that is set to boot off safe maintenance OSs (mOS) and tools such as Bart, WinPE, Sardu, BootIt Next Generation, etc.  Do not allow the hard drive to boot!  You will also want to connect another hard drive with enough space to swallow what you will harvest, and make both connections using internal S-ATA or IDE rather than USB, for speed and SMART access.  FAT32 limits the size of a file to 4G, so if you are using Disk2VHD or capturing a .WIM image, your target drive’s file system should be NTFS.

First, I boot my Bart DVD and from there, launch HD Tune, and look at the SMART details.  You can use any SMART tool you like, as long as it shows the raw Data column for these four attributes: Reallocated Sectors, Reallocation Events, Pending Sectors and Offline Uncorrectable sectors.  For all of these, the raw data should be zero; if it is not, evacuate the stricken drive first as files, then as a partition image, before doing anything else, including a surface scan and other diagnostics.  Don’t trust SMART tools that only show an “OK” or “Fail” status; that’s next to useless.

If the hard drive is physically OK, then check and fix the file system from a compatible mOS.  If the volume to be salvaged is NTFS, then the mOS version should be the same as, or newer than, the OS installed on the drive.  So you can use WinPE 4.0 for any Windows, WinPE 3.0 for Windows 7 and older, WinPE 2.0 for Vista and older, and as we’re after XP in this case, we can use any of those plus Bart, which is built from the XP code base.  Run ChkDsk /F and capture the results, either via command line redirection (you may have to press Y if nothing happens, in case there are “Do you want to…?” prompts you can’t see) or by capturing text from the command window.
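For example, from a WinPE or Bart command prompt, a capture via redirection might look like this (a sketch only; the volume being checked is assumed to be C: and the salvage drive E:, and piping in Y answers any prompts you can’t see):

  echo Y | chkdsk C: /F > E:\chkdsk_C.txt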

Next, you will want to retrieve the Windows product key from the installation, in case that has to be re-entered to activate Windows when booted in the different (virtualized) PC.  I use Bart as the mOS for that, along with two free add-ons.  The first is the RunScanner plugin for Bart, which “wraps” the inactive installation’s registry hives as if that installation had booted these into action, so that registry-aware tools will see these as native (unfortunately, there’s no equivalent to Runscanner for Vista and later).  The second is ProduKey from Nirsoft, which reads the keys you need.

The final preparation step is to zero out free space so that these sectors are not included by certain types of harvesting process, as they will bloat the size of the .vhd you will eventually create.  You can download SDelete and use that from Bart; the -c and -z options will create a file full of zeros to fill the free space, and then delete the file.
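A minimal example, assuming the volume being prepared is C: and SDelete is on the path (the meaning of the option letters has shifted between SDelete versions, so check sdelete /? first; in recent versions -z is the zero-free-space switch intended for virtual disk preparation):

  sdelete -z C: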

Harvesting as loose files

In Windows 9x, if you copied every file to a new hard drive and created the correct MBR and PBR contents, Windows would boot and run just fine.  This is no longer true for NT-based OSs such as XP, but you may still want access to loose files, and if the drive is failing and dies soon, that may be all you get - and more useful than a partial partition or installation image.

I use Bart as the mOS for this, finding it easier to work from a GUI shell than command line.  I have Windows Directory Statistics integrated into my Bart, and use that to compare file counts to be sure I haven’t left anything out.

Harvesting as partition image

There are various partition managers that can boot off CD, and the one I’ve been using is BING (Boot It Next Generation).  If you use BING in this way, it may show an installation dialog when it starts; cancel that, as you don’t want to install it as a boot manager.  Then go into partition maintenance mode and work from there.

You can also boot BING within the virtual machine, from physical disc or captured .ISO, as long as the VM’s “BIOS” is set to boot optical before hard drive.  BING can be used in this way to resize partitions within .VHD, but cannot change the size of the “physical hard drive” as seen within the VM; use VHDResizer for that, from the host OS.

BING can also save a partition or volume image as a set of files, of a maximum size that you can select to best fit CDRs, FAT32 file system limitations, etc. and that’s quite an advantage over .WIM and .VHD images and the tools that create them.

When in BING, it’s a good idea to display the partition table, and take a picture of that via digital camera, in case there are any “surprises” there that kick in when attempting to boot the virtual machine later.

I can also use DriveImage XML as plugged into Bart, as a tool to create and restore partition and volume images; unlike BING, you can browse and extract files from within the image it creates.  There are probably other backup tools that can do the same, but make sure they have tools to work from bare metal, and that these tools work within virtual machines, as Bart and BING can do.

Harvesting as installation image

You can use Microsoft’s imaging tools to create a .WIM: a file-based partition image with the OS-specific smarts to leave out the page file, hibernation file, System Volume Information etc.  Because in this case the original PC is dead, we don’t have the option to generalize the installation via SysPrep; a relief in a way, as SysPrep is so rough you’d want some other full backup before using it.

Access to these imaging tools was difficult at best, in XP and older versions of Windows; you had to be in corporate IT or a large OEM to legitimately get these.  You can now download what you need from Microsoft for free, though it’s a large download, and in any case the full OS installation disc can now boot to a command line and function as a WinPE.

In Vista and 7, you’d use a command line tool called ImageX to harvest (capture) to .WIM and apply .WIM to new hard drives, and you can add a free 3rd-party tool called GimageX for a less mistake-prone UI.  In Windows 8, you’d use the DISM command instead, and I’ve not sought or found a GUI front-end for that; instead, I’m using batch files to remember the syntax involved.
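As a sketch of the syntax involved (drive letters, paths and image names here are assumptions, not the ones I actually use), a capture with ImageX from WinPE 3.0, and the rough DISM equivalent from WinPE 4.0, look like this:

  rem WinPE 3.0 + ImageX: capture C: with maximum compression
  imagex /capture C: E:\images\xp.wim "XP harvest" /compress maximum

  rem WinPE 4.0 + DISM (Windows 8 era)
  dism /Capture-Image /ImageFile:E:\images\xp.wim /CaptureDir:C:\ /Name:"XP harvest" /Compress:max

Either way, E: stands for the target drive connected to receive the harvest.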

Harvesting directly to .VHD

There are said to be many tools for this, but I’ve only found one: Disk2VHD.  There’s also something called P2V, but it appears to be part of a costly software product aimed at the corporate IT sector, and may apply more (only?) to Microsoft’s Hyper-V virtualization technologies.

Disk2VHD boasts the ability to image partitions that are in use by the running OS, via that OS’s shadow copy engine.  Unfortunately, that is the only way it can work – so it will not run from Bart or WinPE.  You are obliged to boot a hard drive based Windows to host the tool, exposing the hard drive to be harvested to whatever that OS may do. 

That’s too risky for at-risk hard drives, as Windows tends to fiddle with every hard drive volume it can see.  WinME and XP are the worst offenders, as they enable the System Restore engine on every hard drive volume detected and immediately start writing to those file systems.  At least Vista, 7 and 8 don’t do that!

It’s important to remember that Disk2VHD captures entire hard drives, not just partitions or volumes, even though the UI implies the latter by allowing selection of partitions and volumes to be included.  For example, if you have a 64G C: on a 500G hard drive and you deselect all volumes other than C:, you will create a virtual hard drive 500G in size with a 64G C: partition on it, the rest being left empty.  You may have hoped for a 64G drive filled with C: but that is not what you’ll get.

Size matters

Guess what the sizes of these various harvestings will be, compared to the original drive?  Then check out the results of doing this for real, for a mature in-use XP installation with shell folders relocated out of C:

  • 500G - capacity of original hard drive
  • 30G – size of original NTFS C: partition
  • 12.0G – size of files harvested
  • 13.5G – size of files harvested, as occupied space on FAT32
  • 7.48G – size of BING image file set
  • 4.69G – size of .WIM image created from WinPE 3.0 and ImageX
  • 12.6G – size of .VHD file as created by Disk2VHD after zeroing preparation
  • 10.8G - as seen within .VHD via Bart boot within the VM

Note that .VHDs are ignorant of the file system and OS within; this is why it’s inappropriate to blame the tool’s creators when a harvested installation fails to boot within a VM.  A significant effect of this is that any sectors containing anything other than zeros will be included as explicit blocks within the dynamic and differencing types of .VHD, which would otherwise have saved host space by leaving out empty sectors.  The .VHD manages space in large blocks, so this effect is made worse; if any sector in a block is non-zero, the whole block is added to the .VHD.

Of these, the .WIM is the most compact (I capture using the strongest compression offered); then the BING image file set.  After that, things are pretty much as you’d expect, though even with the zero-fill preparation and before the .VHD has been used, the (dynamic) .VHD file is already significantly larger than the files it contains.

Creating a new .VHD

If you used the Disk2VHD tool, you already have your .VHD populated – but it may not be of a physical size (as seen from within the VM) that you’d like.  In theory, if the partition size is limited, the “physical” space outside that should never be written to, so it should never contain anything other than zeros, and thus never add size to any dynamic or differencing .VHD file on the host.  In practice, you may prefer to constrain the physical size of the virtual hard drive, especially if choosing the fixed type of .VHD that always contains every sector as-is, regardless of content.

When creating a new .VHD you set the capacity of the hard drive it pretends to be, and whether the .VHD will be of fixed or dynamic type.  The fixed type is the .VHD equivalent of a fixed-size pagefile; if it’s created as an unfragmented file, it should perform better than one that grows in fragments over time.
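If the host OS is Windows 7 or later, DiskPart is one way to create either type from the command line; a sketch, with the file path and capacity (in MB) as assumptions, noting that DiskPart calls the dynamic type “expandable”:

  diskpart
  create vdisk file="D:\VMs\XP.vhd" maximum=40960 type=expandable
  exit

Substitute type=fixed for the fixed type; it pre-allocates the full capacity up front, so it takes much longer to create.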

Either way, your host volume should have enough free space to contain the full size of the .VHD’s internal capacity, or at least that of all partitions and volumes within the .VHD you intend to ever use.

You can also layer .VHDs over each other, with a fixed or dynamic .VHD as the base.  Each layer above that will be a differencing .VHD, valid only as long as the lower layers do not independently change.  Both differencing and dynamic .VHDs use the same storage model, which is like a “super-FAT” chain of large blocks that explicitly exist only if any contents within have changed, relative to the layer below.  Everything below the base layer is assumed to be zeros, so a block that has never contained anything other than zeros need not explicitly exist in any differencing or dynamic .VHD.

That means every .VHD layer may grow as large as the size of all partitions and volumes within it; host free space should be available accordingly.

Because changes to an underlying .VHD layer will invalidate all layers above, they are generally used in two ways.  You can have a base image that is kept read-only so that it can never change, and this can be the “installation image” over which multiple VMs can run, each with their own changing differencing .VHD; this is how XP Mode is set up.  You can also use a differencing .VHD above that, which is considered disposable until it is merged with the .VHD below; you may use that for guest accounts, malware investigation and other testing, kiosk use etc.  Virtual PC may use this as “undo” disks, though these use the .VUD rather than .VHD file type.
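On a Windows 7 or later host, DiskPart can also create such a differencing layer over an existing base; a sketch with hypothetical file names (the base should then be treated as read-only, as described above):

  diskpart
  create vdisk file="D:\VMs\XP-diff.vhd" parent="D:\VMs\XP-base.vhd"
  exit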

To merge .VHDs, you need enough free disk space on the host for the output .VHD, which could be as large as the .VHDs being merged.  To compact a .VHD (i.e. discard any blocks full of zeros in dynamic .VHDs) you need enough free host disk space to create the new .VHD; these considerations make .VHDs costly in terms of disk space, and of hard drive head travel to partitions beyond where they are stored.

Populating the new .VHD

If you used partition imaging tools like BING or DriveImage XML, or installation imaging tools like ImageX from WinPE, you will need to write these images to the new .VHD you created above.  You may also need to move contents from one .VHD to another, if you need to change the base .VHD type, or don’t want to use VHDResizer to change the size of the physical hard drive contained within the .VHD.

One way to do this is by using these “real” bootable discs within the virtual machine, either by booting the VM from the physical disc, or by capturing the relevant .ISO and booting the VM into that.  If you can’t “see” host hard drive volumes within the VM, then the materials should be copied into another .VHD that is attached as an additional hard drive before starting the VM session.  You can do that in suitable host OSs (e.g. Windows 7) by mounting the .VHD for use as if it were a native hard drive volume; otherwise you may need a VM that can see outside via network shares etc.
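For example, on a Windows 7 host you can attach a transfer .VHD with DiskPart before starting the VM session (the path is an assumption):

  diskpart
  select vdisk file="D:\VMs\Transfer.vhd"
  attach vdisk
  exit

Once attached, its volumes appear alongside the host’s own (assign a drive letter in Disk Management if one doesn’t appear automatically), so you can copy the image files in; then detach it with the same select vdisk followed by detach vdisk before booting the VM.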

4 July 2013

Build One Skill in 2013? Virtualization

My life’s been fuller for the last five years and will hopefully continue to be, so I generally don’t devote such huge amounts of time to particular interests.  Still, whatever time I do have to build new skills, I’ll probably spend on virtualization, i.e. running one OS within another.  That competes with Windows 8 for attention; I’ve slain the closest crop of Windows 8 dragons, but that’s another story!

I have three “egg-eating snake” client jobs at the moment, i.e. jobs that can’t be fully gulped down in one session.  If they’re reading this, they will recognize themselves; common to all three cases is (or may be) virtualization.

Client A had an XP PC that failed at the motherboard-and-processor level, with several installed applications that couldn’t be reinstalled on a modern Windows PC for one reason or another.  The hard drive was dropped into a tested-good used PC of similar vintage, various dragons were slain etc. until all was working, but delivery was postponed while a courtesy laptop was in use.  During this time, the PC became slightly flaky, then increasingly so; hardware tests were initially OK, but over several months, it has deteriorated to the point it is clearly unfit for use.

By now, a large amount of work has gone into the system; how can one preserve that value?  I could look for another set of XP-era hardware to host the hard drive installation, but it may be better to harvest that into a .vhd and run it virtualized within a modern PC.  Once in a .vhd, it should survive being ported from one PC to another, as long as the same virtualization solution is used to host the .vhd – but will the installation survive the conversion, and what should the host be?

Client B is already happily using XP Mode on Windows 7 to keep old software alive, but a crisis is looming because the .vhd is growing in size to fill the host’s C: space.  XP Mode is built by combining a pre-defined read-only .vhd with a differencing .vhd; these combine to produce what XP Mode sees as its “C: drive”. 

Within XP Mode, this virtual “C: drive” is 126G in size, with only 5G or so used.  But outside in the host OS, while the parent .vhd is a comfortable 1.1G in size, the difference .vhd is an absurd 19G+, leaving under 1G free space.

Management of this issue is constrained by .vhd decisions cast in stone by XP Mode, and it’s not clear whether the installation will survive changes to the .vhd (e.g. merging the parent and child into a non-differencing .vhd, transferring contents to a fresh smaller and/or fixed-size .vhd, etc.).  It’s also unclear whether there’s any predictable maximum size limit to this .vhd bloat, and thus whether a one-off enlargement of the host C: partition (at the expense of extra head travel to other volumes) will permanently fix the problem.

Client C has a new laptop with a geographically-broken warranty/support trail, and an edition of Windows 7 installed that does not match the OEM sticker.  After a failing hard drive was replaced, Windows demands to be activated, and this fails with both the placeholder key used within the installation and the one from the OEM sticker.

So he has an “upgrade opportunity” to choose (and alas, buy) whatever version and edition of Windows he likes, and this choice is complicated by the need to run an important “needs XP” application that hasn’t yet been tested within XP Mode or other virtualization.  Which virtualization host should he use?  That choice affects that of the OS; Windows 7 Pro for XP Mode (the solution in which I have the most experience), Windows 8 Pro for Client Hyper-V (may be faster, may integrate less well, needs XP license) or client’s choice of cheaper 7 or 8 if using VirtualBox or VMware Player (both of which will also need an XP license).  Where are such licenses going to (legally) come from, in 2013 onwards?

Common to all three clients, is a need for virtualization skills.  I need to be able to convert from “real” installations in hard drive partitions to .vhds, get these working in at least one free virtualization host, and be able to manage file size and other issues going forward.  XP Mode integrates well and includes the XP license, but dies with Windows 7 and needs the costlier Pro edition; it may be better to abandon that in favor of VirtualBox or VMware Player, which aren’t chained to particular editions and/or versions of Windows.  If those also work from Linux, seamlessly hosting the same .vhd installations, then that would be a deal-clincher; I could skip bothering with Windows 8’s Client Hyper-V altogether.

There are more details (and especially, links) on these scenarios in this recent post.

Why I (Don’t) Blog

If you find yourself in the situation where you have to present a non-trivial amount of information to people, success may depend on the method of communication you choose to use.

Face to face

Rich impression, poor retention, and very demanding of resources!  When someone’s taken the trouble to be physically present, you should put them first, above lazier contacts such as phone calls, and neither of you will get any other interactive work done during your meeting.

So if there’s a non-trivial amount of content to deliver, rather send that via fax or email ahead of time, so that the meeting can be pure live interaction rather than “death by PowerPoint”.  Make sure you allow enough time for this pre-meeting information to be processed; in the days before email, I’d budget a day per fax page.  Need folks to read a 5-page fax before a meeting?  Send it at least 5 days before.  Not enough time?  Edit the content down to fewer pages.

Telephone

The pits; in my opinion, the “voice telephone” model of paid-for real-time voice-only calls should have died in the 20th century, along with telegraph Morse code.  Very intrusive real-time demands, no content logging or retention for either party, and worse communication effectiveness than face to face (clothespeg-on-the-nose sound due to poor frequency response, no non-verbal gesture/expression cues, etc.).  And then there’s “telephone arm”.  What was the other idea?

I’d also refuse to do any significant interactions over the phone, especially with larger entities that “record calls for quality purposes”.  If things escalate to a courtroom appearance several months down the line, guess who’s going to look like the unreliable witness?

Skype

Ahhh yes, now we’re getting somewhere.  No cost (at least if you’re on ADSL rather than some ghastly rapaciously-priced mobile Internet access), text chats are logged, files can be sent and links or text pasted in and out, and there’s sound and video available too.  It’s real time, but “sticky”; a pending chat is more likely to grab your attention when returning to your system than an email would, and can be continued at any time.

Email

Best of the lot; excellent logging and reusability, the best way to send content with the lowest risk of “glaze over” (which pretty much kills phone and face-to-face for anything substantial).

Blog

If you find yourself having to say the same thing to different folks again and again, then it’s better to blog it and say “yonder post refers”.  That is why I started blogging, after getting that “stepping on ants one by one” feeling when I was active in usenet technical news groups.

But I find it much harder to write when I don’t have a specific audience in mind, and that is why I blog so seldom.  In fact, many of my blog posts are generalizations of content originally written for a specific person.

Living With(out) XP

Microsoft support for Windows XP SP3 (the last supported SP level for XP) is due to end April 2014.  By “support”, I don’t mean the help line center, but the testing and repair of exploitable code defects, as well as perhaps technical assistance to software and hardware developers.  Articles on vulnerabilities won’t state XP may be vulnerable; they will simply ignore XP as if it never existed. 

New or revised hardware will likely not have drivers for XP, and new software will start to exclude XP from the list of supported OS versions.  This is likely to be more severe than it was for Windows 2000, because there’s a far bigger difference between the 2000/XP and Vista/7/8 code bases than there is between 2000 and XP, or within the Vista to 8 spectrum.

The trouble is, many of us still have software that “needs XP”; that gulf between the 2000/XP and Vista/7/8 code bases makes it far harder for XP-era software to work on newer Windows versions, even as these versions attempt to support such limitations in various ways.  Some of this software may have hardware dependencies that can no longer be met, or delve “too deep” into OS territory for virtualization to manage; examples include licensing protection code, close relationship with hardware (e.g. DVD burning software), and deep-system stuff like diagnostics, data recovery or anti-malware activity.

There are different ways to accommodate old software:

0)  Keep (or find) an old system

If you already have an XP PC running what you need to use, then you could keep that – as long as it lasts.  Monitor the hardware’s health (SMART attributes, etc.).

If the PC fails, you can try and repair or replace it, while preserving the existing installation.  Back that up, as the first boot may well fail in various ways; BSoD STOP crash, lockup, demand to activate Windows and other vendor feeware, protracted (and possibly doomed) search for new drivers, etc.  If you can get the hard drive installation working within tested-good hardware, that’s probably the best result from a compatibility perspective.  It may be your only choice, if you lack installation discs, or pre-install material, and/or product keys etc. for your crucial “needs XP” software.

XP-era kit is quite old now, and reliability is becoming a problem – like buying a used car with hundreds of thousands of kilometers on the clock, to start a road trip across the Outback.  Digital systems are based on analog parts, and at the volts-and-microseconds level, slew times can grow with no apparent problem until they fall outside the digital yes/no detection criteria.  Metal-to-metal contact points get corroded, and you’ll often find an old PC either works fine, or not at all; maybe it “just needs this card to be wiggled a bit” to get it working again, etc.  Welcome to Creepyville, you won’t enjoy your stay.

An alternate approach may be to harvest the hard drive installation into a virtual hard drive (.vhd file) and try getting that to work as a virtualized PC; jump ahead to (4).  You’d run the same “different hardware” gauntlet as dropping the hard drive into a different PC, with added virtualization-specific issues.  So far I’ve had no success with this; it’s been a non-trivial mission to attempt, and I’ve only made the attempt once so far – but maybe better tools will help.

1)  Build a “new” old system

In other words, if an app needs Windows XP and can't run on anything later, why not simply build a "new" XP system?  That would give the most compatible result, and should run well.

In practice, XP doesn't properly embrace new hardware and/or use it effectively, degrading the value of the hardware.  There's no (native, or in some cases "any possible") support for:

  • AHCI, which reduces hard drive performance
  • 64-bit addressing, limiting memory map to 3G or so
  • USB 3.0, so you're limited to USB 2.0 performance
  • 2T+ hard drive capacity
  • Touch screens

Some of the above are not yet relevant (USB 3.0) or may never be relevant (touch screens, for which Windows 8 is designed) but others (AHCI, 4G+ memory access) already bite deep.  The 2T hard drive limit isn’t yet reachable by internal laptop hard drives, but may become relevant for shared externals.

So building a "new" XP-only PC is something of a dead end, suitable only to shrink-wrap a crucial and irreplaceable application, on the understanding that the system won't be safe for general use (any Internet access, perhaps even inward file transfers via USB etc.).

Availability of new XP licenses is an issue, given that it has long been off the market as a saleable product.  In theory, Home costs the same as the cheapest non-Starter edition on later Windows, and Pro the same as corresponding versions of 7 and 8, but in practice you'd struggle to find legitimate stock.

There’s also uncertainty around the activation of Windows XP after support ends.  At the time the activation system was rolled out, we were assured that when Microsoft lost commercial interest in activated products, the activation system would be disabled so that software could be used without hassles, but it remains to be seen whether that promise is kept.  Worst-case scenarios are where new XP activations become impossible, possibly including those triggered by WGA (“Windows Genuine Advantage”), or even where existing installations are remotely disabled.

2)  Build a new system, app must "take its chances"

There's an element of "app must take its chances" involved in all solutions other than "build an old system" and dual-booting as such.  This is the most extreme case, where one simply ignores the application, builds a no-compromise modern system, and then hopes the old application can be made to work.

There are settings within Windows to treat applications as if they were running in particular older versions of Windows, but these don't handle every case successfully.

The main point of incompatibility arises from changes introduced with Vista, which attempted to curb the rampant malware threat.  Things that applications were formerly allowed to do with impunity are no longer allowed, and some apps aren't satisfied with fake compliance with what they demand to be able to do.  It's like moving from an unlocked farm house to a gated community!

Less likely to be a problem is the change from a 32-bit to a 64-bit OS, as required to access over 4G of "memory" (i.e. RAM plus swap space plus allowance for non-memory use within the map).

Some software can break when the hardware is unexpectedly "big", i.e. "too much" RAM, hard drive space or processing speed, independently of the 32-bit vs. 64-bit thing.  But most application software should not have a problem with either set of issues as such, though there are some other safety changes that stealthed in during the change to 64-bit that could hurt.

3)  Build a dual system

It's possible to build and set up a PC to run either one version of Windows, or another.  Only one can be run at a time, the hardware has to be compatible with both, and each OS runs natively, as if it had the whole system to itself.

Hardware compatibility becomes something of an issue; you either stunt the new OS, or you have to manually toggle CMOS settings to match the OS you're booting so that the new OS can run at full speed in AHCI mode.

You should get as good a result as (1), but have the same problems finding a new XP license.  Both OSs have to have their own licenses, which is costly, but the new OS can be a cheaper non-Pro edition.

There are some issues where each OS can trample on the other, if the C: partition of the inactive OS is visible.  I've dealt with such issues before, though not yet with Windows 8, and may have to adjust the specifics for that, and/or if the free boot tools used previously have changed version and/or availability.

In essence, I use a boot manager to hide the “C:” OS primary partition that is not booted, so that each OS can see its own primary as “C:” and doesn’t mess up System Volume Information etc. on the other OS.  All data and other shared material is on logical volumes D:, E: and F: on the extended partition, leaving one free slot in the partition table for boot manager and/or Linux, etc.

4)  Virtualize one system within the other

In practice, that means virtualize the old within the new.  The reverse is possible in theory, but will work less well as the new OS won't get the resources and performance it needs, so your "new" PC would run like a dog in "new" mode.

This is something of a gamble, because not all applications will work when the old OS is virtualized.  Anything that needs direct access to hardware, and/or access to specific hardware, is likely to fail.  That shouldn't apply here, but may, especially if the application fiddles with hardware as part of attempts to prevent unlicensed use - another facet of how feeware sucks.

The choice of parent OS becomes complicated: Windows 7 Pro for XP Mode, Windows 8 Pro for Client Hyper-V, or cheaper editions of 7 or 8 for VirtualBox or VMware Player.

In all cases other than XP Mode within 7 Pro, you would need an XP license, as you'd do for solutions (1) or (3). 

If this approach works, it gives a more convenient result than (3) because you can at least run the old and new OSs and the apps within them at the same time.  Interaction between the two systems may be as separate as if they were two physically separate PCs on a LAN, unless the virtualization host can hide the seams, as XP Mode may do to some extent.

I've had some experience with 7's XP Mode, but as yet none at all with 8's Client Hyper-V virtualization host.  So while 8 Pro and Hyper-V are more attractive going forward (Hyper-V is a more robust and faster technology than that of XP Mode), they are more of a jump into the dark for me at present.

3 July 2013

Space Invaders

A common hassle is software that hogs disk space on C:, and often the comment is made that “modern hard drives are so big, it doesn’t matter”.  The worst offenders are junk software (hello, Apple) that not only chews disk space, but has no settings to path that store to somewhere else.  This is something that the MkLink command can fix, by leveraging a feature of NTFS.
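A sketch of the junction approach, with purely illustrative paths (the real culprit folder will vary); this needs Vista or later, an elevated command prompt, and the offending application closed first:

  rem move the bloated folder off C:, then junction the old path to the new location
  robocopy "C:\ProgramData\Apple" "D:\Offload\Apple" /E /MOVE
  rd "C:\ProgramData\Apple"
  mklink /J "C:\ProgramData\Apple" "D:\Offload\Apple"

The rd is there in case robocopy leaves the emptied source folder behind; mklink /J will refuse to create the junction if that folder still exists.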

Another offender is Windows Installer, and the Windows update process, which also dump inactive files and “undo” junk in C:, with no facility to move it off.  I have not tried MkLink to address that problem.

Previously, these issues would annoy only those who don’t follow the duhfault “everything in one big doomed NTFS C:” approach to hard drive partitioning, but that is changing as SSDs mirror the practice of having a deliberately small C:, with large seldom-used material on hard drive volumes other than C:.  I’ve done this for years as a way to reduce head travel; SSDs do away with head travel, but are about as small as the small C: I’d use as an “engine room” containing no data.  Update and Installer bloat really hurt on today’s sub-PC mobile devices, for which puny SSD capacities are all that is available.

An invaluable tool to chase down space invaders is Windows Directory Statistics.  You can add a non-default “Statistics” action for Drive and Directory (File Folder) to run this, but it will misbehave when the target is the C: drive on Vista, 7 or 8; post-XP changes cause it to show System32 instead of all of C: in that particular case.
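For what it’s worth, here’s how I’d register such an action from an elevated command prompt; a sketch only, since the WinDirStat install path is an assumption, the verb name is arbitrary, and I’m assuming the tool accepts the target path as its argument (in a batch file, double the % signs):

  reg add "HKCR\Drive\shell\Statistics\command" /ve /d "\"C:\Tools\WinDirStat\windirstat.exe\" \"%1\"" /f
  reg add "HKCR\Directory\shell\Statistics\command" /ve /d "\"C:\Tools\WinDirStat\windirstat.exe\" \"%1\"" /f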

An Unusual Case Study

I did a site visit to troubleshoot file sharing issues on a serverless LAN of five Windows XP PCs.  All four of the “workstation” PCs in the office would show the same unusual error dialog at the point one attempted to navigate into the workgroup via the Entire Network, Microsoft Networks UI; the error referred to “insufficient storage”.

All four of the “workstation” PCs had well over 10G free on C:, but the seldom-used fifth “backup” PC in another room had zero k of free space on C:, and fixing that, fixed the problem seen everywhere else.

In this case, the problem was caused by an insanely large log file for the “security toolbar” component of AVG Free 2012.  Now I always avoid installing “toolbars”, which are (nearly?) always useless things inflicted to serve the software vendors’ interests rather than those of the user, but updates (obligatory for a resident av) may reassert them.

A lot can go wrong with log files.  They’re typically opened and closed for every write, and are written to often, so that the log survives up to the point of a failure that could otherwise lose pending file writes in a crash.  There may be unexpected overhead imposed each time the file is opened and closed, which can make things even more brittle, and there’s often no sanity-checking on log file size, so a crash-log-repeat loop can get really ugly, real fast.

AVG’s prone to this sort of nonsense; in addition to large wads of update material, partially held in non-obvious places (MFAData, ProgramData subtrees, etc.) it can also spawn gigabytes in Dump files.  This is the second time I’ve seen an out of control AVG log file taking every available byte of space, and it’s annoying when this is due to an unwanted “toolbar” component that should not even be installed.  Without FAT32’s 4G maximum file size limit, this “text log file” grew to 5.1G, leaving zero space free on C:, so that “touching the network” caused needs that could not be met.

22 August 2012

Flash Offline Installers

Adobe and Java rival each other as the world’s most exploited software, forcing us to swallow vendor-pushed updates for fear of attack.  Flash, .PDF and Java are all edge-facing in a big way; Flash and Java from web sites, and .PDF both via web content and email attachments.  Many software applications auto-generate .PDF files sent as attachments with generic message text, so the recipient has no “Turing Test” opportunity to distinguish malware-automated from legitware-automated .PDFs coming from “someone they know”.

So with that in mind, you’d expect both vendors to be abjectly apologetic, going out of their way to make it easy for users to download and apply the constant stream of repairs for their defective code.  Which is more or less true for Java, but Adobe is another story – I found this blog post that sums it up best:

http://www.pretentiousname.com/flash_links/index.html

So the quest is on to find offline installers for Adobe Flash and Acrobat Reader, that are:

  • Really from Adobe, and not malware fear-bait fakes or trojanized versions
  • Actually up to date, and not older versions
  • Ideally, are free of unwanted by-catch (Google this, McAfee that, etc.)

With Acrobat Reader, this is fairly easy; you can use Adobe’s FTP site.  But that doesn’t help you with Flash.

I found some links from here to bypass the buggy installer...

http://helpx.adobe.com/content/help/en/flash-player/kb/installation-problems-flash-player-windows.html#main-pars_header

Links from here appear to work...

http://www.adobe.com/products/flashplayer/distribution3.html

These links were the newest version (as at 22 August 2012)…

http://download.macromedia.com/get/flashplayer/current/licensing/win/install_flash_player_11_plugin.exe

http://download.macromedia.com/get/flashplayer/current/licensing/win/install_flash_player_11_active_x.exe

http://download.macromedia.com/pub/flashplayer/current/install_flash_player_64bit.exe

http://download.macromedia.com/pub/flashplayer/current/support/install_flash_player.exe

…while these were several versions old...

http://download.macromedia.com/pub/flashplayer/current/install_flash_player_32bit.exe

http://download.macromedia.com/pub/flashplayer/current/install_flash_player_ax_32bit.exe

http://download.macromedia.com/pub/flashplayer/current/install_flash_player_64bit.exe

http://download.macromedia.com/pub/flashplayer/current/install_flash_player_ax_64bit.exe

Expect these links to shift around, as Adobe plays the shell game to force us to use their wretched online installers, complete with shovelware that rewards them for our need to fix their junk.

21 August 2012

Live Writer vs. Blogger; Picture Uploads “Forbidden”

I’ve found it very useful to catch screen shots via PrintScreen key to Irfan View, or using camera with flash off and macro on if pre- or post-Windows, and pasting these into support emails. 

So what should be easier than to do the same thing here, in blog posts? 

When I last looked at this, it was a fiddly affair, requiring a separate host for the picture files etc. but surely in this pro-Cloud age, those issues should have gone by now.

Apparently not; when trying to publish from Live Writer to this Blogger blog, this failed with “Forbidden”, and remained so until all pictures were removed.

So then I waded through Blogger’s online editor to pull up the pictures into the post.  That worked, for very low values of “work”; pictures were blurry, and the Blogger editor stripped spacing between paragraphs (spacing is always a bit of a sore point with HTML).  What a mess!

Behind the scenes, Blogger stores “blog” pictures in Picasa on the web.  So I logged into that and uploaded the pictures I wanted to use there - then in Writer, I pasted in the picture links from a page where I’d navigated to the pictures I wanted to use.  Still a cumbersome and messy procedure, but at least the text formatting wasn’t screwed up and the pictures look reasonable (as they should; all I want is to show small pictures at original size).

LibreOffice 3.6 “The Selected JRE Is Defective”

Having got past some initial installation hassles that required deleting my LibreOffice profile, I hit a problem with Java, while in Tools, Options.  Here’s how to test this if it happens to you; go to the MediaWiki section in Options…

If you have the problem, you will get this error dialog:

I “fixed” this by installing the 32-bit Java JRE 6 update 33, being the current most updated version of the fading Java 6 line.  It has to be 32-bit as LibreOffice is a 32-bit application (and fair enough), and it has to be Java JRE 6 rather than 7, because for practical purposes, LibreOffice 3.6 doesn’t work with modern Java JRE 7.5

There’s a lot of “UI pressure” at the Oracle site to download and use JRE 7 rather than JRE 6, which I took to mean 7 is fairly mature and 6’s days are numbered, so I recently switched from the 6 update 31 I was using, to the current 7 update 5.

There’s also a lot of detail on LibreOffice 3.6 and Java JRE 7, claiming that the new Java is supported, explaining why Oracle’s poor installation practices get in the way, and how one overcomes this.

Originally, the LibreOffice code base started as Star Office, which was acquired by Sun and used as a poster child for Java.  This continued with OpenOffice, but since the developers left after the Oracle takeover, the intention is to dump Java.  I’d be very glad if they did, because:

  • LibreOffice already loads faster than OpenOffice after reducing its use of Java
  • Java is edge-facing, frequently exploited and frequently updated
  • LibreOffice lags behind effective support for latest Java updates
  • Java installs tend to leave exploitable older versions in place

The last is a very old issue that still hasn’t gone away completely.

LibreOffice 3.6 “Unhandled Exception” Error

I’ve just upgraded from LibreOffice 3.5.2 to 3.6.0, and ran into two sets of problems; the one documented here, and issues with Java.

I didn’t install over my old version as I usually do, because this is noted not to work in the release notes for 3.6…

For Windows users that have LibreOffice prior to version 3.4.5 installed, either uninstall that beforehand, or upgrade to 3.4.5. Otherwise, the upgrade to 3.6.0 may fail.

…and I couldn’t find 3.5.5 anywhere on the LibreOffice site.  Also recently released by LibreOffice is 3.5.6, but documentation is poor (many links go to 3.6, not 3.5.6) and it’s unclear whether this will install over 3.5.2 as 3.5.5 would do, as a version waypoint for those wanting an upgrade path from < 3.5.5 to 3.6

So I uninstalled LibreOffice 3.5.2 from my Windows 7 64-bit SP1 PC with 64-bit Java JRE 7.5, then installed LibreOffice 3.6, but after an initial pause on first run, every attempt to launch LibreOffice failed with this “Unhandled exception: InvalidRegistryException” error…


…followed by this “Runtime Error! This application has requested the Runtime to terminate in an unusual way.” from the Microsoft Visual C++ Runtime Library:

Attempts to uninstall and re-install 3.5.6 or 3.6 again did not fix this issue, including after shutting down and restarting Windows.  Uninstalling the older co-installed OpenOffice (which worked fine with LibreOffice up to 3.5.2) did not make any difference, the same failure pattern remained.

I followed advice to delete my LibreOffice profile, i.e. the subtree within AppData\Roaming for LibreOffice, and that fixed the issue, which may have been linked to a language dictionary I’d added to Open Office.  I didn’t try a more refined fix (i.e. trying to isolate which part of the old profile was bad) as I didn’t need anything in the old profile; I did rename it away (while LibreOffice was completely closed, QuickStarter included) rather than delete it, in case I want to go deeper into this issue later.

Which let me get far enough to hit the Java problem (those who noted “Windows 7 64-bit with JRE 7.5 64-bit” may take a guess at the cause before reading my next post)

20 August 2012

Missed UAC Prompts May Silently Undo Installs

This is a more generic issue than what can go wrong with AVG installation, but not the usual user account rights issue.

Windows 7 makes UAC somewhat less obtrusive in various ways, which is generally welcome – but a side-effect can be to silently undo a software installation. 

What happens:

  • you start an installation
  • you aren’t prompted for admin permission
  • you leave the install process to run unattended
  • a UAC prompt pops up while you’re away
  • in Windows 7, this is now a discreet flashing item on the Taskbar
  • the UAC alert is ignored, times out and assumes “no”
  • the installation is silently aborted
  • you think the installer completed OK
  • then you find your new software’s simply “not there”!

I notice this in particular with LibreOffice, where the expected UAC prompt only appears quite late in the installation process. A possible factor may be renaming the installation executable (e.g. from “Setup.exe” to “NameOfApp.exe”), as this may defeat Windows recognizing it as an installer requiring administrator rights, and thus prompting early for permission to continue.

LibreOffice is an object lesson in why Open Source may be more useful than just being free of charge.  When Oracle took over Sun and cut off the salaries of the Open Office development team, the code itself was beyond their reach – so the product could survive, as continued by the same coders working elsewhere.

One beneficial side-effect of leaving Sun, was a move away from Java, for which Open Office was something of a “poster child”.  LibreOffice is already faster to start up (even without the “quickstarter”) as a result.

AVG Vanishes After Cleanup and New Install

This is not the generic issue of user account permissions or missed UAC prompts; it’s something more specific to AVG’s installation tools.  Here’s what happens:

  • you uninstall AVG
  • you restart Windows if prompted to do so
  • you do some manual cleanup of AVG leftovers
  • you run AVG’s cleanup tool
  • you install AVG
  • AVG works fine
  • you shutdown and restart
  • on next Windows session, AVG has vanished

If you run the AVG cleanup tool after uninstalling AVG (and perhaps being prompted to restart Windows) just before installing AVG, then install AVG, you won’t have been prompted to restart Windows between the cleanup and the new install.  The cleanup works by seeding HKLM..RunOnce with an entry to uninstall AVG, which it does the next time you shut down and restart Windows – so it kills the new install you have just completed!

The bug may be that because the cleanup tool finds no active traces of AVG when it runs, it doesn’t see a need to prompt a restart of Windows.  It then exits, so is no longer there to note the new material added by the fresh AVG installation.

There also may be a bug in the fresh (in my case, offline installer for AVG Free 2012 32-bit downloaded 19 August 2012) installer, if that fails to look for and detect the RunOnce entry created by the cleanup tool.

I’m fairly sure of the mechanism of this issue, because while still in Windows after installing AVG anew, I spent some time in Regedit and noted the RunOnce entry there.  I’m also fairly sure this is not a malware effect, as the PC had not been online since formal scanning with a variety of antivirus rescue CD scanners (via Sardu) and other tools (via Bart).

However, a possible contributor may be the manual clean-up I did before running the cleanup tool and the new install.  This was in XP SP3, and my efforts were limited to deleting the AVG and MFAData subtrees in Documents and Settings, All Users, Application Data.  I did no registry cleanup (either manual or automated), nor did I clear other Documents and Settings or Program Files AVG locations (where leftovers are trivial compared to 100M or so for each of the two I deleted).  I did delete \$AVG from all hard drive volumes, and both old and new AVG installations were to a non-default path in C:\Program Files.

Also, before uninstalling AVG originally, I cleared the Virus Vault and logs via the AVG UI, and during the uninstall thereafter, checked Yes to clear the Virus Vault, but not User Settings.  The latter are probably held in the small Application Data locations in the per-user subtrees, where I did not delete anything.  I also (heh, lots of “also”, all of which should be declared in case the bug hinges on them) deselect the “security toolbar”, opt out of sending info to AVG, delete unwanted desktop shortcuts and relocate Start Menu shortcuts to a different folder within the same All Users, Programs.

So, steps to avoid this issue:

  • shutdown and restart after running the AVG cleanup tool
  • check HKLM..RunOnce is clear (see the command below)
  • then do the fresh install of AVG
  • shutdown and restart Windows
  • check HKLM..RunOnce is clear
  • check AVG is present and running
  • re-check AVG is present and running after further startups

Some of the re-checking should be redundant, but I’m reluctant to turn my back on it again!
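To do the RunOnce checks listed above, an elevated command prompt is enough; anything AVG-related listed here is a pending uninstall waiting for the next restart:

  reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce"

You can remove a stale entry with reg delete and the offending value’s name, though simply restarting Windows (so the entry fires before the new install) is the cleaner fix.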

Perhaps the time has come to use Microsoft’s free antivirus instead.  I’m less keen on Avira (the Rescue CD is prone to false-positives) or Avast, but they’ll have their proponents too.  I’ve used and supported AVG for years, but am getting fed up with frequent large new versions, unwanted Do Not Track and “PC Tuneup” stuff (including the especially-unwanted “registry cleaner”) and the way recent versions push users into using these unwanted items.

20 November 2011

C-Net’s Downloader Pollutes “My Documents”

You may have noticed C-Net have started using a downloader stub when you download software from them.  The stub adds no value that I can see (it claims to be “more secure”) and tries to push unwanted bycatch, such as browser toolbars etc.

So far, so normal and nasty, but there’s something else that makes this totally unacceptable IMO – it changes the location where your download is saved, ignoring your browser’s settings and providing no UI to see where this is beforehand, or change it.

And where does it save the download?  The Windows Vista/7 Downloads shell folder?  No; in “My Documents”.  So now you have infectable incoming code dumped into your “data” set, where it will pollute your data backups too.

I was wondering why I was seeing so much Incredimail, Babylon Toolbar and other junk on client systems – now I know why.  What I now know is that I must also clear these code downloads from the data set.

4 October 2011

SkipRearm Setting for SysPrep Failure

Here’s how it goes; you have an un-activated Vista or Windows 7 reference system ready for SysPrep and .WIM harvesting, but SysPrep fails.  You search, and find articles that mutter about adding a “SkipRearm” setting to an “answer file”, but get stuck there if you don’t know how to apply an answer file.

Fortunately, there’s a simpler fix that I found and tested for Windows 7, and it works.  For Vista (which I didn’t test)…

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL\SkipRearm = 1

…and for Windows 7 (as tested OK):

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform\SkipRearm = 1
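A quick way to apply the Windows 7 setting from an elevated command prompt (a sketch of the same registry edit as above, not a different method):

  reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform" /v SkipRearm /t REG_DWORD /d 1 /f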

The nice thing is, you don’t have to fiddle with “answer files”, or .WIM mounting and manipulation in WAIK.

This fits with the simplistic way I use .WIM imaging; I use only WinPE 3.0, ImageX, and the GimageX GUI wrapper for convenience.  My WinPE 3.0 is standard other than the addition of GimageX and ImageX, and a setting to prevent the WinPE boot from falling through to boot the hard drive if no key is pressed (sorry, no link for that).

When building a system, I partition via BING, format the prospective C: to NTFS via WinPE, then apply the .WIM, so I have a baseline installation that when booted, will resume Windows Setup as part of what SysPrep did prior to the creation of the .WIM image.  I do the first boot OFFline, and kill the duhfault setting to automatically activate Windows. 

Then I update and install free software to taste, until the new PC is generically fully set up.  I use BING to image the C: partition for safekeeping (in case SysPrep screws up), then run SysPrep and Generalize the new PC.  I then boot WinPE to capture C: as a new and updated .WIM, then I boot BING to restore the partition to the state before SysPrep was run.  At this point I can apply client-specific changes, activate Windows, and ship the new PC.
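The apply step from WinPE is the mirror of the capture; a sketch with an assumed image path and image index 1 (GimageX wraps the same operation behind its GUI):

  imagex /apply E:\images\base.wim 1 C: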

SysPrep does not maintain undoability, and tends to screw up.  When it does, you can be left with no bootable reference system and no usable new .WIM, so I again stress the need to image-backup C: before SysPrep.  If you’ve done that, you may prefer to restore that image rather than wade through and clean up after SysPrep’s effects.

Key safety

One of the things you want to avoid when working with what you hope to harvest as a reference .WIM, is inadvertently activating the build, especially with the wrong product key:

  • Disable the “automatically activate” setting
  • Keep new PC offline from build until first backup image of C:
  • Re-check “automatically activate” setting before going online
  • Do the “image backup, SysPrep, restore C:” sandwich
  • Check the current key before activating
  • Activate before shipping as new PC

When I tested SysPrep with SkipRearm, I did not enter a product key when prompted, and used Nirsoft’s Produkey tool to check the key.  This showed a key other than that of the client, so SysPrep had stripped that OK, and presumably fallen back to some previous or fake key.  When I restored the pre-SysPrep BING partition image as C:, this showed the expected client’s key, as I’d entered when originally starting the build from the previous .WIM

Final tip; if/as SkipRearm doesn’t reset the full grace period for activation, you may want to minimize the days spent between restoring the previous .WIM and capturing the next one.

1 October 2011

Mint/Linux, Sandy Bridge, Blank-screen Intel Graphics

Mint 11.4 64-bit installs on Cold Lake H67 Sandy Bridge motherboard PC OK, hard drive boots to grub2 OK, but Mint boots to a black screen with no mouse pointer.  The OS is running OK, you just can’t see anything – or almost; a close look at the screen shows a pixel-flickering red line down the left edge and a solid short horizontal white line top left, when a 17” CRT is used. 

The problem appears with LCD screens also, and is variable; some boots may be OK.  At present, all 3 of 3 new PC builds have done this on first Mint hard drive boot.

The fix: Plug another monitor into the other graphics socket.  The desktop will immediately appear on both screens; you can then unplug the second screen and it will work OK (at least for that booted session).

Cold Lake motherboards use the H67 chipset, which in turn interfaces the Intel GPU built into the processor to the dual DVI and HDMI display outputs.  Typically I use a DVI to VGA adapter to take the DVI’s analog signal to a 17” CRT or more modern LCD, leaving the HDMI unplugged, though from memory I recall similar mileage when a digital-signal LCD was plugged into DVI, or into HDMI via an HDMI to DVI “pigtail” adapter.

It seems like Linux can’t figure out what display signal to use?

If previous Ubuntu mileage is anything to go by, the “fix” is prolly to wait until the next OS release in the hopes it will use a newer Linux kernel that has a clue about “new” hardware.  Bah!  Still, that should be due later this month, so let’s see how it goes.

11 October 2010

Robot Drivers and Driving Tests

If we require training and licensing of humans to fly aircraft and cars, then how do those standards apply to software that pilots these things for us?


A key is situational awareness. You wouldn't easily give a blind pilot a flying license, yet effectively that is what Airbus tried to do with an engine control system that "landed" a test plane in a forest. It's one thing having a robot hospital cart that negotiates around people's ankles at walking speed, quite another to do that at road speeds, or while attempting to keep an airliner within its flight envelope.


Perhaps the human pilot or driver is expected to remain in control, on standby and ready to override the robotics? Good luck with that, as attention wanders and distractions take the foreground in the human's mind.


I see Google's already had cars driven by software logic on public roads. I wonder what the traffic cops would have to say about that?

7 September 2010

Driver Cure or Driver Curse?

If you'd just dropped into the PC world last week, you'd think all software was perishable and had to be continuously refreshed. Must always have the latest version BIOS, drivers, etc.


This attitude runs counter to an older wisdom, that the first question when something goes wrong, is: "What changed?" With this in mind, the last thing you want is vendor-driven changes to your code base; in fact, for a critical working system, you want no changes at all.


The logic behind all this is contradictory...



  • Software vendors make mistakes, requiring software repairs ("patches" or "updates")

  • This happens so often, you may not be able to keep up with the constant flow of updates

  • So it's best to let the software vendor push updates whenever they see fit


This boils down to: Trust software vendors to push changes into your code, because they fail that trust so often you can't keep up with the pace of quality repair required.


So, should you always patch, or never patch? Or sometimes patch? If "sometimes", then what basis do you use to decide what needs patching?


Balancing risks


Some code is so critical, you may consider it too risky to change, e.g. BIOS and device firmware, device drivers, core OS code, and code that is running all the time and can crash the PC if it goes wrong.


Some code is so exposed to arbitrary unsolicited material, you may consider it too risky to leave unpatched, for fear that malware may exploit defects in the code to attack your PC.


Code should never fall into both of the above categories; if it does, you're probably looking at really bad software design. For example, integrating a web browser so deeply into the system that it's indivisible from the system's own UI, would be a bad design decision. Or consider a service so critical to the system's internal functioning that the OS shuts down the whole PC every time the service fails, that is waved at the Internet on the basis it's "networking"; that would be a really bad decision (Lovesan vs. RPC, remember?).


Trust me, I'm a software vendor


The two reasons not to trust a software vendor are incompetence and perfidy. A vendor who claims you "must" leave your system open to a constant stream of fixes has declared themselves incapable of writing code that can be trusted to work properly.


And frankly, when even "legit" vendors hide deliberately user-hostile code within their products, set to automatically deny you service if its logic considers your license state invalid (product activation), or distribute rootkits within "audio CDs" (Sony), I'd not trust any vendor's ethics.


Finally, even if you trust the vendor's ethics, you have to look at the mechanics of code distribution. Fakeware abounds, so when a third party claims to serve you fresh code from the vendors you trust, you have to ask yourself how trustworthy that third party is.


You also have to ask why you'd trust a particular software package. Open source advocates would say it's because you can read the source code yourself, or at least feel safer in that others have done this on your behalf. Closed source advocates would say it is unrealistic to read source code yourself, and instead would point to pre-deployment testing that would pick up unwanted behavior before the code was used in the real world.


Patches and updates change both of these equations, because now the code you read and/or tested is no longer the actual code that is running. Any patch may add unwanted behaviors that favor whoever pushed the patch into your system. For the same reason, you should avoid software that stores "your" settings on the server side rather than on your PC (e.g. Real Player, many Instant Messaging apps), and "Privacy Policies" and End User License "Agreements" that state "these terms can be changed whenever we see fit", as so many do.


The race to patch


There's a race between freshly-released malware and the antivirus scanner that protects your system. When a new malware is found, the antivirus vendor analyses the code to work out how to detect it, then how to safely remove the code, then that logic is packaged as an update that your PC's scanner pulls to update your protection.


Compare this to what happens when a new vulnerability is patched. The malware coders can compare pre- and post-patched code to isolate the fix, then work out what the unfixed code did wrong, and thus how to attack that code. The exploit code is then packaged into malware prepared earlier, such as a downloader stub, and that's pushed into the wild.


Notice the similarities between these processes, i.e. recognizing and removing malware compared to extracting and exploiting code defects from studying patches?


If you rely on resident antivirus to protect you, then you are betting on the av vendor to beat the malware in the race. By the same token, you may expect malware coders to be fast enough to exploit your edge-facing code before the patch arrives to fix the defect. Hence the manic rush to patch, for fear of prompt exploit.


It's actually a bit worse than this, for two reasons. Firstly, sometimes it's the malware folks who find and exploit defects before the code vendor learns about these and fixes them. Secondly, software vendors have to ensure their patches don't break any systems, whereas a malware coder just wants it to work enough of the time to spread, and doesn't care if it breaks other systems in the process. Less rigorous testing means "faster to market", right?


Self-spreading malware can also spread faster from more systems, and thus beat the patching or updating processes to the punch. Malware can be delivered in real-time via pure network worms, or links to servers that are themselves updated in real time. Often the malware that enters the system is just a downloader stub; it only has to last long enough to pull down the "real" malware, which can replace itself in real time as well.


Edge-facing software


With all this in mind, you can see why one would want to patch edge-facing software as soon as possible. Examples include web browsers, Java, Acrobat Reader, Flash and media players, and anything that is constantly exposed to the outside world, such as software that waits for instant messages or "phone" calls.
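

If you're not sure what counts as edge-facing on a given PC, a rough way to see what is sitting there listening for unsolicited traffic is netstat; on XP SP2 and later, from an administrator command prompt, something like:

  netstat -ano     (lists listening ports and connections, with the owning process ID)
  netstat -anob    (adds the executable behind each port; needs admin rights and runs slower)

Treat this as a quick sketch rather than an audit; anything that shows up listening on an open port, or that you know polls a server for messages or "calls", belongs in this category.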


The best solution is to remove that edge-facing software, and thus the need to patch it. Do this whenever you don't need that software, when the software or its vendor is too flaky to trust, or when the update process itself is something you want to avoid.


For example, you may catch a vendor trying to shove new edge-facing software as "updates", even when that software is not present and therefore doesn't require patching. That's how Apple used to push Safari to PCs running iTunes or QuickTime, until they were pressurized to stop.


For another example, a vendor may decide you don't need to be asked before updates are pushed, or even told when this has happened. And when you look at that vendor's updater, you find it running as multiple scheduled tasks; then when you look at the details, you find that a task which appears to run once a day is actually re-run every hour throughout the day. That's the equation with Google, and why I would avoid any edge-facing Google software.
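

You can check this on your own PC; the following is a rough sketch (the updater's task names and the exact field labels are from memory, so adjust as needed). Dump the full task list, then search it for the updater's entries and read the Schedule and "Repeat: Every" fields:

  schtasks /query /fo LIST /v > tasks.txt     (verbose dump of all scheduled tasks to a text file)
  notepad tasks.txt                           (search for GoogleUpdate and read its schedule details)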


If you can't avoid edge-facing software, then you can protect yourself in two ways; by updating it as soon as possible, and/or by choosing such obscure, small-market-share products that they aren't likely to be attacked. The latter is like living in an unlocked shack in the countryside; that works not because shacks are "so secure", but because there are so few attackers around.


Driver Cure or Driver Curse?


So now we come to Driver Cure, which is a third party product that pulls in the latest versions of your device drivers. Would you want this? I'd say no, for three reasons.


Firstly, device drivers are code that runs so "deep" in the system, that any mistakes are very likely to crash the entire OS, leaving the file system corrupted, data files unsaved, etc. Device drivers usually run all the time, so bad code may prevent the system from being able to boot or run at all, even in Safe Mode. So I definitely don't want unexpected changes to this code, any of which may cause the system to stop working.


Secondly, device drivers are not edge-facing, so the risk of exposure to exploits should not be high. That means less reason to patch in haste.


Thirdly, if malware were to be integrated into the system as deeply as a device driver, it would have considerable power and be very hard to remove. So we'd want to know a lot more about third party software that inserts "drivers" into the system.
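

At the very least, take a baseline before letting any third party touch this layer, so you can answer "What changed?" afterwards. On XP Professional and later, something like this should do (a sketch; check driverquery /? for the options your version supports):

  driverquery /v /fo list > drivers-before.txt     (dump installed drivers, with details, to a text file)
  driverquery /si > drivers-signed.txt             (list which drivers are digitally signed)

Re-run the same dumps after any "driver update" pass and compare the files to see exactly what was replaced.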


The "Driver Cure" folks also push XoftSpy, which was one of several hundred fake anti-spyware scanners, until they supposedly "went legit". As such, sites and blogs may no longer call XoftSpy "malware" for fear of being sued; we may instead consider it as a legit antispyware that isn't very good at what it does, and costs money where better products are free.


So, in spite of "reviews" like these, I would avoid Driver Cure and anything else from that particular software vendor or distributor.