16 July 2013

Hard Drive to .VHD

Let’s say you have an XP PC that died, and you want to run that installation in a virtual machine (VM).  The first step will be to harvest that installation into a virtual hard drive.  Different virtualization host software supports different file types for virtual hard drives, but I’ll be using the .VHD standard, which should work in Virtual PC, VirtualBox and VMware Player.

Preparing the physical hard drive

The hard drive’s from a failed PC, so step one is to be sure of the hardware and file system.  The drive is taken out of the dead PC and dropped into a known-good PC that is set to boot off safe maintenance OSs (mOS) and tools such as Bart, WinPE, Sardu, BootIt Next Generation, etc.  Do not allow the hard drive to boot!  You will also want to connect another hard drive with enough space to swallow what you will harvest, and make both connections using internal S-ATA or IDE rather than USB, for speed and SMART access.  FAT32 limits the size of a file to 4G, so if you are using Disk2VHD or capturing a .WIM image, your target drive’s file system should be NTFS.

First, I boot my Bart DVD and from there, launch HD Tune, and look at the SMART details.  You can use any SMART tool you like, as long as it shows the raw Data column for these four attributes: Reallocated Sectors, Reallocation Events, Pending Sectors and Offline Uncorrectable sectors.  For all of these, the raw data should be zero; if any of them are not, evacuate the stricken drive first as files, then as a partition image, before doing anything else, including surface scans and other diagnostics.  Don’t trust SMART tools that only show an “OK” or “Fail” status; that’s next to useless.

If the hard drive is physically OK, then check and fix the file system from a compatible mOS.  If the volume to be salvaged is NTFS, then the version of mOS should be the same as or newer than the OS installed on the drive.  So you can use WinPE 4.0 for any Windows, WinPE 3.0 for Windows 7 and older, WinPE 2.0 for Vista and older, and as we’re after XP in this case, we can use any of those plus Bart, which is built from the XP code base.  Run ChkDsk /F and capture the results, either via command line redirection (you may have to press Y if nothing happens, in case there are “Do you want to…?” prompts you can’t see) or by capturing text from the command window.
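
For example, a minimal sketch of capturing the ChkDsk results via redirection; the drive letters are purely illustrative, and depend on how your mOS has lettered the salvaged volume and the target drive:

    rem D: = volume being salvaged, E: = target drive for the log
    chkdsk D: /F > E:\chkdsk-D.txt 2>&1

If the command appears to hang, it is probably waiting at one of those hidden “Do you want to…?” prompts; pressing Y and Enter blindly usually unsticks it.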

Next, you will want to retrieve the Windows product key from the installation, in case that has to be re-entered to activate Windows when booted in the different (virtualized) PC.  I use Bart as the mOS for that, along with two free add-ons.  The first is the RunScanner plugin for Bart, which “wraps” the inactive installation’s registry hives as if that installation had booted these into action, so that registry-aware tools will see these as native (unfortunately, there’s no equivalent to Runscanner for Vista and later).  The second is ProduKey from Nirsoft, which reads the keys you need.

The final preparation step is to zero out free space so that these sectors are not included by certain types of harvesting process, as they will bloat up the size of the .vhd you will eventually create.  You can download SDelete and use that from Bart; the –c –z options will create a file full of zeros to fill the free space, and then delete the file.
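
A minimal sketch of that step, with the drive letter illustrative; different SDelete versions have documented the -c and -z flags slightly differently, so check sdelete /? on the copy you downloaded:

    rem zero out free space on the salvaged volume D: before imaging it
    sdelete -z D: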

Harvesting as loose files

In Windows 9x, if you copied every file to a new hard drive and created the correct MBR and PBR contents, Windows would boot and run just fine.  This is no longer true for NT-based OSs such as XP, but you may still want access to loose files, and if the drive is failing and dies soon, that may be all you get - and more useful than a partial partition or installation image.

I use Bart as the mOS for this, finding it easier to work from a GUI shell than command line.  I have Windows Directory Statistics integrated into my Bart, and use that to compare file counts to be sure I haven’t left anything out.

Harvesting as partition image

There are various partition managers that can boot off CD, and the one I’ve been using is BING (Boot It Next Generation).  If you use BING in this way, it may show an installation dialog when it starts; cancel that, as you don’t want to install it as a boot manager.  Then go into partition maintenance mode and work from there.

You can also boot BING within the virtual machine, from physical disc or captured .ISO, as long as the VM’s “BIOS” is set to boot optical before hard drive.  BING can be used in this way to resize partitions within .VHD, but cannot change the size of the “physical hard drive” as seen within the VM; use VHDResizer for that, from the host OS.

BING can also save a partition or volume image as a set of files, of a maximum size that you can select to best fit CDRs, FAT32 file system limitations, etc.; that’s quite an advantage over .WIM and .VHD images and the tools that create them.

When in BING, it’s a good idea to display the partition table, and take a picture of that via digital camera, in case there are any “surprises” there that kick in when attempting to boot the virtual machine later.

I can also use DriveImage XML as plugged into Bart, as a tool to create and restore partition and volume images; unlike BING, you can browse and extract files from within the image it creates.  There are probably other backup tools that can do the same, but make sure they have tools to work from bare metal, and that these tools work within virtual machines, as Bart and BING can do.

Harvesting as installation image

You can use Microsoft’s imaging tools to create a .WIM: a file-based partition image with the OS-specific smarts to leave out the page file, hibernation file, System Volume Information, etc.  Because in this case the original PC is dead, we don’t have the option to generalize the installation via SysPrep; a relief in a way, as SysPrep is so rough you’d want some other full backup before using it.

Access to these imaging tools was difficult at best in XP and older versions of Windows; you had to be in corporate IT or a large OEM to get them legitimately.  You can now download what you need from Microsoft for free, though it’s a large download, and in any case the full OS installation disc can now boot to a command line and function as a WinPE.

In Vista and 7, you’d use a command line tool called ImageX to harvest (capture) to .WIM and apply .WIM to new hard drives, and you can add a free 3rd-party tool called GImageX for a less mistake-prone UI.  In Windows 8, you’d use the DISM command instead; I’ve not sought or found a GUI front-end for that, so I’m using batch files to remember the syntax involved.
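
A sketch of the syntax involved; the drive letters, paths and image name are illustrative:

    rem WinPE 3.x with ImageX: capture the salvaged volume D: into a .WIM on E:
    imagex /capture D: E:\images\xp.wim "XP harvest" /compress maximum
    rem ...and apply image 1 from that .WIM onto a new, formatted C:
    imagex /apply E:\images\xp.wim 1 C:

    rem WinPE 4.x / Windows 8: the DISM equivalents
    dism /Capture-Image /ImageFile:E:\images\xp.wim /CaptureDir:D:\ /Name:"XP harvest" /Compress:max
    dism /Apply-Image /ImageFile:E:\images\xp.wim /Index:1 /ApplyDir:C:\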

Harvesting directly to .VHD

There are said to be many tools for this, but I’ve only found one: Disk2VHD.  There’s also something called P2V, but it appears to be part of a costly software product aimed at the corporate IT sector, and may apply more (only?) to Microsoft’s Hyper-V virtualization technologies.

Disk2VHD boasts the ability to image partitions that are in use by the running OS, via that OS’s shadow copy engine.  Unfortunately, that is the only way it can work – so it will not run from Bart or WinPE.  You are obliged to boot a hard drive based Windows to host the tool, exposing the hard drive to be harvested to whatever that OS may do. 

That’s too risky for at-risk hard drives, as Windows tends to fiddle with every hard drive volume it can see.  WinME and XP are the worst offenders, as they enable the System Restore engine on every hard drive volume detected and immediately start writing to those file systems.  At least Vista, 7 and 8 don’t do that!

It’s important to remember that Disk2VHD captures entire hard drives, not just partitions or volumes, even though the UI implies the latter by allowing selection of partitions and volumes to be included.  For example, if you have a 64G C: on a 500G hard drive and you deselect all volumes other than C:, you will create a virtual hard drive 500G in size with a 64G C: partition on it, the rest being left empty.  You may have hoped for a 64G drive filled with C: but that is not what you’ll get.

Size matters

Guess what the sizes of these various harvestings will be, compared to the original drive?  Then check out the results of doing this for real, for a mature in-use XP installation with shell folders relocated out of C:

  • 500G - capacity of original hard drive
  • 30G - size of original NTFS C: partition
  • 12.0G - size of files harvested
  • 13.5G - size of files harvested, as occupied space on FAT32
  • 7.48G - size of BING image file set
  • 4.69G - size of .WIM image created from WinPE 3.0 and ImageX
  • 12.6G - size of .VHD file as created by Disk2VHD after zeroing preparation
  • 10.8G - as seen within .VHD via Bart boot within the VM

Note that .VHDs are ignorant of the file system and OS within; this is why it’s inappropriate to blame the tool’s creators when a harvested installation fails to boot within a VM.  A significant effect of this is that any sectors containing anything other than zeros will be included as explicit blocks within the dynamic and differencing types of .VHD, which would otherwise have saved host space by leaving out empty sectors.  The .VHD manages space in large blocks, so this effect is made worse; if any sector in a block is non-zero, the whole block is added to the .VHD
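
For a sense of scale, assuming the usual 2M block size of dynamic .VHDs: a single stray non-zero sector forces a whole 2M block to exist, so a thousand such sectors scattered across the disk (about half a megabyte of real data) can add roughly 2G to the .VHD file.  Hence the zero-fill preparation described earlier.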

Of these, the .WIM is the most compact (I capture using the strongest compression offered); then the BING image file set.  After that, things are pretty much as you’d expect, though even with the zero-fill optimizing preparation done, and before the .VHD has been used, the (dynamic) .VHD file is already significantly larger than the files it contains.

Creating a new .VHD

If you used the Disk2VHD tool, you already have your .VHD populated – but it may not be a physical size (as seen from within the VM) that you’d like.  In theory, if the partition size is limited, the “physical” space outside that should never be written to, should thus never contain anything other than zeros, and should thus never add size to any dynamic or differencing .VHD file on the host.  In practice, you may prefer to constrain the physical size of the virtual hard drive, especially if choosing the fixed type of .VHD that always contains every sector as-is, regardless of content.

When creating a new .VHD you set the capacity of the hard drive it pretends to be, and whether the .VHD will be of fixed or dynamic type.  The fixed type is the .VHD equivalent of a fixed-size pagefile; if it’s created as an unfragmented file, it should perform better than one that grows in fragments over time.

Either way, your host volume should have enough free space to contain the full size of the .VHD’s internal capacity, or at least that of all partitions and volumes within the .VHD you intend to ever use.
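
If you are creating the .VHD by hand on a Windows 7 or later host, rather than through your VM software’s own UI, DiskPart can do it; a minimal sketch, with the path and capacity purely illustrative:

    rem vhd-create.txt - run from an elevated prompt with: diskpart /s vhd-create.txt
    rem capacity is in MB; use type=fixed for the fixed (pagefile-like) type
    create vdisk file="D:\VMs\XP.vhd" maximum=30720 type=expandable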

You can also layer .VHD over each other, with a fixed or dynamic .VHD as the base.  Each layer above that will be a differencing .VHD, valid only as long as the lower layers do not independently change.  Both differencing and dynamic .VHD use the same storage model, which is like a “super-FAT” chain of large blocks that explicitly exist only if any contents within have changed, relative to the layer below.  Under the base layer is assumed to be all zeros, so a block that has never contained anything other than zeros, need not explicitly exist in any differencing or dynamic .VHD

That means every .VHD layer may grow as large as the size of all partitions and volumes within it; host free space should be available accordingly.

Because changes to an underlying .VHD layer will invalidate all layers above, they are generally used in two ways.  You can have a base image that is kept read-only so that it can never change, and this can be the “installation image” over which multiple VMs can run, each with their own changing differencing .VHD; this is how XP Mode is set up.  You can also use a differencing .VHD above that, which is considered disposable until it is merged with the .VHD below; you may use that for guest accounts, malware investigation and other testing, kiosk use etc.  Virtual PC may use this as “undo” disks, though these use the .VUD rather than .VHD file type.
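
If you are building such a layered set by hand, DiskPart on a Windows 7 or later host can create the differencing layer too; a sketch with illustrative paths (remember the parent must not change afterwards, or the child becomes invalid):

    rem create a differencing .VHD whose parent is the read-only base image
    create vdisk file="D:\VMs\XP-diff.vhd" parent="D:\VMs\XP.vhd"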

To merge .VHDs, you need enough free disk space on the host for the output .VHD, which could be as large as the .VHDs being merged.  To compact a .VHD (i.e. discard any blocks full of zeros in a dynamic .VHD) you need enough free host disk space to create the new .VHD; these considerations make .VHDs costly in terms of disk space, and of hard drive head travel to partitions beyond where they are stored.
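
Both operations are available from DiskPart on a Windows 7 or later host; a sketch with illustrative paths (compacting wants the .VHD detached, or attached read-only):

    rem merge a differencing .VHD one level down, into its parent
    select vdisk file="D:\VMs\XP-diff.vhd"
    merge vdisk depth=1

    rem compact a dynamic .VHD, discarding blocks that contain only zeros
    select vdisk file="D:\VMs\XP.vhd"
    compact vdisk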

Populating the new .VHD

If you used partition imaging tools like BING or DriveImage XML, or installation imaging tools like ImageX from WinPE, you will need to write these images to the new .VHD you created above.  You may also need to move contents from one .VHD to another, if you need to change the base .VHD type, or don’t want to use VHDResizer to change the size of the physical hard drive contained within the .VHD

One way to do this is by using these “real” bootable discs within the virtual machine, either by booting the VM from the physical disc, or by capturing the relevant .ISO and booting the VM into that.  If you can’t “see” host hard drive volumes within the VM, then the materials should be copied into another .VHD that is attached as an additional hard drive before starting the VM session.  You can do that in suitable host OSs (e.g. Windows 7) by mounting the .VHD for use as if it were a native hard drive volume; otherwise you may need a VM that can see outside via network shares etc.
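
For the Windows 7 mounting route, a DiskPart sketch (the path is illustrative; Disk Management’s “Attach VHD” action does the same through a GUI):

    rem attach the .VHD so its volumes get drive letters on the host
    select vdisk file="D:\VMs\Transfer.vhd"
    attach vdisk

    rem ...copy the images or other material onto it, then detach before starting the VM
    select vdisk file="D:\VMs\Transfer.vhd"
    detach vdisk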

4 July 2013

Build One Skill in 2013? Virtualization

My life’s been fuller for the last five years and will hopefully continue to be, so I generally don’t devote such huge amounts of time to particular interests.  Still, whatever time I do have to build new skills, I’ll probably spend on virtualization, i.e. running one OS within another.  That competes with Windows 8 for attention; I’ve slain the closest crop of Windows 8 dragons, but that’s another story!

I have three “egg-eating snake” client jobs at the moment, i.e. jobs that can’t be fully gulped down in one session.  If they’re reading this, they will recognize themselves; common to all three cases is (or may be) virtualization.

Client A had an XP PC that failed at the motherboard-and-processor level, with several installed applications that couldn’t be reinstalled on a modern Windows PC for one reason or another.  The hard drive was dropped into a tested-good used PC of similar vintage, various dragons were slain etc. until all was working, but delivery was postponed while a courtesy laptop was in use.  During this time, the PC became slightly flaky, then increasingly so; hardware tests were initially OK, but over several months, it has deteriorated to the point it is clearly unfit for use.

By now, a large amount of work has gone into the system; how can one preserve that value?  I could look for another set of XP-era hardware to host the hard drive installation, but it may be better to harvest that into a .vhd and run it virtualized within a modern PC.  Once in a .vhd, it should survive being ported from one PC to another, as long as the same virtualization solution is used to host the .vhd – but will the installation survive the conversion, and what should the host be?

Client B is already happily using XP Mode on Windows 7 to keep old software alive, but a crisis is looming because the .vhd is growing in size to fill the host’s C: space.  XP Mode is built by combining a pre-defined read-only .vhd with a differencing .vhd; these combine to produce what XP Mode sees as its “C: drive”. 

Within XP Mode, this virtual “C: drive” is 126G in size, with only 5G or so used.  But outside in the host OS, while the parent .vhd is a comfortable 1.1G in size, the differencing .vhd is an absurd 19G+, leaving under 1G free space.

Management of this issue is constrained by .vhd decisions cast in stone by XP Mode, and it’s not clear whether the installation will survive changes to the .vhd (e.g. merging the parent and child into a non-differencing .vhd, transferring contents to a fresh smaller and/or fixed-size .vhd, etc.).  It’s also unclear whether there’s any predictable maximum size limit to this .vhd bloat, and thus whether a one-off enlargement of the host C: partition (at the expense of extra head travel to other volumes) will permanently fix the problem.

Client C has a new laptop with a geographically-broken warranty/support trail, and an edition of Windows 7 installed that does not match the OEM sticker.  After a failing hard drive was replaced, Windows demands to be activated, and this fails with both the placeholder key used within the installation and the one from the OEM sticker.

So he has an “upgrade opportunity” to choose (and alas, buy) whatever version and edition of Windows he likes, and this choice is complicated by the need to run an important “needs XP” application that hasn’t yet been tested within XP Mode or other virtualization.  Which virtualization host should he use?  That choice affects that of the OS; Windows 7 Pro for XP Mode (the solution in which I have the most experience), Windows 8 Pro for Client Hyper-V (may be faster, may integrate less well, needs XP license) or client’s choice of cheaper 7 or 8 if using VirtualBox or VMware Player (both of which will also need an XP license).  Where are such licenses going to (legally) come from, in 2013 onwards?

Common to all three clients, is a need for virtualization skills.  I need to be able to convert from “real” installations in hard drive partitions to .vhds, get these working in at least one free virtualization host, and be able to manage file size and other issues going forward.  XP Mode integrates well and includes the XP license, but dies with Windows 7 and needs the costlier Pro edition; it may be better to abandon that in favor of VirtualBox or VMware Player, which aren’t chained to particular editions and/or versions of Windows.  If those also work from Linux, seamlessly hosting the same .vhd installations, then that would be a deal-clincher; I could skip bothering with Windows 8’s Client Hyper-V altogether.

There are more details (and especially, links) on these scenarios in this recent post.

Why I (Don’t) Blog

If you find yourself in the situation where you have to present a non-trivial amount of information to people, success may depend on the method of communication you choose to use.

Face to face

Rich impression, poor retention, and very demanding of resources!  When someone’s taken the trouble to be physically present, you should put them first, above lazier contacts such as phone calls, and neither of you will get any other interactive work done during your meeting.

So if there’s a non-trivial amount of content to deliver, rather send that via fax or email ahead of time, so that the meeting can be pure live interaction rather than “death by PowerPoint”.  Make sure you allow enough time for this pre-meeting information to be processed; in the days before email, I’d budget a day per fax page.  Need folks to read a 5-page fax before a meeting?  Send it at least 5 days before.  Not enough time?  Edit the content down to fewer pages.

Telephone

The pits; in my opinion, the “voice telephone” model of paid-for real-time voice-only calls should have died in the 20th century, along with telegraph Morse code.  Very intrusive real-time demands, no content logging or retention for either party, and worse communication effectiveness than face to face (clothespeg-on-the-nose sound due to poor frequency response, no non-verbal gesture/expression cues, etc.).  And then there’s “telephone arm”.  What was the other idea?

I’d also refuse to do any significant interactions over the phone, especially with larger entities that “record calls for quality purposes”.  If things escalate to a courtroom appearance several months down the line, guess who’s going to look like the unreliable witness?

Skype

Ahhh yes, now we’re getting somewhere.  No cost (at least if you’re on ADSL rather than some ghastly rapaciously-priced mobile Internet access), text chats are logged, files can be sent and links or text pasted in and out, and there’s sound and video available too.  It’s real time, but “sticky”; a pending chat is more likely to grab your attention when returning to your system than an email would, and can be continued at any time.

Email

Best of the lot; excellent logging and reusability, the best way to send content with the lowest risk of “glaze over” (which pretty much kills phone and face-to-face for anything substantial).

Blog

If you find yourself having to say the same thing to different folks again and again, then it’s better to blog it and say “yonder post refers”.  That is why I started blogging, after getting that “stepping on ants one by one” feeling when I was active in usenet technical news groups.

But I find it much harder to write when I don’t have a specific audience in mind, and that is why I blog so seldom.  In fact, many of my blog posts are generalizations of content originally written for a specific person.

Living With(out) XP

Microsoft support for Windows XP SP3 (the last supported SP level for XP) is due to end April 2014.  By “support”, I don’t mean the help line center, but the testing and repair of exploitable code defects, as well as perhaps technical assistance to software and hardware developers.  Articles on vulnerabilities won’t state XP may be vulnerable; they will simply ignore XP as if it never existed. 

New or revised hardware will likely not have drivers for XP, and new software will start to exclude XP from the list of supported OS versions.  This is likely to be more severe than it was for Windows 2000, because there’s a far bigger difference between the 2000/XP and Vista/7/8 code bases than there is between 2000 and XP, or within the Vista to 8 spectrum.

The trouble is, many of us still have software that “needs XP”; that gulf between the 2000/XP and Vista/7/8 code bases makes it far harder for XP-era software to work on newer Windows versions, even as these versions attempt to support such limitations in various ways.  Some of this software may have hardware dependencies that can no longer be met, or delve “too deep” into OS territory for virtualization to manage; examples include licensing protection code, close relationship with hardware (e.g. DVD burning software), and deep-system stuff like diagnostics, data recovery or anti-malware activity.

There are different ways to accommodate old software:

0)  Keep (or find) an old system

If you already have an XP PC running what you need to use, then you could keep that – as long as it lasts.  Monitor the hardware’s health, e.g. by keeping an eye on the SMART attributes described earlier.

If the PC fails, you can try to repair or replace it, while preserving the existing installation.  Back that up first, as the first boot may well fail in various ways: BSoD STOP crash, lockup, demands to activate Windows and other vendor feeware, a protracted (and possibly doomed) search for new drivers, etc.  If you can get the hard drive installation working within tested-good hardware, that’s probably the best result from a compatibility perspective.  It may be your only choice, if you lack installation discs, pre-install material, and/or product keys etc. for your crucial “needs XP” software.

XP-era kit is quite old now, and reliability is becoming a problem – like buying a used car with hundreds of thousands of kilometers on the clock, to start a road trip across the Outback.  Digital systems are based on analog parts, and at the volts-and-microseconds level, slew times can grow with no apparent problem until they fall outside the digital yes/no detection criteria.  Metal-to-metal contact points get corroded, and you’ll often find an old PC either works fine, or not at all; maybe it “just needs this card to be wiggled a bit” to get it working again, etc.  Welcome to Creepyville, you won’t enjoy your stay.

An alternate approach may be to harvest the hard drive installation into a virtual hard drive (.vhd file) and try getting that to work as a virtualized PC; jump ahead to (4).  You’d run the same “different hardware” gauntlet as dropping the hard drive into a different PC, with added virtualization-specific issues.  So far I’ve had no success with this; it’s been a non-trivial mission to attempt, and I’ve only made the attempt once so far – but maybe better tools will help.

1)  Build a “new” old system

In other words, if an app needs Windows XP and can't run on anything later, why not simply build a "new" XP system?  That would give the most compatible result, and should run well.

In practice, XP doesn't properly embrace new hardware and/or use it effectively, degrading the value of the hardware.  There's no (native, or in some cases "any possible") support for:

  • AHCI, which reduces hard drive performance
  • 64-bit addressing, limiting memory map to 3G or so
  • USB 3.0, so you're limited to USB 2.0 performance
  • 2T+ hard drive capacity
  • Touch screens

Some of the above are not yet relevant (USB 3.0) or may never be relevant (touch screens, for which Windows 8 is designed), but others (AHCI, 4G+ memory access) already bite deep.  The 2T hard drive limit doesn’t yet matter for internal laptop hard drives, but may become relevant for shared externals.

So building a "new" XP-only PC is something of a dead end, suitable only to shrink-wrap a crucial and irreplaceable application, on the understanding that the system won't be safe for general use (any Internet access, perhaps even inward file transfers via USB etc.).

Availability of new XP licenses is an issue, given that it has long been off the market as a saleable product.  In theory, Home costs the same as the cheapest non-Starter edition of later Windows versions, and Pro the same as the corresponding editions of 7 and 8, but in practice you’d struggle to find legitimate stock.

There’s also uncertainty around the activation of Windows XP after support ends.  At the time the activation system was rolled out, we were assured that when Microsoft lost commercial interest in activated products, the activation system would be disabled so that software could be used without hassles, but it remains to be seen whether that promise is kept.  Worst-case scenarios are where new XP activations become impossible, possibly including those triggered by WGA (“Windows Genuine Advantage”), or even where existing installations are remotely disabled.

2)  Build a new system, app must "take its chances"

There's an element of "app must take its chances" involved in all solutions other than "build an old system" and dual-booting as such.  This is the most extreme case, where one simply ignores the application, builds a no-compromise modern system, and then hopes the old application can be made to work.

There are settings within Windows to treat applications as if they were running in particular older versions of Windows, but these don’t handle every case successfully.

The main point of incompatibility arises from changes introduced with Vista, which attempted to curb the rampant malware threat.  Things that applications were formerly allowed to do with impunity are no longer allowed, and some apps aren’t satisfied with fake compliance with what they demand to be able to do.  It’s like moving from an unlocked farmhouse to a gated community!

Less likely to be a problem is the change from 32-bit to 64-bit OS, as required to access over 4G of “memory” (i.e. RAM plus swap space plus allowance for non-memory use within the map).

Some software can break when the hardware is unexpectedly "big", i.e. "too much" RAM, hard drive space or processing speed, independently of the 32-bit vs. 64-bit thing.  But most application software should not have a problem with either set of issues as such, though there are some other safety changes that stealthed in during the change to 64-bit that could hurt.

3)  Build a dual system

It's possible to build and set up a PC to run either one version of Windows, or another.  Only one can be run at a time, the hardware has to be compatible with both, and each OS runs natively, as if it had the whole system to itself.

Hardware compatibility becomes something of an issue; you either stunt the new OS, or you have to manually toggle CMOS settings to match the OS you're booting so that the new OS can run at full speed in AHCI mode.

You should get as good a result as (1), but you have the same problems finding a new XP license.  Both OSs have to have their own licenses, which is costly, but the new OS can be a cheaper non-Pro edition.

There are some issues where each OS can trample on the other, if the C: partition of the inactive OS is visible.  I’ve dealt with such issues before, though not yet with Windows 8, and may have to adjust the specifics for that, and/or if the free boot tools used previously have changed version and/or availability.

In essence, I use a boot manager to hide the “C:” OS primary partition that is not booted, so that each OS can see its own primary as “C:” and doesn’t mess up System Volume Information etc. on the other OS.  All data and other shared material is on logical volumes D:, E: and F: on the extended partition, leaving one free slot in the partition table for boot manager and/or Linux, etc.

4)  Virtualize one system within the other

In practice, that means virtualize the old within the new.  The reverse is possible in theory, but will work less well as the new OS won't get the resources and performance it needs, so your "new" PC would run like a dog in "new" mode.

This is something of a gamble, because not all applications will work when the old OS is virtualized.  Anything that needs direct access to hardware, and/or access to specific hardware, is likely to fail.  That shouldn't apply here, but may, especially if the application fiddles with hardware as part of attempts to prevent unlicensed use - another facet of how feeware sucks.

The choice of parent OS becomes complicated, i.e. choosing from the options described earlier: 7 Pro for XP Mode, 8 Pro for Client Hyper-V, or a cheaper edition of either running VirtualBox or VMware Player.

In all cases other than XP Mode within 7 Pro, you would need an XP license, as you'd do for solutions (1) or (3). 

If this approach works, it gives a more convenient result than (3), because you can at least run the old and new OSs, and the apps within them, at the same time.  Interaction between the two systems may be as separate as if they were two physically separate PCs on a LAN, unless the virtualization host can hide the seams, as XP Mode may do to some extent.

I've had some experience with 7's XP Mode, but as yet none at all with 8's Client Hyper-V virtualization host.  So while 8 Pro and Hyper-V are more attractive going forward (Hyper-V is a more robust and faster technology than that of XP Mode), they are more of a jump into the dark for me at present.

3 July 2013

Space Invaders

A common hassle is software that hogs disk space on C:, and often the comment is made that “modern hard drives are so big, it doesn’t matter”.  The worst offenders are junk software (hello, Apple) that not only chews disk space, but has no settings to re-path that store to somewhere else.  This is something that the MkLink command can fix, by leveraging a feature of NTFS.
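
A minimal sketch of that fix, using an Apple updater store as a purely illustrative space hog; the folder names and target volume are assumptions, and the offending software should be closed first:

    rem move the space-hogging store off C: to a roomier volume
    robocopy "C:\ProgramData\Apple" "D:\Stores\Apple" /E /COPYALL
    rmdir /S /Q "C:\ProgramData\Apple"
    rem leave an NTFS junction behind so the software still finds its accustomed path
    mklink /J "C:\ProgramData\Apple" "D:\Stores\Apple"

MkLink ships with Vista and later; on XP itself you’d use the Sysinternals junction.exe tool instead.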

Another offender is Windows Installer, and the Windows update process, which also dump inactive files and “undo” junk in C:, with no facility to move it off.  I have not tried MkLink to address that problem.

Previously, these issues would annoy only those who don’t follow the duhfault “everything in one big doomed NTFS C:” approach to hard drive partitioning, but that is changing as SSDs mirror the practice of having a deliberately small C:, with large seldom-used material on hard drive volumes other than C:.  I’ve done this for years as a way to reduce head travel; SSDs do away with head travel, but are often no bigger than the deliberately small “engine room” C: I’d use, containing no data.  Update and Installer bloat really hurt on today’s sub-PC mobile devices, for which puny SSD capacities are all that is available.

An invaluable tool to chase down space invaders is Windows Directory Statistics (WinDirStat).  You can add a non-default “Statistics” action for Drive and Directory (File Folder) to run this, but this will misbehave when the target is the C: drive on Vista, 7 or 8; post-XP changes cause it to show System32 instead of all of C: in that particular case.
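
A sketch of adding that “Statistics” action via the shell verb keys in the registry; the install path is an assumption, and the per-user equivalent under HKCU\Software\Classes works too:

    rem add a "Statistics" context-menu verb for drives and for file folders
    reg add "HKCR\Drive\shell\Statistics\command" /ve /d "\"C:\Program Files\WinDirStat\windirstat.exe\" \"%1\"" /f
    reg add "HKCR\Directory\shell\Statistics\command" /ve /d "\"C:\Program Files\WinDirStat\windirstat.exe\" \"%1\"" /f

(If these lines go into a batch file rather than being typed at a prompt, double the % signs.)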

An Unusual Case Study

I did a site visit to troubleshoot file sharing issues on a serverless LAN of five Windows XP PCs.  All four of the “workstation” PCs in the office would show the same unusual error dialog at the point one attempted to navigate into the workgroup via the Entire Network, Microsoft Networks UI; the error referred to “insufficient storage”.

All four of the “workstation” PCs had well over 10G free on C:, but the seldom-used fifth “backup” PC in another room had zero k of free space on C:, and fixing that, fixed the problem seen everywhere else.

In this case, the problem was caused by an insanely large log file for the “security toolbar” component of AVG Free 2012.  Now I always avoid installing “toolbars”, which are (nearly?) always useless things inflicted to serve the software vendors’ interests rather than those of the user, but updates (obligatory for a resident av) may reassert them.

A lot can go wrong with log files.  They’re typically opened and closed for every write, and are written to often, so that the log survives up to the point of failure even if a crash loses pending file writes.  There may be unexpected overhead imposed each time the file is opened and closed, which can make things even more brittle, and there’s often no sanity-checking on log file size, so a crash-log-repeat loop can get really ugly, real fast.

AVG’s prone to this sort of nonsense; in addition to large wads of update material, partially held in non-obvious places (MFAData, ProgramData subtrees, etc.), it can also spawn gigabytes in Dump files.  This is the second time I’ve seen an out-of-control AVG log file taking every available byte of space, and it’s annoying when this is due to an unwanted “toolbar” component that should not even be installed.  With no FAT32-style 4G maximum file size to cap it, this “text log file” grew to 5.1G, leaving zero space free on C:, so that “touching the network” caused needs that could not be met.