28 April 2017

Windows 10: Microsoft's Update Strategies

This post is an expansion of a comment to this article, and is a big-picture look at how Microsoft is changing the version-SP-patch model through Windows 8 to 10.  This fits the usual "prototype sucks, revision rocks" tick-tock of Windows version popularity we saw with Windows 2000 to XP, Vista to 7, and 8 through to 10.  I hope we never again see a "free" upgrade offered only through the Microsoft Store as we did with Windows 8.1, which is still impossible to safely find as a complete offline installer to upgrade from the Windows 8 RTM ghetto.

The scalability problem

The reality is, there are too many code defects, too promptly exploited, for us to keep up with on a manual basis - and this mirrors what we see with malware.  Remember those detailed articles on individually-named viruses?  These days it's "Trojan.Downloader.A6FE320P"; there are too many to keep up with at that level of attention, and our defenses have changed from expert av vendor analysis to crowd-sourced reputation; more, but lower-quality, input opinions.

The process of defending against new threats, and exploiting new opportunities, is rather similar.  To defend, you compare pre- and post-infected states, accurately define the difference and code a removal tool, then roll that out as an av update.  To attack, you can compare pre- and post-patched states, work out the difference, and code to attack the unpatched state while that's still available in the wild.  You can also hold malware out of av's hands to delay defense, and you can find your own "zero day" exploit opportunities to attack, rather than the easier (automatable?) process of reverse-engineering patches.

The code quality problem isn't cured by removing commercial motivation; Linux shares the need for patches, and some horrible exploitable defects existed for years in Open Source code, despite the mythical mass of eyes to examine source.  That meant something when computer users were also computer coders, and the code was simple enough to read as source while the compiler slowly ground out the executable form that would crawl the globe from there via floppies - it fails to scale!

It's really hard to assess code for fitness for use, when emergency vendor-pushed patches change it faster than you can test.  Before, you'd read the source, or test the closed-source product, then roll it out - and from then on, the defense was "no-one changes the code except me".  That's pretty much dead in the water; you are forced to trust your software vendor, from the top of the trust stack ("do their objectives align with ours?") to the bottom ("are they competent enough to do what they intend to do without exploitable mistakes?").

So under these circumstances, I like what Microsoft is doing, as it re-defines the software update and upgrade landscape from the traditional costly/destructive versions, safer but large SPs, and sprawling mass of individual repairs.

Removing the update catch-up pain

I've just done a couple of Win10 upgrades to build 1703, and considering this is an in-place version upgrade followed by a patch catch-up, it was far simpler than the XP or 7 process of running through 7+ online passes of WU, gathering loose patches like handfuls of gravel.  Setup.exe from the Media Creation Tool file set, then the latest Cumulative, all done while still offline; very little to catch up on (thus low Internet use) online thereafter.

You do still have to do clean-up afterwards; check settings, especially for new features that default to vendor-friendly behavior that you may find objectionable.  Expect unplugged peripherals to have vanished, and to re-install themselves when first plugged in; other programs may behave as if freshly-installed.  You may have to repeat many of these cleanups, first-use prompts etc. for each user account, which is a good reason to avoid multiple user accounts.  You will be pushed towards using the new "Modern"/"Metro" apps, OneDrive, and automatically logging into an online Microsoft Account.  I covered the specifics for build 1703 here.

How good can you expect updates to be?

The basic concept of trusting patch quality is flawed, because these patches are written via the same process that was used to create code so defective it needs to be patched - and patched so often that we can't keep up manually, and have to allow the vendor to shove out so many patches we can barely count them all, let alone write unique documentation for each.

But wait, it gets worse... the original code was developed at a leisurely pace, was installed on a "clean slate", and constitutes a single version.  In contrast, patches are developed at a rush, and rolled out onto installations that have diverged due to existing application and patch loads - how well do you expect that to work, given the original "clean slate" batting average?

Now consider all the permutations of the OS that need to be tested and patched - ignoring hardware, drivers and other added software.  In the XP and Windows 7 eras, you'd have 2 or 3 supported OS versions, each with 1 to 3 SP levels, and then a mesh of all the individual loose patches that may or may not be installed.  Each driver and application will also have its own mesh of such patches - leading to a factorial function of all this... and there's a reason why that function is called "shriek!"; even small input numbers spawn massive output values.
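To get a feel for how fast that mesh explodes, here's a toy Python model; the numbers are illustrative, not a real patch inventory:

```python
from math import factorial

# Toy model: each of n independently-installable patches is either
# present or absent, giving 2**n possible system states; if the order
# of installation also matters, the count grows as n! ("shriek").
def patch_states(n):
    return 2 ** n

def patch_orderings(n):
    return factorial(n)

for n in (5, 10, 20):
    print(n, patch_states(n), patch_orderings(n))
```

Even at 20 loose patches, the ordering count is already past 2 quintillion - no test lab covers that.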

Who will win the arms race?

That makes this an arms-race between repairs and defenses, and exploits and malware - but here too, it gets worse.  The repairs have to work on all systems, and can only be distributed through legitimately-permitted channels.  The attacks just have to work some of the time, regardless of collateral damage, and can be spread through botnets of unwilling "servers".

So if it's an arms-race, who will win?  Those with the most resources - so you can see why there may be confidence in the US and Chinese software and hardware industries, for starters.  But while the unit processing power of the human mind remains fixed, that of processors continues to grow.  Even if attention switches from more power per processor to mobility and convenience, more effective aggregation of processors (something we can't improve as fast for the meshing of minds) means the eventual winners will more likely be the AIs.

So, what MS is doing, makes sense

1:  One version of Windows going forwards, even if that means losing putative upgrade revenue.  The shorter supported lifespan of SP levels of "the same" OS creates a loophole that will shrink the version load to 1, once 8.x and 7 age off the Internet (Ha!) or at least out of Microsoft's obligation to test and fix.

2:  Clean new version upgrades.  Remember the advice to "always install clean, never one version over another"?  We now do that with every new build, but the process is more robust; staged roll-out guided by telemetry, installed apps are re-installed afresh (look at install dates in Control Panel, Programs and Features after a new build), and more reliable Undo via Windows.old... so now we see Microsoft matching the common Linux practice of new versions twice a year, for free, with the equivalent of LTS versions for those preferring a slower pace of change.

3:  Cumulative updates.  Finally!  Yes, the downside is lack of detailed control, the ability to install all but one or two particular fixes - but the upside is a vastly simplified mesh of grain-of-sand version creep.  Catch-up is as easy as "install the newest Cumulative", and these are available as complete offline installers, without the madness of the "Store only" "free" 8-to-8.1 upgrade.

4:  Business vs. Consumer track - and you get to choose, if you stump up the cash for Pro (which increases MS's support load due to that second version track, so the money is somewhat earned).

5:  Separate Security vs. Feature updates; originally as "fixes only" vs. "fixes plus features".  This announcement suggests a 3rd "features only" form that can be released earlier for testing, so that when Patch Tuesday comes, sysadmins can decide (on the basis of their own testing) whether to install "fixes only" or "fixes plus features".

What's happening here, is happening in the bigger pictures of system to network to cloud... we are re-abstracting from details dictated by technological fault lines, towards what we understand as the human concepts by which to assess and decide.

So, instead of "we want only KBx, y and z because they are critical, but we don't need a, b and c because they're just features", we can just grab the bag off the shelf labeled "critical fixes" and leave the one called "features" on the shelf.

Will this work?  

Perhaps, and I hope so.  By reducing the version sprawl, we may get better quality fixes that work more reliably across all systems.  By generating revenue from sources other than one-off version sales, Microsoft can better align revenue with the unforeseen major cost of repairs.  And if the re-abstraction works well enough for us to no longer care about individual patches, we'll have moved up a level, as we are hoping to do via virtualization and cross-server fail-over to reach the cloud, where we no longer care what system our stuff is actually being stored or run on.

I think we'll always have to worry about "technical" details like that, but the expanding complexity means we will ironically have to trust AIs to manage it all, especially in the real-time arms-race between exploit and repair.  We stopped manually routing our dial-up calls to BBS phone numbers long ago; eventually we may do the same when having our storage and processing done across an arbitrary mass of other people's computers.

Do I have to drink the Kool-Aid?

No, you don't.  You can still (as at April 2017) keep your stuff on your own system, be clear on where that system ends and the Internet begins, and remain functional offline.  But you will have to defend yourself and your system against ever-pushier vendors and UI pressure to "just" let them push updates, or add this "recommended 3rd-party software", or pipe your data to an "online service" you thought was locally-running software, or to have your data whisked off to their cloud service.

The last seems odd; it costs real money to run a cloud storage back-end, so why would every Tom, Dick and Harry (OS, system and component vendors, etc.) want to host your data for free?  Well, ask yourself; what's even better than having your software on users' systems, where it can snoop and do stuff?  Having the users upload their stuff at their own cost, to your own servers, where you can snoop and fiddle unseen.  As to the cost of hosting this storage and processing, that can be farmed out to the lowest subcontractor, or if no pretense at legitimacy is needed, the cheapest botnet.

There will likely come a time when having an offline system will be viewed with the same suspicion as retaining firearms rather than entrusting your safety to law enforcement ("if you are innocent, you have nothing to fear") but frankly, I'm much more comfortable carrying my own computer than a gun.

27 April 2017

Blog Re-Edits?

Should one re-edit existing blog posts?  If so, should these be brought forward in the date order to the date of last edit?

The answer may depend on why you blog - if it's supposed to be date-sensitive, like a diary, then probably not.  But if you're using a blog as the quick-and-dirty way to create a web site, then as you add content, you may want to link to the new content from your "older" posts.  Such is the case with the "waking hour" content that kicked off from today's "layers of abstraction" post...

Layers of Abstraction

This could get long, and if it does, I'll break it up into separate posts, with this one being on what I mean by a "layer of abstraction".  Later I'll go onto whether these are artificial contrivances of convenience or hard entities, the nature of their boundaries, the truth or otherwise of "as above, so below", and whether these differences can be tracked back to a small number of meta-level inputs such as the "good or evil" duality.  I'll pepper all this with examples, each of which would be a field of its own, but here I'm interested in whatever common essence can be extracted.


We don't experience stuff as a continuum, but as mind-sized chunks that I'll refer to as "layers of abstraction".  Below and above these are things that may affect that layer, but which cannot be fully described or explained within that layer alone - bringing Goedel's Incompleteness Theorem to mind.

Example: The Evolving Infosphere

At the analog volts and seconds level, it's all about transistor design and the behavior of the electronic shells of different atoms, which in turn drills down to the unique behaviors of small integers - but we don't consider that depth; it's all about what shape and size of blocks of tainted materials to join together to make effective transistors and other components.

At the more familiar digital level, we abstract out all of the analog stuff; nuances of time and voltage are simplified down to x volts = off and y volts = on, and time is pixellated into clock pulses.  But this layer of abstraction is supported by the analog layer only as long as it successfully slews voltage between the "on" and "off" levels, within the timing of a clock pulse - when this fails, the digital layer of abstraction breaks down in ways that make no sense within digital logic.
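As a toy sketch of that idea - the threshold voltages here are made-up, TTL-ish values, not from any real part's datasheet:

```python
# Hypothetical illustration: the digital layer reads an analog voltage
# as 0 or 1 only when it is clearly below or above agreed thresholds;
# in between, the digital abstraction simply has no answer.
V_LOW, V_HIGH = 0.8, 2.0   # illustrative thresholds (assumed)

def to_bit(volts):
    if volts <= V_LOW:
        return 0
    if volts >= V_HIGH:
        return 1
    return None  # slew didn't complete within the clock pulse: undefined here

print([to_bit(v) for v in (0.2, 3.1, 1.4)])  # → [0, 1, None]
```

That `None` is the breakdown described above: a state the digital layer can name but not explain, because its cause lives in the analog layer below.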

We have then aggregated transistors into chips, so we no longer have to think about individual transistors; chips onto circuit boards, so we no longer think about chips; boards into systems, systems into networks, and networks into The Internet.  When we integrate ourselves and our code as actors within this Internet, we can consider the whole as the infosphere, with its own dynamics of function that may emerge differently to the raw inputs of original human intentions, etc. [discuss: 100 marks]

Building the Infosphere

Our individual minds can only handle a certain volume of complexity, and scaling up by pooling our minds only takes us so far - while adding extra wrinkles in the imperfect communication between these minds, and in differences within such minds that cause them to misunderstand each other, differ in objectives, etc.

So, as we've grown the infosphere, we've done so by attempting to simplify the previous abstraction layer to the point we can take it for granted, then building up the next layer.  The internal components of a computer system operate well enough at realistic clock speeds that we can ignore transistors and whether they're loose, within particular chips, or whether those chips are on the same board.

When networking works well enough, we can ignore which system a particular file is on - all systems can be blurred together to be considered as "the network".  Because the Internet and networks are built from the same TCP/IP materials, it's tempting to treat them as the same, ignoring a fundamental difference to our cost; entities on a network may trust each other, but those on the Internet should not!

A set of technologies allows us to melt the edges between systems and networks further: communications tolerably as fast and cheap as internal data flows and storage, effective virtualization of OSs as if they were applications, tolerably effective isolation via encryption, tolerably seamless load distribution and failover between systems and networks (where already, those two words approach interchangeability).  And so we have "the Cloud", which segues undramatically into AI and The Singularity. [discuss: 10 marks - it's really not that big a deal]

Other Examples

Other examples of layers of abstraction are visible light within the full electromagnetic spectrum, rhythm and pitch within the range of sound frequencies, and the micro/macro/astro-scopic scales.  All of these are based on the limited focus of our senses, which we've artificially extended.

Another example; chemistry, with nuclear chemistry below and biochemistry above.  This is a tricky one, because the floor of this abstraction layer seems hard and natural (and interesting - we'll likely come back to that later if we further consider the uniqueness of small integers) while the ceiling is more a matter of our mental limitations, plus the chaotic way that new complexity emerges (and more on that later, too!).

Consider written language; at its base is a combination of two symbolic layers of characters, and the text that can be constructed from these.  It would be interesting to compare the information efficiency (simplistic metric; .zip archive size?) of a rich character set vs. longer words of simpler characters, which is similar to the RISC vs. CISC arguments of the 1980s.  That processor debate appeared to be QED in favor of Intel's CISC, but is now re-emerging with the rise of ARM at a time when our needs and capabilities have changed.
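As a rough sketch of that simplistic metric - the sample strings below are invented examples, not a serious linguistic corpus:

```python
import zlib

# Compare two renderings of a message: a few "rich" symbols (each CJK
# character costs 3 bytes in UTF-8) vs. longer words from a simple
# character set.  Compressed size is the crude information metric.
dense = "猫坐垫上" * 200                    # "cat sits on mat", roughly
verbose = "the cat sat on the mat " * 200   # simple charset, more bytes

for label, text in (("dense", dense), ("verbose", verbose)):
    raw = text.encode("utf-8")
    packed = zlib.compress(raw, 9)
    print(label, len(raw), len(packed))
```

As with RISC vs. CISC, the raw-size advantage of the rich symbols shrinks once compression (like a smarter instruction decoder) is allowed to exploit the redundancy in the verbose form.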

Number theory examples abound, and this is probably the best place to test predictions of closure (Goedel) and emergence; real numbers, rationals, integers and so on, and the nature of the "infinities" as expressed within these number systems.

20 April 2017

Windows 10 Creators Edition 1703

Updates for Windows 10 are streamlined into monthly cumulative features and fixes, a smaller version of just fixes, and periodic new builds.  These new builds are similar to new versions of Windows, and come out about as often as new versions of popular Linux distros; they don't appear in the Catalog, but are provided as new OS installers instead.

Installing Build 1703


Since the GWX offer, we’ve become more comfortable with installing new versions of Windows over existing installations, something we’d have usually avoided in the past.  It’s still a brittle process, sometimes leaving the system tied up behind a black screen for hours on end when things go wrong.

If yours is one of those systems that gets lost in “do not switch off your computer” or black-screen space, then it’s best to install the upgrade more formally.

First, use the Media Creation Tool to create an .iso of the new Windows 10 installation disc for your system’s edition of Windows.  Do not click the in-your-face “Update now” button; scroll down to “Using the tool to create installation media…”, as what you want is the .iso to build a bootable DVD.

The tool is a small download, which will do the real work when you run it.  It will default to building an installer for your system, so if you want something different, UNcheck the relevant check box and then choose your edition and bits to taste.  The same installer will work for both Home and Pro editions, but if you have Single Language (i.e. you’d GWX’d from Windows 8.x SL) then you will need to select the specific edition for that, else your Activation key won’t work.

The tool will take ages to first download the material, then build it into the .iso, then clean up afterwards.  As downloaders go, it’s not slow; it’s just a lot of material!  Once downloaded, you can copy the contents of the .iso (either directly via 7-Zip, or after making the disc) to a permanent “source” subtree on the system to be upgraded.  You can also add the most recent Cumulative update from the Catalog web site (the link should open to April 2017 Cumulative; navigate from there if reading this in later months).

Second, clean up your C: partition via Disk Cleanup, and make a partition image backup of this as your Undo, should things go pear-shaped.  If your hard drive is set up as “one big doomed NTFS C:” then this will be a capacity challenge, as will GPT partitioning, which reduces your choice of tools and undermines confidence that a “restore” will work.  I use Boot It Next Generation (BING), an old free product from these guys; their newer Boot It Bare Metal is a lot naggier and less useful in free form, but does work with GPT partitioning.  It’s also a good idea to exclude malware and do other cleanups before you make this “undo” partition image.

Third, get off the Internet and all networks, stay off the Internet throughout the installation process, and when you see “check for updates” during the setup process, UNcheck that.  This will limit the installation to what is in your pre-downloaded source material, avoiding update-of-the-week surprises and the tar-pit effect of flaky Internet access and performance.

Fourth, run the Setup.exe for the new build, from an always-available location, e.g. local hard drive volume other than C:, that can be a long head-travel away but is always present.  That way, any future references to the installation source can properly resolve.

Fifth, after the new build is installed, run the pre-downloaded Cumulative if you have that, then check settings etc. before going online for the first time, and doing online updates.

Checking for Lost Settings


After a successful “feature” update, there’s likely to be new features set up with unsafe duhfault settings, so you need to check Settings in general, and Privacy in particular.  Expect to see new additions allowed to use the camera, mic, and run as background processes; fix to taste.

However, there are some unexpected lost settings, especially as Microsoft pushed their OneDrive cloud storage service.  What’s better than having your code on users’ systems that can snoop their stuff?  Having users spend their communications dime on sending you their stuff so you can play with it unseen on your servers… hence so many vendors pushing cloud storage offers.

This article shows the new install-time privacy summary options, but what this doesn’t tell you is that you’ll not only see this when updating an existing Windows 10 installation (at least as done by running the .iso file set’s Setup.exe from within Windows), but the settings will ignore what you’d previously set, and start off with “everything on” duhfaults.  So, watch that screen and make sure you scroll it down to check all settings anew.

Windows 10 may turn off System Protection by default, and installing the new Build 1703 disabled this although I’d previously enabled the setting.  My systems use MBR partitioning with shell folders relocated off NTFS C: to FAT32 logical volumes on an extended partition, and maybe this influences how Windows 10 treats this setting; the same may apply to mobile systems with puny flash storage that have to use mSD cards to extend “internal” storage in a similar way.  With System Protection disabled, you’ll lose not only Previous Versions of files stored on C:, but also System Restore.

If you’d turned off Live Tiles, you will find all of them turned back on after installing Build 1703.  You should also check the registry setting to kill Live Tiles (i.e. stop external sources from squirting content directly into “your” desktop UI), in case that was cleared:


Settings to curb OneDrive are likely to be lost, so check those, as well as adding a setting to reduce UI spam that pushes the cloud storage service.  Expect unwanted UI popups to “just” set up OneDrive, some days after the 1703 upgrade; a fairly common vendor tactic that aims to catch the user after their tech has walked away after doing the upgrade.



Yep, both of the above settings were enabled, after 1703, which re-enabled OneDrive integration into the shell.  If that’s not what you want, you need to re-assert those settings.


The above setting blocks OneDrive spam delivered as a “sync notification”, and is worth asserting, though you’ll prolly get ongoing UI pressure to “just” sign up a Microsoft online account and/or use OneDrive.  While you’re there, you may want to check these…



…which reduce info exposure on the locked side of the “lock” screen, and reduce AutoPlay risks when arbitrary external storage is detected by the shell.  For the latter “hello, Stuxnet” malware risk, I still use…


…to disable AutoRun and AutoPlay on basis of both device type and drive letter.  The latter setting is a bit field for drive letters, and you can edit to enable particular letters only.
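If you'd rather compute such a mask than guess it, here's a small sketch; the bit layout (bit 0 = A:, bit 1 = B:, up to bit 25 = Z:) is the commonly-documented one for these drive-letter policy values, stated here as an assumption since the exact value name isn't shown above:

```python
# Hypothetical helper: build the drive-letter bit field covering a
# given set of letters (bit 0 = A:, bit 1 = B:, ... bit 25 = Z:).
def drive_mask(letters):
    return sum(1 << (ord(c.upper()) - ord("A")) for c in letters)

# All 26 letters gives the familiar "all drives" value 0x3FFFFFF.
print(hex(drive_mask("ABCDEFGHIJKLMNOPQRSTUVWXYZ")))  # → 0x3ffffff
print(hex(drive_mask("DE")))  # just D: and E:, e.g. for optical drives
```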

If you prefer to disable Windows Scripting Host, you may find some of the settings will have been lost after Build 1703, so check these…


[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Script Host\Settings]

…as I found the last two were lost after 1703.

There’s prolly more side-effects and collateral damage that I’ve missed; feel free to add such tips via Comments!

13 February 2016

Open Live Writer

Yes, Open Live Writer works with Google Blogger, as at 13 February 2016.

But does it double-space paragraph separation?  And does it find Windows Live Writer’s drafts?  I think it will, as I see a populated list of “recent” <cough> posts.

12 February 2016

I’m Baaack…

Yep, after a long hiatus, I’ll be blogging again – though unlikely to be often.

This may be my last blog entry created in Windows Live Writer 2011; by now I thought a new version may have better features, and I see Microsoft has one new version from 2012, then there’s an Open Source fork called Open Live Writer since. 

I’m going to download and try the latter, but I see one looming issue; Google’s changes to blog access, which Live Writer may not “get” yet.  Then again, the Open Live Writer blog makes reference to fixing that, perhaps… let’s see how it goes, and whether it does paragraph spacing correctly without double blank lines, in keeping with HTML’s chaotic handling of white space characters etc.

16 July 2013

Hard Drive to .VHD

Let’s say you have an XP PC that died, and you want to run that installation in a virtual machine (VM).  The first step will be to harvest that installation into a virtual hard drive.  Different virtualization host software supports different file types for virtual hard drives, but I’ll be using the .VHD standard, which should work in Virtual PC, VirtualBox and VMware Player.

Preparing the physical hard drive

The hard drive’s from a failed PC, so step one is to be sure of the hardware and file system.  The drive is taken out of the dead PC, and dropped into a known PC that is set to boot off safe maintenance OSs (mOS) and tools such as Bart, WinPE, Sardu, BootIt Next Generation, etc.  Do not allow the hard drive to boot!  You will also want to connect another hard drive with enough space to swallow what you will harvest, and make both connections using internal S-ATA or IDE rather than USB, for speed and SMART access.  FAT32 limits file size to 4G, so if you are using Disk2VHD or capturing a .WIM image, your target drive’s file system should be NTFS.

First, I boot my Bart DVD and from there, launch HD Tune, and look at the SMART details.  You can use any SMART tool you like, as long as it shows the raw Data column for these four attributes: Reallocated Sectors, Reallocation Events, Pending Sectors and Offline Uncorrectable sectors.  For all of these, the raw data should be zero; if it is not, evacuate the stricken drive first as files, then as a partition image, before doing anything else, including surface scans and other diagnostics.  Don’t trust SMART tools that only show “OK” or “Fail” status; that’s next to useless.

If the hard drive is physically OK, then check and fix the file system from a compatible mOS.  If the volume to be salvaged is NTFS, then the mOS version should be the same as, or newer than, the OS installed on the drive.  So you can use WinPE 4.0 for any Windows, WinPE 3.0 for Windows 7 and older, WinPE 2.0 for Vista and older, and as we’re after XP in this case, we can use any of those plus Bart, which is built from the XP code base.  Run ChkDsk /F and capture the results, either via command line redirection (you may have to press Y if nothing happens, in case there are “Do you want to…?” prompts you can’t see) or by capturing text from the command window.

Next, you will want to retrieve the Windows product key from the installation, in case that has to be re-entered to activate Windows when booted in the different (virtualized) PC.  I use Bart as the mOS for that, along with two free add-ons.  The first is the RunScanner plugin for Bart, which “wraps” the inactive installation’s registry hives as if that installation had booted these into action, so that registry-aware tools will see these as native (unfortunately, there’s no equivalent to Runscanner for Vista and later).  The second is ProduKey from Nirsoft, which reads the keys you need.

The final preparation step is to zero out free space so that these sectors are not included by certain types of harvesting process, as they will bloat up the size of the .vhd you will eventually create.  You can download SDelete and use that from Bart; the -c -z options will create a file full of zeros to fill the free space, and then delete the file.
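A quick illustration of why the zeroing matters: zeroed sectors compress (or stay out of a dynamic image) at almost no cost, while stale "deleted" data compresses hardly at all.  The sizes here are illustrative, not from a real drive:

```python
import os
import zlib

# One megabyte of zeroed "free space" vs. one megabyte of stand-in
# stale deleted data (random bytes approximate old file remnants).
zeroed = b"\x00" * 1_000_000
leftovers = os.urandom(1_000_000)

print(len(zlib.compress(zeroed)))     # collapses to around a kilobyte
print(len(zlib.compress(leftovers)))  # stays close to the full megabyte
```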

Harvesting as loose files

In Windows 9x, if you copied every file to a new hard drive and created the correct MBR and PBR contents, Windows would boot and run just fine.  This is no longer true for NT-based OSs such as XP, but you may still want access to loose files, and if the drive is failing and dies soon, that may be all you get - and more useful than a partial partition or installation image.

I use Bart as the mOS for this, finding it easier to work from a GUI shell than command line.  I have Windows Directory Statistics integrated into my Bart, and use that to compare file counts to be sure I haven’t left anything out.

Harvesting as partition image

There are various partition managers that can boot off CD, and the one I’ve been using is BING (Boot It Next Generation).  If you use BING in this way, it may show an installation dialog when it starts; cancel that, as you don’t want to install it as a boot manager.  Then go into partition maintenance mode and work from there.

You can also boot BING within the virtual machine, from physical disc or captured .ISO, as long as the VM’s “BIOS” is set to boot optical before hard drive.  BING can be used in this way to resize partitions within .VHD, but cannot change the size of the “physical hard drive” as seen within the VM; use VHDResizer for that, from the host OS.

BING can also save a partition or volume image as a set of files, of a maximum size that you can select to best fit CDRs, FAT32 file system limitations, etc. and that’s quite an advantage over .WIM and .VHD images and the tools that create them.

When in BING, it’s a good idea to display the partition table, and take a picture of that via digital camera, in case there are any “surprises” there that kick in when attempting to boot the virtual machine later.

I can also use DriveImage XML as plugged into Bart, as a tool to create and restore partition and volume images; unlike BING, you can browse and extract files from within the image it creates.  There are probably other backup tools that can do the same, but make sure they have tools to work from bare metal, and that these tools work within virtual machines, as Bart and BING can do.

Harvesting as installation image

You can use Microsoft’s imaging tools to create a .WIM; a file-based partition image with the OS-specific smarts to leave out page file, hibernation file, System Volume Information etc.  Because in this case the original PC is dead, we don’t have the option to generalize the installation via SysPrep; a relief in a way, as SysPrep is so rough you’d want some other full backup before using it.

Access to these imaging tools was difficult at best, in XP and older versions of Windows; you had to be in corporate IT or a large OEM to legitimately get these.  You can now download what you need from Microsoft for free, though it’s a large download, and in any case the full OS installation disc can now boot to a command line and function as a WinPE.

In Vista and 7, you’d use a command line tool called ImageX to harvest (capture) to .WIM and apply .WIM to new hard drives, and can add a free 3rd-party tool called GimageX for a less mistake-prone UI.  In Windows 8, you’d use the DISM command instead, and I’ve not sought or found a GUI front-end for that; instead, I’m using batch files to remember the syntax involved.
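For what it’s worth, a hypothetical batch file of the kind mentioned might look like this; the drive letters, paths and image name are placeholders, not from any real system:

```bat
rem Capture the prepared volume into a .WIM (placeholder paths).
Dism /Capture-Image /ImageFile:D:\harvest\xpdrive.wim /CaptureDir:C:\ /Name:"Harvested C:"

rem ...and the matching apply, onto a freshly-prepared target volume.
Dism /Apply-Image /ImageFile:D:\harvest\xpdrive.wim /Index:1 /ApplyDir:W:\
```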

Harvesting directly to .VHD

There are said to be many tools for this, but I’ve only found one; Disk2VHD.  There’s also something called P2V, but it appears to be part of a costly software product aimed at the corporate IT sector, and may apply more (only?) to Microsoft’s Hyper-V virtualization technologies.

Disk2VHD boasts the ability to image partitions that are in use by the running OS, via that OS’s shadow copy engine.  Unfortunately, that is the only way it can work – so it will not run from Bart or WinPE.  You are obliged to boot a hard drive based Windows to host the tool, exposing the hard drive to be harvested to whatever that OS may do. 

That’s too risky for at-risk hard drives, as Windows tends to fiddle with every hard drive volume it can see.  WinME and XP are the worst offenders, as they enable the System Restore engine on every hard drive volume detected and immediately start writing to those file systems.  At least Vista, 7 and 8 don’t do that!

It’s important to remember that Disk2VHD captures entire hard drives, not just partitions or volumes, even though the UI implies the latter by allowing selection of partitions and volumes to be included.  For example, if you have a 64G C: on a 500G hard drive and you deselect all volumes other than C:, you will create a virtual hard drive 500G in size with a 64G C: partition on it, the rest being left empty.  You may have hoped for a 64G drive filled with C: but that is not what you’ll get.

Size matters

Guess what the sizes of these various harvestings will be, compared to the original drive?  Then check out the results of doing this for real, for a mature in-use XP installation with shell folders relocated out of C:

  • 500G – capacity of original hard drive
  • 30G – size of original NTFS C: partition
  • 12.0G – size of files harvested
  • 13.5G – size of files harvested, as occupied space on FAT32
  • 7.48G – size of BING image file set
  • 4.69G – size of .WIM image created from WinPE 3.0 and ImageX
  • 12.6G – size of .VHD file as created by Disk2VHD after zeroing preparation
  • 10.8G – as seen within .VHD via Bart boot within the VM

Note that .VHDs are ignorant of the file system and OS within; this is why it’s inappropriate to blame the tool’s creators when a harvested installation fails to boot within a VM.  A significant effect of this is that any sectors containing anything other than zeros will be included as explicit blocks within the dynamic and differencing types of .VHD, which would otherwise have saved host space by leaving out empty sectors.  The .VHD manages space in large blocks, so this effect is made worse; if any sector in a block is non-zero, the whole block is added to the .VHD.
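To put numbers on the block effect: the published VHD format specification uses a 2M default block size for dynamic and differencing disks, so one stray non-zero sector anywhere in a block costs the full 2M on the host.  A quick sketch (block and sector sizes per the spec; the sector layouts are invented):

```python
# Sketch of why stray non-zero sectors bloat a dynamic .VHD.  The
# format allocates host space in blocks (2 MB by default); a single
# non-zero 512-byte sector forces its whole block into the file.

SECTOR = 512
BLOCK = 2 * 1024 * 1024          # default dynamic-VHD block size

def allocated_bytes(nonzero_sectors):
    """Host-side payload size for a set of non-zero sector numbers."""
    blocks = {s * SECTOR // BLOCK for s in nonzero_sectors}
    return len(blocks) * BLOCK

# 100 non-zero sectors scattered one per block cost 100 whole blocks:
scattered = [i * (BLOCK // SECTOR) for i in range(100)]
print(allocated_bytes(scattered) // (1024 * 1024))   # 200 (MB)

# The same 100 sectors packed together cost a single block:
packed = list(range(100))
print(allocated_bytes(packed) // (1024 * 1024))      # 2 (MB)
```

This is also why “zeroing preparation” before harvesting pays off: it turns once-used blocks back into all-zero blocks that need not exist in the file at all.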

Of these, the .WIM is the most compact (I capture using the strongest compression offered); then the BING image file set.  After that, things are pretty much as you’d expect, though even with zero optimizing preparation and before using the .VHD, the (dynamic) .VHD file is already significantly larger than the files it contains.

Creating a new .VHD

If you used the Disk2VHD tool, you already have your .VHD populated – but it may not be of a physical size (as seen from within the VM) that you’d like.  In theory, if the partition size is limited, the “physical” space outside that should never be written to, thus never contains anything other than zeros, and thus never adds size to any dynamic or differencing .VHD file on the host.  In practice, you may prefer to constrain the physical size of the virtual hard drive, especially if choosing the fixed type of .VHD that always contains every sector as-is, regardless of content.

When creating a new .VHD you set the capacity of the hard drive it pretends to be, and whether the .VHD will be of fixed or dynamic type.  The fixed type is the .VHD equivalent of a fixed-size pagefile; if it’s created as an unfragmented file, it should perform better than one that grows in fragments over time.

Either way, your host volume should have enough free space to contain the full size of the .VHD’s internal capacity, or at least that of all partitions and volumes within the .VHD you intend to ever use.

You can also layer .VHDs over each other, with a fixed or dynamic .VHD as the base.  Each layer above that will be a differencing .VHD, valid only as long as the lower layers do not independently change.  Both differencing and dynamic .VHDs use the same storage model, which is like a “super-FAT” chain of large blocks that explicitly exist only if any contents within have changed, relative to the layer below.  Under the base layer is assumed to be all zeros, so a block that has never contained anything other than zeros need not explicitly exist in any differencing or dynamic .VHD.
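That “super-FAT” fall-through can be modelled in a few lines.  A toy sketch, with dicts standing in for the block allocation tables and tiny invented blocks, of how a read resolves through the layers:

```python
# Toy model of the layered read path: each layer maps block number
# -> data only for blocks changed relative to the layer below;
# unmapped blocks fall through, and below the base everything is
# assumed to read as zeros.

ZERO_BLOCK = b"\x00" * 4   # shrunken block size, for illustration

def read_block(layers, n):
    """layers[0] is the base; later entries are differencing layers."""
    for layer in reversed(layers):   # topmost layer wins
        if n in layer:
            return layer[n]
    return ZERO_BLOCK                # never written in any layer

base = {0: b"BOOT", 1: b"DATA"}      # read-only base image
diff = {1: b"EDIT"}                  # disposable differencing layer

print(read_block([base, diff], 0))   # b'BOOT'  (falls through to base)
print(read_block([base, diff], 1))   # b'EDIT'  (overridden by diff)
print(read_block([base, diff], 7))   # all zeros (exists nowhere)
```

It also shows why changing a lower layer invalidates everything above it: the upper layers’ “unchanged, fall through” assumption no longer holds.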

That means every .VHD layer may grow as large as the size of all partitions and volumes within it; host free space should be available accordingly.

Because changes to an underlying .VHD layer will invalidate all layers above, they are generally used in two ways.  You can have a base image that is kept read-only so that it can never change, and this can be the “installation image” over which multiple VMs can run, each with their own changing differencing .VHD; this is how XP Mode is set up.  You can also use a differencing .VHD above that, which is considered disposable until it is merged with the .VHD below; you may use that for guest accounts, malware investigation and other testing, kiosk use etc.  Virtual PC may use this as “undo” disks, though these use the .VUD rather than .VHD file type.

To merge .VHDs, you need enough free disk space on the host for the output .VHD, which in terms of workspace required could be as large as the combined .VHDs being merged.  To compact a .VHD (i.e. discard any blocks full of zeros in dynamic .VHDs) you need enough free host disk space to create the new .VHD; these considerations make .VHDs costly in terms of disk space, and of hard drive head travel to partitions beyond where they are stored.

Populating the new .VHD

If you used partition imaging tools like BING or DriveImage XML, or installation imaging tools like ImageX from WinPE, you will need to write these images to the new .VHD you created above.  You may also need to move contents from one .VHD to another, if you need to change the base .VHD type, or don’t want to use VHDResizer to change the size of the physical hard drive contained within the .VHD.

One way to do this, is by using these “real” bootable discs within the virtual machine, either by booting the VM from the physical disc, or by capturing the relevant .ISO and booting the VM into that.  If you can’t “see” host hard drive volumes within the VM then the materials should be copied into another .VHD that is attached as an additional hard drive before starting the VM session.  You can do that in suitable host OSs (e.g. Windows 7) by mounting the .VHD for use as if it were a native hard drive volume; otherwise you may need a VM that can see outside via network shares etc.

4 July 2013

Build One Skill in 2013? Virtualization

My life’s been fuller for the last five years and will hopefully continue to be, so I generally don’t devote such huge amounts of time to particular interests.  Still, whatever time I do have to build new skills, I’ll probably spend on virtualization, i.e. running one OS within another.  That competes with Windows 8 for attention; I’ve slain the closest crop of Windows 8 dragons, but that’s another story!

I have three “egg-eating snake” client jobs at the moment, i.e. jobs that can’t be fully gulped down in one session.  If they’re reading this, they will recognize themselves; common to all three cases is (or may be) virtualization.

Client A had an XP PC that failed at the motherboard-and-processor level, with several installed applications that couldn’t be reinstalled on a modern Windows PC for one reason or another.  The hard drive was dropped into a tested-good used PC of similar vintage, various dragons were slain etc. until all was working, but delivery was postponed while a courtesy laptop was in use.  During this time, the PC became slightly flaky, then increasingly so; hardware tests were initially OK, but over several months, it has deteriorated to the point it is clearly unfit for use.

By now, a large amount of work has gone into the system; how can one preserve that value?  I could look for another set of XP-era hardware to host the hard drive installation, but it may be better to harvest that into a .vhd and run it virtualized within a modern PC.  Once in a .vhd, it should survive being ported from one PC to another, as long as the same virtualization solution is used to host the .vhd – but will the installation survive the conversion, and what should the host be?

Client B is already happily using XP Mode on Windows 7 to keep old software alive, but a crisis is looming because the .vhd is growing in size to fill the host’s C: space.  XP Mode is built by combining a pre-defined read-only .vhd with a differencing .vhd; these combine to produce what XP Mode sees as its “C: drive”. 

Within XP Mode, this virtual “C: drive” is 126G in size, with only 5G or so used.  But outside in the host OS, while the parent .vhd is a comfortable 1.1G in size, the differencing .vhd is an absurd 19G+, leaving under 1G of free space.

Management of this issue is constrained by .vhd decisions cast in stone by XP Mode, and it’s not clear whether the installation will survive changes to the .vhd (e.g. merging the parent and child into a non-differencing .vhd, transferring contents to a fresh smaller and/or fixed-size .vhd, etc.).  It’s also unclear whether there’s any predictable maximum size limit to this .vhd bloat, and thus whether a one-off enlargement of the host C: partition (at the expense of extra head travel to other volumes) will permanently fix the problem.

Client C has a new laptop with a geographically-broken warranty/support trail, that has an edition of Windows 7 installed that does not match the OEM sticker.  After a failing hard drive was replaced, Windows demands to be activated, and this fails with both the placeholder key used within the installation and the key from the OEM sticker.

So he has an “upgrade opportunity” to choose (and alas, buy) whatever version and edition of Windows he likes, and this choice is complicated by the need to run an important “needs XP” application that hasn’t yet been tested within XP Mode or other virtualization.  Which virtualization host should he use?  That choice affects that of the OS; Windows 7 Pro for XP Mode (the solution in which I have the most experience), Windows 8 Pro for Client Hyper-V (may be faster, may integrate less well, needs XP license) or client’s choice of cheaper 7 or 8 if using VirtualBox or VMware Player (both of which will also need an XP license).  Where are such licenses going to (legally) come from, in 2013 onwards?

Common to all three clients, is a need for virtualization skills.  I need to be able to convert from “real” installations in hard drive partitions to .vhds, get these working in at least one free virtualization host, and be able to manage file size and other issues going forward.  XP Mode integrates well and includes the XP license, but dies with Windows 7 and needs the costlier Pro edition; it may be better to abandon that in favor of VirtualBox or VMware Player, which aren’t chained to particular editions and/or versions of Windows.  If those also work from Linux, seamlessly hosting the same .vhd installations, then that would be a deal-clincher; I could skip bothering with Windows 8’s Client Hyper-V altogether.

There are more details (and especially, links) on these scenarios in this recent post.

Why I (Don’t) Blog

If you find yourself in the situation where you have to present a non-trivial amount of information to people, success may depend on the method of communication you choose to use.

Face to face

Rich impression, poor retention, and very demanding of resources!  When someone’s taken the trouble to be physically present, you should put them first, above lazier contacts such as phone calls, and neither of you will get any other interactive work done during your meeting.

So if there’s a non-trivial amount of content to deliver, rather send that via fax or email ahead of time, so that the meeting can be pure live interaction rather than “death by PowerPoint”.  Make sure you allow enough time for this pre-meeting information to be processed; in the days before email, I’d budget a day per fax page.  Need folks to read a 5-page fax before a meeting?  Send it at least 5 days before.  Not enough time?  Edit the content down to fewer pages.

Phone

The pits; in my opinion, the “voice telephone” model of paid-for real-time voice-only calls should have died in the 20th century, along with telegraph Morse code.  Very intrusive real-time demands, no content logging or retention for either party, and worse communication effectiveness than face to face (clothespeg-on-the-nose sound due to poor frequency response, no non-verbal gesture/expression cues, etc.).  And then there’s “telephone arm”.  What was the other idea?

I’d also refuse to do any significant interactions over the phone, especially with larger entities that “record calls for quality purposes”.  If things escalate to a courtroom appearance several months down the line, guess who’s going to look like the unreliable witness?

Instant messaging

Ahhh yes, now we’re getting somewhere.  No cost (at least if you’re on ADSL rather than some ghastly rapaciously-priced mobile Internet access), text chats are logged, files can be sent and links or text pasted in and out, and there’s sound and video available too.  It’s real time, but “sticky”; a pending chat is more likely to grab your attention when returning to your system than an email would, and can be continued at any time.

Email

Best of the lot; excellent logging and reusability, the best way to send content with the lowest risk of “glaze over” (which pretty much kills phone and face-to-face for anything substantial).

Blog

If you find yourself having to say the same thing to different folks again and again, then it’s better to blog it and say “yonder post refers”.  That is why I started blogging, after getting that “stepping on ants one by one” feeling when I was active in usenet technical news groups.

But I find it much harder to write when I don’t have a specific audience in mind, and that is why I blog so seldom.  In fact, many of my blog posts are generalizations of content originally written for a specific person.

Living With(out) XP

Microsoft support for Windows XP SP3 (the last supported SP level for XP) is due to end April 2014.  By “support”, I don’t mean the help line center, but the testing and repair of exploitable code defects, as well as perhaps technical assistance to software and hardware developers.  Articles on vulnerabilities won’t state XP may be vulnerable; they will simply ignore XP as if it never existed. 

New or revised hardware will likely not have drivers for XP, and new software will start to exclude XP from the list of supported OS versions.  This is likely to be more severe than it was for Windows 2000, because there’s a far bigger difference between the 2000/XP and Vista/7/8 code bases than there is between 2000 and XP, or within the Vista to 8 spectrum.

The trouble is, many of us still have software that “needs XP”; that gulf between the 2000/XP and Vista/7/8 code bases makes it far harder for XP-era software to work on newer Windows versions, even as these versions attempt to support such limitations in various ways.  Some of this software may have hardware dependencies that can no longer be met, or delve “too deep” into OS territory for virtualization to manage; examples include licensing protection code, close relationship with hardware (e.g. DVD burning software), and deep-system stuff like diagnostics, data recovery or anti-malware activity.

There are different ways to accommodate old software:

0)  Keep (or find) an old system

If you already have an XP PC running what you need to use, then you could keep that – as long as it lasts.  Monitor the hardware’s health as you go.

If the PC fails, you can try to repair or replace it, while preserving the existing installation.  Back that up, as the first boot may well fail in various ways; BSoD STOP crash, lockup, demand to activate Windows and other vendor feeware, protracted (and possibly doomed) search for new drivers, etc.  If you can get the hard drive installation working within tested-good hardware, that’s probably the best result from a compatibility perspective.  It may be your only choice, if you lack installation discs, or pre-install material, and/or product keys etc. for your crucial “needs XP” software.

XP-era kit is quite old now, and reliability is becoming a problem – like buying a used car with hundreds of thousands of kilometers on the clock, to start a road trip across the Outback.  Digital systems are based on analog parts, and at the volts-and-microseconds level, slew times can grow with no apparent problem until they fall outside the digital yes/no detection criteria.  Metal-to-metal contact points get corroded, and you’ll often find an old PC either works fine, or not at all; maybe it “just needs this card to be wiggled a bit” to get it working again, etc.  Welcome to Creepyville, you won’t enjoy your stay.

An alternate approach may be to harvest the hard drive installation into a virtual hard drive (.vhd file) and try getting that to work as a virtualized PC; jump ahead to (4).  You’d run the same “different hardware” gauntlet as dropping the hard drive into a different PC, with added virtualization-specific issues.  So far I’ve had no success with this; it’s been a non-trivial mission to attempt, and I’ve only made the attempt once so far – but maybe better tools will help.

1)  Build a “new” old system

In other words, if an app needs Windows XP and can't run on anything later, why not simply build a "new" XP system?  That would give the most compatible result, and should run well.

In practice, XP doesn't properly embrace new hardware and/or use it effectively, degrading the value of the hardware.  There's no (native, or in some cases "any possible") support for:

  • AHCI, which reduces hard drive performance
  • 64-bit addressing, limiting memory map to 3G or so
  • USB 3.0, so you're limited to USB 2.0 performance
  • 2T+ hard drive capacity
  • Touch screens

Some of the above are not yet relevant (USB 3.0) or may never be relevant (touch screens, for which Windows 8 is designed) but others (AHCI, 4G+ memory access) already bite deep.  The 2T hard drive limit is also currently impossible to breach via internal laptop hard drives, but may become relevant for shared externals.

So building a "new" XP-only PC is something of a dead end, suitable only to shrink-wrap a crucial and irreplaceable application, on the understanding that the system won't be safe for general use (any Internet access, perhaps even inward file transfers via USB etc.).

Availability of new XP licenses is an issue, given that it has long been off the market as a saleable product.  In theory, Home costs the same as the cheapest non-Starter edition of later Windows, and Pro the same as the corresponding editions of 7 and 8, but in practice you'd struggle to find legitimate stock.

There’s also uncertainty around the activation of Windows XP after support ends.  At the time the activation system was rolled out, we were assured that when Microsoft lost commercial interest in activated products, the activation system would be disabled so that software could be used without hassles, but it remains to be seen whether that promise is kept.  Worst-case scenarios are where new XP activations become impossible, possibly including those triggered by WGA (“Windows Genuine Advantage”), or even where existing installations are remotely disabled.

2)  Build a new system, app must "take its chances"

There's an element of "app must take its chances" involved in all solutions other than "build an old system" and dual-booting as such.  This is the most extreme case, where one simply ignores the application, builds a no-compromise modern system, and then hopes the old application can be made to work.

There are settings within Windows to treat applications as if they were running in particular older versions of Windows, but these don't handle every case successfully.

The main point of incompatibility arises from changes introduced with Vista, which attempted to curb the rampant malware threat.  Things that applications were formerly allowed to do with impunity are no longer allowed, and some apps aren't satisfied with fake compliance with what they demand to be able to do.  It's like moving from an unlocked farm house to a gated community!

Less likely to be a problem is the change from 32-bit to 64-bit OS, as required to access over 4G of "memory" (i.e. RAM plus swap space plus allowance for non-memory use within the map).

Some software can break when the hardware is unexpectedly "big", i.e. "too much" RAM, hard drive space or processing speed, independently of the 32-bit vs. 64-bit thing.  But most application software should not have a problem with either set of issues as such, though there are some other safety changes that stealthed in during the change to 64-bit that could hurt.

3)  Build a dual system

It's possible to build and set up a PC to run either one version of Windows, or another.  Only one can be run at a time, the hardware has to be compatible with both, and each OS runs natively, as if it had the whole system to itself.

Hardware compatibility becomes something of an issue; you either stunt the new OS, or you have to manually toggle CMOS settings to match the OS you're booting so that the new OS can run at full speed in AHCI mode.

You should get as good a result as (1), but have the same problems finding a new XP license.  Both OSs have to have their own licenses, which is costly, but the new OS can be a cheaper non-Pro edition.

There are some issues where each OS can trample on the other, if the C: partition of the inactive OS is visible.  I've dealt with such issues before, though not yet with Windows 8, and may have to adjust the specifics for that and/or if the free boot tools used previously have changed version and/or availability.

In essence, I use a boot manager to hide the “C:” OS primary partition that is not booted, so that each OS can see its own primary as “C:” and doesn’t mess up System Volume Information etc. on the other OS.  All data and other shared material is on logical volumes D:, E: and F: on the extended partition, leaving one free slot in the partition table for boot manager and/or Linux, etc.

4)  Virtualize one system within the other

In practice, that means virtualize the old within the new.  The reverse is possible in theory, but will work less well as the new OS won't get the resources and performance it needs, so your "new" PC would run like a dog in "new" mode.

This is something of a gamble, because not all applications will work when the old OS is virtualized.  Anything that needs direct access to hardware, and/or access to specific hardware, is likely to fail.  That shouldn't apply here, but may, especially if the application fiddles with hardware as part of attempts to prevent unlicensed use - another facet of how feeware sucks.

The choice of parent OS becomes complicated, i.e. from...

In all cases other than XP Mode within 7 Pro, you would need an XP license, as you'd do for solutions (1) or (3). 

If this approach works, it gives a more convenient result than (3), because you can at least run the old and new OSs and the apps within them at the same time.  Interaction between the two systems may be as separate as if they were two physically separate PCs on a LAN, unless the virtualization host can hide the seams, as XP Mode may do to some extent.

I've had some experience with 7's XP Mode, but as yet none at all with 8's Client Hyper-V virtualization host.  So while 8 Pro and Hyper-V are more attractive going forward (Hyper-V is a more robust and faster technology than that of XP Mode), they are more of a jump into the dark for me at present.

3 July 2013

Space Invaders

A common hassle is software that hogs disk space on C:, and often the comment is made that “modern hard drives are so big, it doesn’t matter”.  The worst offenders are junk software (hello, Apple) that not only chews disk space, but has no settings to path that store to somewhere else.  This is something that the MkLink command can fix, by leveraging a feature of NTFS.
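The MkLink fix follows a simple move-then-link pattern: relocate the folder to a roomier volume, then leave an NTFS junction at the old path (on Windows, `mklink /J` from an elevated prompt).  Here’s the same pattern sketched portably with POSIX symlinks; all paths here are invented:

```python
# The MkLink fix, sketched: relocate a space hog off C: and leave a
# link behind at the old path so the application never notices.
# On Windows the link step would be an NTFS junction, e.g.
#   mklink /J "C:\ProgramData\Hog" "D:\Hog"
# Below, POSIX symlinks illustrate the pattern; paths are invented.

import os, shutil, tempfile

root = tempfile.mkdtemp()
old = os.path.join(root, "C", "Hog")     # the app's hard-coded path
new = os.path.join(root, "D", "Hog")     # roomier volume elsewhere

os.makedirs(old)
with open(os.path.join(old, "cache.bin"), "w") as f:
    f.write("junk")

os.makedirs(os.path.dirname(new), exist_ok=True)
shutil.move(old, new)                    # 1: move the data off "C:"
os.symlink(new, old)                     # 2: link old path -> new home

# The app keeps using its old path, none the wiser:
with open(os.path.join(old, "cache.bin")) as f:
    print(f.read())                      # junk
```

The junk software keeps writing to the only path it knows, and the bytes land on the other volume.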

Another offender is Windows Installer, and the Windows update process, which also dump inactive files and “undo” junk in C:, with no facility to move it off.  I have not tried MkLink to address that problem.

Previously, these issues would annoy only those who don’t follow the duhfault “everything in one big doomed NTFS C:” approach to hard drive partitioning, but that is changing as SSDs mirror the practice of having a deliberately small C:, with large seldom-used material on hard drive volumes other than C:.  I’ve done this for years as a way to reduce head travel; SSDs do away with head travel, but their capacities are close to the small size I’d use for an “engine room” C: containing no data.  Update and Installer bloat really hurt on today’s sub-PC mobile devices, for which puny SSD capacities are all that is available.

An invaluable tool to chase down space invaders, is Windows Directory Statistics.  You can add a non-default “Statistics” action for Drive and Directory (File Folder) to run this, but this will misbehave when the target is the C: drive on Windows versions Vista, 7 or 8; post-XP changes will cause these to show System32 instead of all C: in this particular case.
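The numbers behind such a tool are easy to sketch: total the bytes under each immediate subdirectory of a root, then sort biggest-first.  A minimal Python version (no treemap graphics, just the ranking):

```python
# A minimal "space invader" hunter: bytes per immediate subdirectory,
# biggest first - the numbers behind a WinDirStat-style display.

import os

def subtree_bytes(path):
    """Total size of all files under path, recursively."""
    total = 0
    for dirpath, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass                     # vanished or unreadable; skip
    return total

def space_invaders(root):
    """(bytes, name) for each subdirectory of root, largest first."""
    subdirs = [e for e in os.scandir(root)
               if e.is_dir(follow_symlinks=False)]
    return sorted(((subtree_bytes(e.path), e.name) for e in subdirs),
                  reverse=True)

# e.g.:  for size, name in space_invaders("C:\\"): print(size, name)
```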

An Unusual Case Study

I did a site visit to troubleshoot file sharing issues on a serverless LAN of five Windows XP PCs.  All four of the “workstation” PCs in the office would show the same unusual error dialog at the point one attempted to navigate into the workgroup via the Entire Network, Microsoft Networks UI; the error referred to “insufficient storage”.

All four of the “workstation” PCs had well over 10G free on C:, but the seldom-used fifth “backup” PC in another room had zero k of free space on C:, and fixing that, fixed the problem seen everywhere else.

In this case, the problem was caused by an insanely large log file for the “security toolbar” component of AVG Free 2012.  Now I always avoid installing “toolbars”, which are (nearly?) always useless things inflicted to serve the software vendors’ interests rather than those of the user, but updates (obligatory for a resident av) may reassert them.

A lot can go wrong with log files.  They’re typically opened and closed for every write, and are written to often, so that the log survives up to the point of failure rather than losing pending writes in a crash.  There may be unexpected overhead imposed each time the file is opened and closed, which can make things even more brittle, and there’s often no sanity-checking on log file size, so a crash-log-repeat loop can get really ugly, real fast.
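The missing sanity check costs only a few lines.  A sketch of a crash-friendly logger that keeps the open-append-close-per-entry habit, but rolls over at a size cap instead of eating the volume (the cap value is arbitrary):

```python
# The sanity check the runaway log lacked: open-append-close per
# entry (so the log survives a crash), but roll over at a size cap
# instead of growing until the volume is full.

import os

def log_line(path, text, cap=1024 * 1024):
    try:
        if os.path.getsize(path) >= cap:
            os.replace(path, path + ".old")  # keep one previous log
    except OSError:
        pass                                 # no log file yet; fine
    with open(path, "a") as f:               # open, append, close
        f.write(text + "\n")
```

With a cap in place, the worst a crash-log-repeat loop can cost is twice the cap, not every free byte on C:.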

AVG’s prone to this sort of nonsense; in addition to large wads of update material, partially held in non-obvious places (MFAData, ProgramData subtrees, etc.) it can also spawn gigabytes in Dump files.  This is the second time I’ve seen an out of control AVG log file taking every available byte of space, and it’s annoying when this is due to an unwanted “toolbar” component that should not even be installed.  Without FAT32’s 4G maximum file size limit, this “text log file” grew to 5.1G, leaving zero space free on C:, so that “touching the network” caused needs that could not be met.

22 August 2012

Flash Offline Installers

Adobe and Java rival each other as the world’s most exploited software, forcing us to swallow vendor-pushed updates for fear of attack.  Flash, .PDF and Java are all edge-facing in a big way; Flash and Java from web sites, and .PDF both via web content and emaul attackments.  Many software applications auto-generate .PDF files sent as attachments with generic message text, so the recipient has no “Turing Test” opportunity to exclude malware-automated vs. legitware-automated .PDF coming from “someone they know”.

So with that in mind, you’d expect both vendors to be abjectly apologetic, going out of their way to make it easy for users to download and apply the constant stream of repairs for their defective code.  Which is more or less true for Java, but Adobe is another story – I found this blog post that sums it up best:


So the quest is on to find offline installers for Adobe Flash and Acrobat Reader, that are:

  • Really from Adobe, and not malware fear-bait fakes or trojanized versions
  • Actually up to date, and not older versions
  • Ideally, are free of unwanted by-catch (Google this, McAfee that, etc.)

With Acrobat Reader, this is fairly easy; you can use Adobe’s FTP site.  But that doesn’t help you with Flash.

I found some links from here to bypass the buggy installer...


Links from here appear to work...


These links were newest version (as at 22 August 2012)…





…while these were several versions old...





Expect these links to shift around, as Adobe plays the shell game to force us to use their wretched online installers, complete with shoveware that rewards them for our need to fix their junk.

21 August 2012

Live Writer vs. Blogger; Picture Uploads “Forbidden”

I’ve found it very useful to catch screen shots via the PrintScreen key to Irfan View, or using a camera with flash off and macro on if pre- or post-Windows, and pasting these into support emails.

So what should be easier than to do the same thing here, in blog posts? 

When I last looked at this, it was a fiddly affair, requiring a separate host for the picture files etc. but surely in this pro-Cloud age, those issues should have gone by now.

Apparently not; when trying to publish from Live Writer to this Blogger blog, this failed with “Forbidden”, and remained so until all pictures were removed.

So then I waded through Blogger’s online editor to pull up the pictures into the post.  That worked, for very low values of “work”; pictures were blurry, and the Blogger editor stripped spacing between paragraphs (spacing is always a bit of a sore point with HTML).  What a mess!

Behind the scenes, Blogger stores “blog” pictures in Picasa on the web.  So I logged into that and uploaded the pictures I wanted to use there - then in Writer, I pasted in the picture links from a page where I’d navigated to the pictures I wanted to use.  Still a cumbersome and messy procedure, but at least the text formatting wasn’t screwed up and the pictures look reasonable (as they should; all I want is to show small pictures at original size).

LibreOffice 3.6 “The Selected JRE Is Defective”

Having got past some initial installation hassles that required deleting my LibreOffice profile, I hit a problem with Java, while in Tools, Options.  Here’s how to test this if it happens to you; go to the MediaWiki section in Options…

If you have the problem, you will get this error dialog:

I “fixed” this by installing the 32-bit Java JRE 6 update 33, being the current most updated version of the fading Java 6 line.  It has to be 32-bit as LibreOffice is a 32-bit application (and fair enough), and it has to be Java JRE 6 rather than 7, because for practical purposes, LibreOffice 3.6 doesn’t work with the modern Java 7 update 5.

There’s a lot of “UI pressure” at the Oracle site to download and use JRE 7 rather than JRE 6, which I took to mean 7 is fairly mature and 6’s days are numbered, so I recently switched from the 6 update 31 I was using, to the current 7 update 5.

There’s also a lot of detail published on LibreOffice 3.6 and Java JRE 7, claiming that the new Java is supported, explaining why Oracle’s poor installation practices get in the way, and how one overcomes this.

Originally, the LibreOffice code base started as Star Office, which was acquired by Sun and used as a poster child for Java.  This continued with OpenOffice, but since the developers left after the Oracle takeover, the intention is to dump Java.  I’d be very glad if they did, because:

  • LibreOffice already loads faster than OpenOffice after reducing Java use
  • Java is edge-facing, frequently exploited and frequently updated
  • LibreOffice lags behind effective support for latest Java updates
  • Java installs tend to leave exploitable older versions in place

The last is a very old issue that still hasn’t gone away completely.