28 April 2017

Windows 10: Microsoft's Update Strategies


This post is an expansion of a comment to this article, and is a big-picture look at how Microsoft is changing the version-SP-patch model from Windows 8 through to 10.  This fits the usual "prototype sucks, revision rocks" tick-tock of Windows version popularity we saw with Windows 2000 to XP, Vista to 7, and 8 through to 10.  I hope we never again see a "free" upgrade offered only through the Microsoft Store as we did with Windows 8.1, which is still impossible to find safely as a complete offline installer to upgrade from the Windows 8 RTM ghetto.

The scalability problem


The reality is that there are too many code defects, too promptly exploited, for us to keep up with on a manual basis - and this mirrors what we see with malware.  Remember those detailed articles on individually-named viruses?  These days it's "Trojan.Downloader.A6FE320P"; there are too many to track with that level of attention, and our defenses have shifted from expert av vendor analysis to crowd-sourced reputation; more, but lower-quality, input opinions.

The process of defending against new threats, and exploiting new opportunities, is rather similar.  To defend, you compare pre- and post-infected states, accurately define the difference and code a removal tool, then roll that out as an av update.  To attack, you can compare pre- and post-patched states, work out the difference, and code to attack the unpatched state while that's still available in the wild.  You can also hold malware out of av's hands to delay defense, and you can find your own "zero day" exploit opportunities to attack, rather than the easier (automatable?) process of reverse-engineering patches.

The code quality problem isn't cured by removing commercial motivation; Linux shares the need for patches, and some horrible exploitable defects existed for years in Open Source code, despite the mythical mass of eyes examining the source.  That approach meant something when computer users were also computer coders, and the code was simple enough to read as source while the compiler slowly ground out the executable form that would crawl the globe from there via floppies - but it fails to scale.

It's really hard to assess code for fitness for use, when emergency vendor-pushed patches change it faster than you can test.  Before, you'd read the source, or test the closed-source product, then roll it out - and from then on, the defense was "no-one changes the code except me".  That's pretty much dead in the water; you are forced to trust your software vendor, from the top of the trust stack ("do their objectives align with ours?") to the bottom ("are they competent enough to do what they intend to do without exploitable mistakes?")

So under these circumstances, I like what Microsoft is doing, as it re-defines the software update and upgrade landscape from the traditional costly/destructive versions, safer but large SPs, and sprawling mass of individual repairs.

Removing the update catch-up pain


I've just done a couple of Win10 upgrades to build 1703, and considering this is an in-place version upgrade followed by a patch catch-up, it was far simpler than the XP or 7 process of running through 7+ online passes of WU, gathering loose patches like handfuls of gravel.  Run Setup.exe from the Media Creation Tool file set, then the latest Cumulative, all done while still offline; very little to catch up (thus low Internet use) online thereafter.

You do still have to do clean-up afterwards; check settings, especially for new features that default to vendor-friendly behavior that you may find objectionable.   Expect unplugged peripherals to have vanished, and these to re-install themselves when first plugged in; other programs may behave as if freshly-installed.  You may have to repeat many of these cleanups, first-use prompts etc. for each user account, which is a good reason to avoid multiple user accounts.  You will be pushed towards using the new "Modern"/"Metro" apps, OneDrive, and automatically logging into an online Microsoft Account.  I covered the specifics for build 1703 here.

How good can you expect updates to be?


The basic concept of trusting patch quality is flawed, because these patches are written via the same process that was used to create code so defective it needs to be patched - and patched so often that we can't keep up manually, and have to allow the vendor to shove out so many patches we can barely count them all, let alone write unique documentation for each.

But wait, it gets worse... the original code was developed at a leisurely pace, was installed on a "clean slate", and constitutes a single version.  In contrast, patches are developed at a rush, and rolled out onto installations that have diverged due to existing application and patch loads - how well do you expect that to work, given the original "clean slate" batting average?

Now consider all the permutations of the OS that need to be tested and patched - ignoring hardware, drivers and other added software.  In the XP and Windows 7 eras, you'd have 2 or 3 supported OS versions, each with 1 to 3 SP levels, and then a mesh of all the individual loose patches that may or may not be installed.  Each driver and application will also have its own mesh of such patches - leading to a factorial function of all this... and there's a reason why that function is called "shriek!"; even small input numbers spawn massive output values.
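
As a toy illustration of that growth - a quick Python sketch, nothing Windows-specific - even modest inputs to the shriek function produce silly numbers:

import math

# Factorial ("shriek") of the number of interacting versions/SPs/patches:
# even small inputs spawn massive output values.
for n in range(1, 13):
    print(f"{n:2d}! = {math.factorial(n):,}")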

Who will win the arms race?


That makes this an arms-race between repairs and defenses, and exploits and malware - but here too, it gets worse.  The repairs have to work on all systems, and can only be distributed through legitimately-permitted channels.  The attacks just have to work some of the time, regardless of collateral damage, and can be spread through botnets of unwilling "servers".

So if it's an arms-race, who will win?  Those with the most resources, so you can see why there may be confidence in the US and Chinese software and hardware industries, for starters.  But while the unit processing power of the human mind remains fixed, that of processors continues to grow; even if attention switches from more power per processor to mobility and convenience, more effective aggregation of processors (something we can't improve as fast for meshing of minds) means the eventual winners will more likely be the AIs.

So, what MS is doing, makes sense


1:  One version of Windows going forwards, even if that means losing putative upgrade revenue.  The shorter supported lifespan of SP levels of "the same" OS creates a loophole that will shrink the version load to 1, once 8.x and 7 age off the Internet (Ha!) or at least out of Microsoft's obligation to test and fix.

2:  Clean new version upgrades.  Remember the advice to "always install clean, never one version over another"?  We now do that with every new build, but the process is more robust; staged roll-out guided by telemetry, installed apps are re-installed afresh (look at install dates in Control Panel, Programs and Features after a new build), and more reliable Undo via Windows.old... so now we see Microsoft matching the common Linux practice of new versions twice a year, for free, with the equivalent of LTS versions for those preferring a slower pace of change.

3:  Cumulative updates.  Finally!  Yes, the downside is a lack of detailed control - you lose the ability to install all but one or two particular fixes - but the upside is a vastly simplified mesh of grain-of-sand version creep.  Catch-up is as easy as "install the newest Cumulative", and these are available as complete offline installers, without the madness of the "Store only" "free" 8-to-8.1 upgrade.

4:  Business vs. Consumer track - and you get to choose, if you stump up the cash for Pro (which increases MS's support load due to that second version track, so the money is somewhat earned).

5:  Separate Security vs. Feature updates; originally as "fixes only" vs. "fixes plus features".  This announcement suggests a 3rd "features only" form that can be released earlier for testing, so that when Patch Tuesday comes, sysadmins can decide (on the basis of their own testing) whether to install "fixes only" or "fixes plus features".

What's happening here is happening in the bigger picture of system to network to cloud... we are re-abstracting from details dictated by technological fault lines, towards what we understand as the human concepts by which to assess and decide.

So, instead of "we want only KBx, y and z because they are critical, but we don't need a, b and c because they're just features", we can just grab the bag off the shelf labeled "critical fixes" and leave the one called "features" on the shelf.

Will this work?  


Perhaps, and I hope so.  By reducing the version sprawl, we may get better quality fixes that work more reliably across all systems.  By generating revenue from sources other than one-off version sales, Microsoft can better align revenue with the unforeseen major cost of repairs.  And if the re-abstraction works well enough for us to no longer care about individual patches, we'll have moved up a level, as we are hoping to do via virtualization and cross-server fail-over to reach the cloud, where we no longer care what system our stuff is actually being stored or run on.

I think we'll always have to worry about "technical" details like that, but the expanding complexity means we will ironically have to trust AIs to manage it all, especially in the real-time arms-race between exploit and repair.  We stopped manually routing our dial-up calls to BBS phone numbers long ago; eventually we may do the same when having our storage and processing done across an arbitrary mass of other people's computers.

Do I have to drink the Kool-Aid?


No, you don't.  You can still (as at April 2017) keep your stuff on your own system, be clear on where that system ends and the Internet begins, and remain functional offline.  But you will have to defend yourself and your system against ever-pushier vendors and UI pressure to "just" let them push updates, or add this "recommended 3rd-party software", or pipe your data to an "online service" you thought was locally-running software, or have your data whisked off to their cloud service.

The last seems odd; it costs real money to run a cloud storage back-end, so why would every Tom, Dick and Harry (OS, system and component vendors, etc.) want to host your data for free?  Well, ask yourself; what's even better than having your software on users' systems where it can snoop and do stuff?  Having the user upload their stuff at their own cost, to your own servers, where you can snoop and fiddle unseen.  As to the cost of hosting this storage and processing, that can be farmed out to the lowest subcontractor, or if no pretense at legitimacy is needed, the cheapest botnet.

There will likely come a time when having an offline system will be viewed with the same suspicion as retaining firearms rather than entrusting your safety to law enforcement ("if you are innocent, you have nothing to fear") but frankly, I'm much more comfortable carrying my own computer than a gun.
 

27 April 2017

Blog Re-Edits?


Should one re-edit existing blog posts?  If so, should these be brought forward in the date order to the date of last edit?

The answer may depend on why you blog - if it's supposed to be date-sensitive, like a diary, then probably not.  But if you're using a blog as the quick-and-dirty way to create a web site, then as you add content, you may want to link to the new content from your "older" posts.  Such is the case with the "waking hour" content that kicked off from today's "layers of abstraction" post...

Layers of Abstraction


This could get long, and if it does, I'll break it up into separate posts, with this one being on what I mean by a "layer of abstraction".  Later I'll go onto whether these are artificial contrivances of convenience or hard entities, the nature of their boundaries, the truth or otherwise of "as above, so below", and whether these differences can be tracked back to a small number of meta-level inputs such as the "good or evil" duality.  I'll pepper all this with examples, each of which would be a field of its own, but here I'm interested in whatever common essence can be extracted.

-oOo-


We don't experience stuff as a continuum, but as mind-sized chunks that I'll refer to as "layers of abstraction".  Below and above these are things that may affect that layer, but which cannot be fully described or explained within that layer alone - bringing Goedel's Incompleteness Theorem to mind.

Example: The Evolving Infosphere


At the analog volts and seconds level, it's all about transistor design and the behavior of the electronic shells of different atoms, which in turn drills down to the unique behaviors of small integers - but we don't consider that depth; it's all about what shape and size of blocks of tainted materials to join together to make effective transistors and other components.

At the more familiar digital level, we abstract out all of the analog stuff; nuances of time and voltage are simplified down to x volts = off and y volts = on, and time is pixellated into clock pulses.  But this layer of abstraction is supported by the analog layer only as long as it successfully slews voltage between the "on" and "off" levels, within the timing of a clock pulse - when this fails, the digital layer of abstraction breaks down in ways that make no sense within digital logic.

We have then aggregated transistors into chips, so we no longer have to think about individual transistors; chips onto circuit boards so we no longer think about chips, boards into systems, systems into networks, and networks into The Internet.  When we integrate ourselves and our code as actors within this Internet, we can consider the whole as the infosphere, with its own dynamics of function that may emerge differently from the raw inputs of original human intentions, etc. [discuss: 100 marks]

Building the Infosphere


Our individual minds can only handle a certain volume of complexity, and scaling up by pooling our minds only takes us so far - it also adds extra wrinkles, such as imperfect communication between these minds, and differences within such minds that cause them to misunderstand each other, differ in objectives, etc.

So, as we've grown the infosphere, we've done so by attempting to simplify the previous abstraction layer to the point we can take it for granted, then build up the next layer.  The internal components of a computer system operate well enough at realistic clock speeds that we can ignore transistors and whether they're loose, within particular chips, or whether those chips are on the same board.

When networking works well enough, we can ignore which system a particular file is on - all systems can be blurred together to be considered as "the network".  Because the Internet and networks are built from the same TCP/IP materials, it's tempting to treat them as the same, ignoring a fundamental difference to our cost; entities on a network may trust each other, but those on the Internet should not!

A set of technologies allows us to melt the edges between systems and networks further; communications tolerably as fast and cheap as internal data flows and storage, effective virtualization of OSs as if they were applications, tolerably effective isolation via encryption, tolerably seamless load distribution and failover between systems and networks (where already, those two words approach interchangeability).  And so we have "the Cloud", which segues undramatically into AI and The Singularity. [discuss: 10 marks - it's really not that big a deal]

Other Examples


Other examples of layers of abstraction are visible light within the full electromagnetic spectrum, rhythm and pitch within the range of sound frequencies, and the micro/macro/astro-scopic scales.  All of these are based on the limited focus of our senses, which we've artificially extended.

Another example; chemistry, with nuclear chemistry below and biochemistry above.  This is a tricky one, because the floor of this abstraction layer seems hard and natural (and interesting - we'll likely come back to that later if we further consider the uniqueness of small integers) while the ceiling is more a matter of our mental limitations, plus the chaotic way that new complexity emerges (and more on that later, too!).

Consider written language; at its base is a combination of two symbolic layers of characters, and the text that can be constructed from these.  It would be interesting to compare the information efficiency (simplistic metric; .zip archive size?) of a rich character set vs. longer words of simpler characters, which is similar to the RISC vs. CISC arguments of the 1980s.  That processor debate appeared to be QED in favor of Intel's CISC, but is now re-emerging with the rise of ARM at a time when our needs and capabilities have changed.
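
If you want to play with that simplistic metric, here's a rough Python sketch; the two strings are hypothetical placeholders, to be replaced with equivalent passages written in a rich character set and in longer words of simpler characters:

import zlib

def compressed_size(text):
    # Crude information-efficiency metric: byte count after DEFLATE
    # compression, roughly what a .zip archive of the text would measure.
    return len(zlib.compress(text.encode("utf-8")))

# Hypothetical placeholders - paste in equivalent real-world passages.
text_rich_charset = "(equivalent passage in a rich character set)"
text_simple_charset = "(same passage in longer words of simpler characters)"

for name, text in (("rich", text_rich_charset), ("simple", text_simple_charset)):
    raw = len(text.encode("utf-8"))
    print(f"{name}: {raw} raw bytes, {compressed_size(text)} compressed bytes")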

Number theory examples abound, and this is probably the best place to test predictions of closure (Goedel) and emergence; real numbers, rationals, integers and so on, and the nature of the "infinities" as expressed within these number systems.


20 April 2017

Windows 10 Creators Edition 1703


Updates for Windows 10 are streamlined into monthly cumulative features and fixes, a smaller version of just fixes, and periodic new builds.  These new builds are similar to new versions of Windows, and come out about as often as new versions of popular Linux distros; they don't appear in the Catalog, but are provided as new OS installers instead.


Installing Build 1703

 

Since the GWX offer, we’ve become more comfortable with installing new versions of Windows over existing installations, something we’d have usually avoided in the past.  It’s still a brittle process, sometimes leaving the system tied up behind a black screen for hours on end when things go wrong.

If yours is one of those systems that gets lost in “do not switch off your computer” or black-screen space, then it’s best to install the upgrade more formally.

First, use the Media Creation Tool to create an .iso of the new Windows 10 installation disc for your system’s edition of Windows.  Do not click the in-your-face “Update now” button; scroll down to “Using the tool to create installation media…”, as what you want is the .iso to build a bootable DVD.

The tool is a small download, which will do the real work when you run it.  It will default to building an installer for your system, so if you want something different, UNcheck the relevant check box and then choose your edition and bits to taste.  The same installer will work for both Home and Pro editions, but if you have Single Language (i.e. you’d GWX’d from Windows 8.x SL) then you will need to select the specific edition for that, else your Activation key won’t work.

The tool will take ages to first download the material, then build it into the .iso, then clean up afterwards.  As downloaders go, it’s not slow; it’s just a lot of material!  Once downloaded, you can copy the contents of the .iso (either directly via 7-Zip, or after making the disc) to a permanent “source” subtree on the system to be upgraded.  You can also add the most recent Cumulative update from the Catalog web site (the link should open to April 2017 Cumulative; navigate from there if reading this in later months).

Second, clean up your C: partition via Disk Cleanup, and make a partition image backup of this as your Undo, should things go pear-shaped.  If your hard drive is set up as “one big doomed NTFS C:” then this will be a capacity challenge, as will GPT partitioning, which reduces your choice of tools and undermines confidence that a “restore” will work.  I use Boot It New Generation (BING), an old free product from these guys; their newer Boot It Bare Metal is a lot naggier and less useful in free form, but does work with GPT partitioning.  It’s also a good idea to exclude malware and do other cleanups before you make this “undo” partition image.

Third, get off the Internet and all networks, stay off the Internet throughout the installation process, and when you see “check for updates” during the setup process, UNcheck that.  This will limit the installation to what is in your pre-downloaded source material, avoiding update-of-the-week surprises and the tar-pit effect of flaky Internet access and performance.

Fourth, run the Setup.exe for the new build, from an always-available location, e.g. a local hard drive volume other than C:; it can be a long head-travel away, as long as it is always present.  That way, any future references to the installation source can resolve properly.

Fifth, after the new build is installed, run the pre-downloaded Cumulative if you have that, then check settings etc. before going online for the first time, and doing online updates.

Checking for Lost Settings

 

After a successful “feature” update, there’s likely to be new features set up with unsafe duhfault settings, so you need to check Settings in general, and Privacy in particular.  Expect to see new additions allowed to use the camera, mic, and run as background processes; fix to taste.

However, there are some unexpected lost settings, especially as Microsoft pushed their OneDrive cloud storage service.  What’s better than having your code on users’ systems, where it can snoop their stuff?  Having users spend their communications dime on sending you their stuff, so you can play with it unseen on your servers… hence so many vendors pushing cloud storage offers.

This article shows the new install-time privacy summary options, but what this doesn’t tell you is that you’ll not only see this when updating an existing Windows 10 installation (at least as done by running the .iso file set’s Setup.exe from within Windows), but the settings will ignore what you’d previously set, and start off with “everything on” duhfaults.  So, watch that screen and make sure you scroll it down to check all settings anew.

Windows 10 may turn off System Protection by default, and installing the new Build 1703 disabled this although I’d previously enabled the setting.  My systems use MBR partitioning with shell folders relocated off NTFS C: to FAT32 logical volumes on an extended partition, and maybe this influences how Windows 10 treats this setting; the same may apply to mobile systems with puny flash storage that have to use mSD cards to extend “internal” storage in a similar way.  With System Protection disabled, you’ll lose not only Previous Versions of files stored on C:, but also System Restore.

If you’d turned off Live Tiles, you will find all of them turned back on after installing Build 1703.  You should also check the registry setting to kill Live Tiles (i.e. stop external sources from squirting content directly into “your” desktop UI), in case that was cleared:

[HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\PushNotifications]
"NoTileApplicationNotification"=dword:00000001

Settings to curb OneDrive are likely to be lost, so check those, as well as adding a setting to reduce UI spam that pushes the cloud storage service.  Expect unwanted UI popups to “just” set up OneDrive, some days after the 1703 upgrade; a fairly common vendor tactic that aims to catch the user after their tech has walked away after doing the upgrade.

[HKEY_CLASSES_ROOT\CLSID\{018D5C66-4533-4307-9B53-224DE2ED1FE6}]
"System.IsPinnedToNameSpaceTree"=dword:00000000

[HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{018D5C66-4533-4307-9B53-224DE2ED1FE6}]
"System.IsPinnedToNameSpaceTree"=dword:00000000

Yep, both of the above settings were enabled, after 1703, which re-enabled OneDrive integration into the shell.  If that’s not what you want, you need to re-assert those settings.

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Notifications\Settings\Microsoft.Explorer.Notification.{B2E2D052-B051-D751-3E74-F8D4290BD1BC}]
"Enabled"=dword:00000001

The above setting blocks OneDrive spam delivered as a “sync notification”, and is worth asserting, though you’ll prolly get ongoing UI pressure to “just” sign up a Microsoft online account and/or use OneDrive.  While you’re there, you may want to check these…

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Notifications\Settings]
"NOC_GLOBAL_SETTING_ALLOW_CRITICAL_TOASTS_ABOVE_LOCK"=dword:00000000
"NOC_GLOBAL_SETTING_ALLOW_TOASTS_ABOVE_LOCK"=dword:00000000

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Notifications\Settings\Windows.SystemToast.AutoPlay]
"Enabled"=dword:00000000

…which reduce info exposure on the locked side of the “lock” screen, and reduce AutoPlay risks when arbitrary external storage is detected by the shell.  For the latter “hello, Stuxnet” malware risk, I still use…

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000df
"NoDriveAutoRun"=hex:ff,ff,ff,03

…to disable AutoRun and AutoPlay on the basis of both device type and drive letter.  The latter setting is a bit field for drive letters, and you can edit it to enable particular letters only.
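
To work out that bit field for your own mix of drive letters, here’s a quick Python sketch (assuming the usual mapping of bit 0 = A:, bit 1 = B:, up to bit 25 = Z:, where a set bit disables AutoRun for that letter):

def drive_mask(letters):
    # Set one bit per drive letter to disable: bit 0 = A:, bit 1 = B:, etc.
    mask = 0
    for letter in letters.upper():
        mask |= 1 << (ord(letter) - ord("A"))
    return mask

# All 26 letters gives 0x3ffffff, stored above as the little-endian
# REG_BINARY bytes ff,ff,ff,03.
print(hex(drive_mask("ABCDEFGHIJKLMNOPQRSTUVWXYZ")))
# Example: disable AutoRun everywhere except C:
print(hex(drive_mask("ABDEFGHIJKLMNOPQRSTUVWXYZ")))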

If you prefer to disable Windows Scripting Host, you may find some of the settings will have been lost after Build 1703, so check these…

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Ole]
"EnableDCOM"="Y"
"EnableRemoteLaunch"="N"
"EnableRemoteConnect"="N"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Script Host\Settings]
"ActiveDebugging"="1"
"UseWINSAFER"="1"
"Enabled"="0"
"IgnoreUserSettings"="0"

…as I found the last two were lost after 1703.
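
If you’d like a quicker way to spot that sort of drift after future builds, a rough read-only sketch along these lines (Python’s winreg again; reading HKLM doesn’t need elevation, though re-asserting the values would) could help:

import winreg

# Expected REG_SZ values, per the listings above.
EXPECTED = {
    (r"SOFTWARE\Microsoft\Ole", "EnableDCOM"): "Y",
    (r"SOFTWARE\Microsoft\Ole", "EnableRemoteLaunch"): "N",
    (r"SOFTWARE\Microsoft\Ole", "EnableRemoteConnect"): "N",
    (r"SOFTWARE\Microsoft\Windows Script Host\Settings", "Enabled"): "0",
    (r"SOFTWARE\Microsoft\Windows Script Host\Settings", "IgnoreUserSettings"): "0",
}

for (path, name), wanted in EXPECTED.items():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            current, _ = winreg.QueryValueEx(key, name)
    except FileNotFoundError:
        current = None
    status = "OK" if current == wanted else "DRIFTED"
    print(f"{status}: {path}\\{name} = {current!r} (expected {wanted!r})")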

There are prolly more side-effects and collateral damage that I’ve missed; feel free to add such tips via Comments!