This post is an expansion of a comment to this article, and is a big-picture look at how Microsoft is changing the version-SP-patch model through Windows 8 to 10. This fits the usual "prototype sucks, revision rocks" tick-tock of Windows version popularity we saw with Windows 2000 to XP, Vista to 7, and 8 through to 10. I hope we never again see a "free" upgrade offered only through the Microsoft Store as we did with Windows 8.1, which is still impossible to find safely as a complete offline installer with which to escape the Windows 8 RTM ghetto.
The scalability problem
The reality is that there are too many code defects, exploited too promptly, for us to keep up with manually - and this mirrors what we see with malware. Remember those detailed articles on individually-named viruses? These days it's "Trojan.Downloader.A6FE320P"; there are too many to give each that level of attention, and our defenses have changed from expert av vendor analysis to crowd-sourced reputation - more input opinions, but of lower quality.
The process of defending against new threats, and that of exploiting new opportunities, are rather similar. To defend, you compare pre- and post-infected states, accurately define the difference, code a removal tool, then roll that out as an av update. To attack, you can compare pre- and post-patched states, work out the difference, and code to attack the unpatched state while it's still available in the wild. You can also keep malware out of av vendors' hands to delay defense, and you can find your own "zero day" exploit opportunities to attack, rather than the easier (automatable?) process of reverse-engineering patches.
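To make that "compare pre- and post states" step concrete, here is a minimal sketch of my own (not taken from any vendor's or attacker's toolchain) that hashes every file under two snapshot directories and reports what changed - the kind of first pass either side might automate before digging into the actual binary differences. The snapshot directory names are hypothetical.

```python
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map each file's path (relative to root) to a SHA-256 of its contents."""
    hashes = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            hashes[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

def diff_states(before_dir, after_dir):
    """Return files added, removed, or modified between two snapshots."""
    before, after = hash_tree(before_dir), hash_tree(after_dir)
    added    = sorted(set(after) - set(before))
    removed  = sorted(set(before) - set(after))
    modified = sorted(f for f in set(before) & set(after) if before[f] != after[f])
    return added, removed, modified

if __name__ == "__main__":
    # Hypothetical snapshot directories - e.g. copies of a system folder
    # taken before and after an infection, or before and after a patch.
    added, removed, modified = diff_states("snapshot_before", "snapshot_after")
    print("Added:", added)
    print("Removed:", removed)
    print("Modified:", modified)
```

The list of modified files is only the starting point, of course; the real work is understanding what changed inside them.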
The code quality problem isn't cured by removing commercial motivation; Linux shares the need for patches, and some horrible exploitable defects existed for years in Open Source code, despite the mythical mass of eyes to examine the source. That meant something when computer users were also computer coders, and the code was simple enough to read as source while the compiler slowly ground out the executable form that would crawl the globe from there via floppies - but that model fails to scale!
It's really hard to assess code for fitness for use when emergency vendor-pushed patches change it faster than you can test. Before, you'd read the source, or test the closed-source product, then roll it out - and from then on, the defense was "no-one changes the code except me". That's pretty much dead in the water; you are forced to trust your software vendor, from the top of the trust stack ("do their objectives align with ours?") to the bottom ("are they competent enough to do what they intend to do without exploitable mistakes?").
So under these circumstances, I like what Microsoft is doing, as it re-defines the software update and upgrade landscape away from the traditional model of costly, destructive version upgrades, safer but large SPs, and a sprawling mass of individual fixes.
Removing the update catch-up pain
I've just done a couple of Win10 upgrades to build 1703, and considering this is an in-place version upgrade followed by a patch catch-up, it was far simpler than the XP or 7 process of running through 7+ online passes of WU and gathering loose patches like handfuls of gravel. Setup.exe from the Media Creation Tool file set, then the latest Cumulative, all done while still offline; very little to catch up online thereafter (thus low Internet use).
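For the record, the Setup.exe part of that can be scripted. The sketch below is my own wrapper, with a hypothetical media path, using what I understand to be the documented Windows Setup command-line switches (/auto upgrade, /quiet, /noreboot, /dynamicupdate disable); verify them against Microsoft's Setup documentation before relying on them.

```python
import subprocess
from pathlib import Path

# Hypothetical location of the extracted Media Creation Tool file set.
MEDIA_ROOT = Path(r"D:\Win10_1703_Media")

def in_place_upgrade(media_root: Path) -> int:
    """Kick off a quiet in-place upgrade from an offline media file set.

    /dynamicupdate disable keeps the upgrade itself offline, so the
    patch catch-up (the latest Cumulative) happens as a separate step.
    """
    cmd = [
        str(media_root / "setup.exe"),
        "/auto", "upgrade",
        "/quiet",
        "/noreboot",
        "/dynamicupdate", "disable",
    ]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    print("Setup.exe exit code:", in_place_upgrade(MEDIA_ROOT))
```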
You do still have to do clean-up afterwards; check settings, especially for new features that default to vendor-friendly behavior you may find objectionable. Expect unplugged peripherals to have vanished and to re-install themselves when first plugged in; other programs may behave as if freshly installed. You may have to repeat many of these clean-ups, first-use prompts etc. for each user account, which is a good reason to avoid multiple user accounts. You will be pushed towards using the new "Modern"/"Metro" apps, OneDrive, and automatically logging into an online Microsoft Account. I covered the specifics for build 1703 here.
How good can you expect updates to be?
The basic concept of trusting patch quality is flawed, because these patches are written via the same process that was used to create code so defective it needs to be patched - and patched so often that we can't keep up manually, and have to let the vendor shove out so many patches we can barely count them all, let alone write unique documentation for each.
But wait, it gets worse... the original code was developed at a leisurely pace, was installed on a "clean slate", and constitutes a single version. In contrast, patches are developed in a rush, and rolled out onto installations that have diverged due to existing application and patch loads - how well do you expect that to work, given the original "clean slate" batting average?
Now consider all the permutations of the OS that need to be tested and patched - ignoring hardware, drivers and other added software. In the XP and Windows 7 eras, you'd have 2 or 3 supported OS versions, each with 1 to 3 SP levels, and then a mesh of all the individual loose patches that may or may not be installed. Each driver and application will also have its own mesh of such patches - leading to a factorial function of all this... and there's a reason why that function is called "shriek!"; even small input numbers spawn massive output values.
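To put some rough, made-up numbers on that explosion (the counts below are illustrative, not a measurement of any real patch catalogue):

```python
from math import factorial

# Made-up but plausible counts for the XP / Windows 7 era.
os_versions   = 3    # supported OS versions
sp_levels     = 3    # SP levels per version
loose_patches = 20   # optional patches that may or may not be installed

# Each loose patch is independently present or absent -> 2**n subsets.
configurations = os_versions * sp_levels * 2 ** loose_patches
print(f"Distinct patch states to test: {configurations:,}")   # 9,437,184

# And if the *order* of installation mattered, the "shriek" takes over:
print(f"20! = {factorial(20):,}")   # 2,432,902,008,176,640,000
```

Nine million-odd states from twenty optional patches, before you even count drivers and applications - no test lab covers that.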
Who will win the arms race?
That makes this an arms race between repairs and defenses on one side, and exploits and malware on the other - but here too, it gets worse. The repairs have to work on all systems, and can only be distributed through legitimately-permitted channels. The attacks just have to work some of the time, regardless of collateral damage, and can be spread through botnets of unwilling "servers".
So if it's an arms race, who will win? Those with the most resources, so you can see why there may be confidence in the US and Chinese software and hardware industries, for starters. But while the unit processing power of the human mind remains fixed, that of processors continues to grow; even if attention switches from more power per processor to mobility and convenience, ever more effective aggregation of processors (something we can't improve as quickly for the meshing of minds) means the eventual winners will more likely be the AIs.
So, what MS is doing makes sense
1: One version of Windows going forwards, even if that means losing putative upgrade revenue. The shorter supported lifespan of SP levels of "the same" OS creates a loophole that will shrink the version load to 1, once 8.x and 7 age off the Internet (Ha!) or at least out of Microsoft's obligation to test and fix.
2: Clean new version upgrades. Remember the advice to "always install clean, never one version over another"? We now do that with every new build, but the process is more robust: staged roll-out guided by telemetry, installed applications re-installed afresh (look at the install dates in Control Panel, Programs and Features after a new build), and a more reliable Undo via Windows.old... so now we see Microsoft matching the common Linux practice of new versions twice a year, for free, with the equivalent of LTS versions for those preferring a slower pace of change.
3: Cumulative updates. Finally! Yes, the downside is a lack of detailed control - the ability to install all but one or two particular fixes - but the upside is a vastly simplified mesh of grain-of-sand version creep. Catch-up is as easy as "install the newest Cumulative", and these are available as complete offline installers (see the first sketch after this list), without the madness of the "Store only" "free" 8-to-8.1 upgrade.
4: Business vs. Consumer track - and you get to choose, if you stump up the cash for Pro (which increases MS's support load due to that second version track, so the money is somewhat earned).
5: Separate Security vs. Feature updates; originally as "fixes only" vs. "fixes plus features". This announcement suggests a 3rd "features only" form that can be released earlier for testing, so that when Patch Tuesday comes, sysadmins can decide (on the basis of their own testing) whether to install "fixes only" or "fixes plus features" - see the second sketch after this list for how that deferral choice is expressed in policy.
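On the "install the newest Cumulative" point in item 3, here's a minimal sketch of applying a downloaded cumulative .msu offline via the stock wusa.exe installer. The file name is a hypothetical example (grab the current Cumulative for your build from the Microsoft Update Catalog); /quiet and /norestart are standard wusa switches.

```python
import subprocess
from pathlib import Path

# Hypothetical file name - point this at the newest Cumulative you downloaded.
CUMULATIVE = Path(r"C:\Updates\windows10.0-kbNNNNNNN-x64.msu")

def install_cumulative(msu: Path) -> int:
    """Apply a cumulative update silently with the built-in wusa.exe."""
    cmd = ["wusa.exe", str(msu), "/quiet", "/norestart"]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    code = install_cumulative(CUMULATIVE)
    # By Windows convention, 3010 means "success, reboot required".
    print("wusa.exe exit code:", code)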
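```

And on item 5: that "fixes only" vs. "fixes plus features" choice surfaces as the Windows Update for Business deferral policies. The sketch below writes a feature-update deferral to the usual Group Policy registry location; the value names are as I understand them for this era of Windows 10, so treat it as an illustration and verify against current documentation before using it (it also needs an elevated prompt).

```python
import winreg

# Group Policy location used by Windows Update for Business settings.
WU_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

def defer_feature_updates(days: int = 90) -> None:
    """Defer feature updates while leaving quality (security) fixes flowing.

    Requires administrator rights; value names follow the Windows Update
    for Business policies of the Windows 10 1607/1703 era - verify first.
    """
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, WU_POLICY_KEY,
                            0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "DeferFeatureUpdates", 0, winreg.REG_DWORD, 1)
        winreg.SetValueEx(key, "DeferFeatureUpdatesPeriodInDays", 0,
                          winreg.REG_DWORD, days)

if __name__ == "__main__":
    defer_feature_updates(90)
    print("Feature updates deferred for 90 days; security fixes unaffected.")
```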
What's happening here is happening in the bigger picture too, from system to network to cloud... we are re-abstracting away from details dictated by technological fault lines, towards the human-level concepts by which we assess and decide.
So, instead of "we want only KBx, y and z because they are critical, but we don't need a, b and c because they're just features", we can just grab the bag off the shelf labeled "critical fixes" and leave the one called "features" on the shelf.
Will this work?
Perhaps, and I hope so. By reducing the version sprawl, we may get better quality fixes that work more reliably across all systems. By generating revenue from sources other than one-off version sales, Microsoft can better align revenue with the unforeseen major cost of repairs. And if the re-abstraction works well enough for us to no longer care about individual patches, we'll have moved up a level, as we are hoping to do via virtualization and cross-server fail-over to reach the cloud, where we no longer care what system our stuff is actually being stored or run on.
I think we'll always have to worry about "technical" details like that, but the expanding complexity means we will ironically have to trust AIs to manage it all, especially in the real-time arms race between exploit and repair. We stopped manually routing our dial-up calls to BBS phone numbers long ago; eventually we may do the same when having our storage and processing done across an arbitrary mass of other people's computers.
Do I have to drink the Kool-Aid?
No, you don't. You can still (as at April 2017) keep your stuff on your own system, be clear on where that system ends and the Internet begins, and remain functional offline. But you will have to defend yourself and your system against ever-pushier vendors and UI pressure to "just" let them push updates, or add this "recommended 3rd-party software", or pipe your data to an "online service" you thought was locally-running software, or have your data whisked off to their cloud service.
The last seems odd; it costs real money to run a cloud storage back-end, so why would every Tom, Dick and Harry (OS, system and component vendors, etc.) want to host your data for free? Well, ask yourself: what's even better than having your software on users' systems, where it can snoop and do stuff? Having the users upload their stuff at their own cost, to your own servers, where you can snoop and fiddle unseen. As to the cost of hosting this storage and processing, that can be farmed out to the lowest subcontractor, or if no pretense at legitimacy is needed, the cheapest botnet.
There will likely come a time when having an offline system will be viewed with the same suspicion as retaining firearms rather than entrusting your safety to law enforcement ("if you are innocent, you have nothing to fear") but frankly, I'm much more comfortable carrying my own computer than a gun.