14 November 2005

So Long, Sony

Sony destroyed the last advantages legitimate content distribution had left.

Legitimate content usually costs more, and is often more restrictive about how it can be used; DRM is, after all, an artificial attempt to destroy the natural advantages of the digital age.

Plus it's more difficult to get. The media companies could have beaten peer-to-peer networks at their own game, by saying: "Here's the one place in the world where you can get direct access to every work by our artists that has ever been released, directly from a trusted, efficient server". But no, it's still "we own the rights to everything, but most of what you want we will not supply because we have decided to delete that from our catalog as not being financially worth our while".

So the only positive things left to say are: "Get your material from us, so you know you won't get attacked by viruses", or "Buy it from us, because it's the right thing to do".

Yet no virus writer has been able to escalate risk as severely as Sony has: something that is not even supposed to be a "computer disk" (thus representing the lowest expected risk possible) actually plants an open-for-exploit rootkit on the PC (which is about as high a risk as possible) - and Sony had the sheer arrogance to deliberately build that into mass-manufactured goods that folks pay for in good faith.

We aren't talking about some bad guy forging product, or intercepting and tampering with it, e.g. by injecting poison into over-the-counter headache pills and putting them back on the shelf. This is actually built into the product at the factory. We've never seen that level of evil before, when it comes to maliciously exploiting users via Social Engineering.

And Sony has no remorse; the response has been "What's the big deal? Most consumers wouldn't know what a 'rootkit' is, anyway". Well, folks who bought poisoned headache pills may not understand the biochemistry involved, but they do feel the pain.

So - if you want to listen to music without risking exposure to malware, buying a legitimate CD may now be the last thing you want to do. And when it comes to doing the "right thing", does it do to reward the most evil exploitation of trust consumerland has seen yet?

"Sony Is Not An IT Company"

I've watched some IT folks attempt to defend what Sony has done in various ways, and one of these is what I call the "forgive them, for they know not what they do" argument. But there are no good results down that road - either Sony is indeed ignorant of IT principles (in which case, should they be considered fit to stealth code into "data" products?), or they know exactly what they are doing, and have thus clearly demonstrated they are unfit to be trusted.

Sony releases all manner of things under their brand, including IT products. If they are happy to leverage the brand association, then they can't disassociate themselves from the fallout.

Would you buy or resell Sony DVD writers and other optical drives? Would you trust the software bundled with Sony digital cameras, media playing devices, etc.?

Trusted Computing can fail at any one of several layers, and it's no good worrying about the deepest of these (program code, hardware, etc.) if the topmost layer is blown away. Trusted Computing is going to be designed and built by entities who have proven they cannot be trusted, out of materials (code, even hardware) that are notoriously prone to insane behavior.

When audio CDs drop rootkits on PCs, "documents" auto-run macros, and JPEG image files run as raw code through some deep code bug, you don't have to look far to understand why we are scratching at the door to escape whenever we hear the term "Trusted Computing".

Moral: Never code anything bigger than your own head.

Calling All Activists

This Sony rootkit issue is a big storm in our little teacup of Information Technology, but not one client I've spoken to has even heard about it - and this includes several folks who are normally quite socio-politically aware.

All too often, the activists leave the geeky stuff to us - and as techno-geeks, we leave the politics to the activists, politicians and lawyers. Then when they do try to legislate or regulate our technological world, we smugly point out how poorly they understand this world, and thus imply they are unfit to do so. Result: The engine's running full speed, but there's no hand on the rudder - is it any wonder the ship gets hijacked?

All over the world, societies and nations have balanced the need for creditors to recover debts with the rights of the indebted. We generally do not allow creditors to send their goons to smash into a debtor's house and search it for whatever they accuse the debtor of having appropriated.

IT corporations are used to writing their own laws (EULAs and warranties basically exist to trump common law principles), and what Sony has done is a natural extension of this - though the sheer scale and arrogance beggar belief. Sony hasn't released a virus that infects existing content; it has built the malware into the product itself, and that malware runs roughshod over whatever laws might apply wherever the content goes.

And no-one has questioned their right to do this, instead muttering only about the methods involved. But step back and look at the big picture; Sony is "defending" a US$20 transaction, at the cost of your computer installation that's worth... what? The value's potentially unbounded; the PC may be worth US$500, but what you do on it could be worth far more; Sony doesn't care about the details, and would expect to be absolved of responsibility for any consequent damage.

The rights you save, may be your own.

25 October 2005

Comments Clean-up

My bad; I didn't set text verification in this blog's Settings, Comments. So after a while, I started getting a lot of attention from the wrong side of the Turing Test; bots, or humans that are bot-like enough for me to feel no guilt in deleting their contributions.

I do suggest that setting should be on by default, in keeping with the "Safe by Default" concept!

14 October 2005

What I Did In My Holidays

Once again, long time no blog, so perhaps I should explain what I've been up to.

I'm an MVP for "Windows Client", i.e. the non-server Windows most of us use on a regular basis, and my usual turf is various public Microsoft and usenet newsgroups that cover versions of Windows from Windows 95 through to XP. And I've been pretty scarce there for a while too; there are two reasons for this.

Firstly, these "main" forums have grown too large to keep up with, especially Windows XP General. It takes so long to process a gulp of posts that by the time I am done, and pull down the next gulp, large chunks of posts have aged off the news server and continuity is lost. I find that newsgroups don't scale well beyond around 500 posts a day; the global village becomes a city, with inner-city decay. So the future may lie in smaller forums such as those at www.aumha.org - the web interface scales even more poorly than newsgroups, but is fine for up to 50 posts a day per group.

Secondly, it was really getting quite depressing, having to tell posters again and again: "You need to do X, but you can't, because to do that you need Y, and you can't get Y because Z". My mindset was getting quite negative, and eventually I thought; "instead of telling these poor folks why they are screwed, why not devote some energy into changing Z and Y so that they will be less screwed in the future?" - and so that is what I did.

The Bart Project

The main value for Y has been a maintenance OS (mOS) for XP, and the Z has been Microsoft's failure to provide this. So I flighted the idea of a mOS elist among my MVP colleagues, and the next thing I knew, Susan Bradley had it up and running, along with a private Wiki!

My initial goal was to interact with Microsoft folks with an eye to freeing up access to MS Windows PE, which is currently moribund due to licensing constraints (with hardly anyone able to use it, who is going to develop for it?). What I discovered is that Bart PE has such a head start, fuelled by a strong groundswell of support that extends right up to software vendors, that the "real" Windows PE has become almost irrelevant as a maintenance OS.

So instead, I started pulling together what was available out there to create a useful mOS toolset. There are plenty of folks doing this already, as a Google(Bart PE) will tell you; my objectives were close but different enough to be worth the effort of starting from scratch.

The MVP Summit

I've been away for a while, attending the annual Global MVP Summit in Seattle. This was my first visit to the American continents, and it was great to finally meet so many folks that I'd been corresponding with for so many years! It was also interesting to see the Redmond campus and meet the folks who actually "make Windows", and have an opportunity to bend their ear a bit on various things I see as problems that need to be addressed.

I saw enough to be impressed with Microsoft's strengths, such as the ability to invest in the "long view" that creates an effective development environment (comfortable buildings surrounded by tranquil forest, great on-site catering... why would one ever want to go home?) as well as spot weaknesses that answer questions like "how can folks this smart make these stupid mistakes?". My first task on returning home is to follow up with feedback on such issues, as well as continuing discussions started during live sessions at the event.

So I'll take a bit of a break from the Bart project, and perhaps stay out of circulation in the newsgroups for a while, until I've finished the post-Summit follow-ups. I'll probably blog here from time to time as I go along :-)

22 September 2005

If Government got S.M.A.R.T.

Your government cares deeply about your governance experience, and takes a strong line against corrupt politicians who impact on that experience. We looked to the thriving and successful IT industry for best practices to manage this problem, and what better to model this on than how the hard drives that hold your data are managed?

So from now on, every time a politician commits an offence, we will note this in our internal record-keeping system. After the 100th such offence, our management strategy will swoop into action.

The offending politician will then be barred from holding office within your state, and will exit our Witness Protection Program to serve in another state with a new identity and a clean record. After 100 transgressions in that state, the process will repeat, until retirement age is reached.

We see this as proof of our commitment to good governance.

It's the least we can do.

The cover-up

Consider how many layers exist that will cover up for hard drive surface defects:
  • Internal hard drive firmware "fixes" on the fly
  • NTFS driver "fixes" on the fly
  • AutoChk "fixes" automatically
  • Win98+: auto-Scandisk "fixes" automatically, no prompt
  • Win9x Scandisk accepts seconds-long per-sector retry loops as "OK"
But then, we have S.M.A.R.T. to tell us when the hard drive is about to fail... don't we?

Well, maybe and maybe not. Windows XP doesn't show any signs of S.M.A.R.T. awareness (certainly nothing as crass as some UI element you can click to query status, or Help in interpreting the results) although it's noted to fall back from aggressive Ultra DMA modes if "too many errors" are noted. BIOS can query S.M.A.R.T. as the PC boots up, but the CMOS duhfault is to disable this. S.M.A.R.T. was around for years before XP; I'm not sure what's taking so long there - the cynic in me says a desire to reduce support calls.

Hence the cottage industry in add-on S.M.A.R.T. utilities, either to monitor it in real time, or to query it on demand. The latter typically show full raw detail with no explanation, or a one-line result that is either "OK" or "call your hard drive vendor". Hard drive vendors often offer free diagnostics; guess which type of reporting you get?

How smart is S.M.A.R.T.?

Here's a case in point that prompted me to write this. I have a system in for troubleshooting, as it's been generally unreliable, no pattern involved. Motherboard capacitors are bad, so it goes off for repair; comes back OK and the testing begins with an overnight of RAM checking in MemTest86. That passes, so I Bart up and run HD Tune to look at the hard drive. S.M.A.R.T. says all is well, and the detail looks good; the drive temperature is fine, and doesn't increase alarmingly by the end of an uneventful surface check.

So I proceed on to my nascent "antivirus wizard", which is currently 5 different scans stapled together with log scooping (the "Bart Project" is another story for a looong day's blogging). I leave the system to carry on, and an hour or so later, it's clank-clank-clank. I power off (Bart's nice that way, you can do that if nothing's being written to disk) and proceed to data recovery.

On the nth attempt, the recovery PC starts up without POST dying in a sea of clanking retries, and I BING off C: OK (the data on this PC's already been saved off D: before taking it offsite, so salvaging the installation's the first remaining priority). But it dies a-clanking on the next soft restart on the way to what would have been file-level salvage of the huge E: and small F:

Well, S.M.A.R.T. certainly didn't see that one coming, and I can't see how I can change my SOP so as not to get caught out in that way again. Image every PC before powering it up?

Business as usual

What's more alarming is the degree of grossly abnormal mileage that is accepted as "normal", even by tools such as Scandisk that purport to assess such things properly. The best tool to pick this up is a DOS Mode Scandisk surface scan, because it runs without any background processes that could cause innocent delays (processor overheating is the only false-positive delay factor) and it maintains a fine-grained cluster count as progress indicator.

When that counter pauses every now and then, or even stops for a second or so at a time, you should consider that hard drive as at-risk and evacuate it before doing anything else (and yes, that includes waiting for Scandisk to finish or stop on an explicit error). This mileage correlates to "every now and then my PC stops responding completely for seconds, with mouse pointer stuck, keystrokes ignored, and HD LED on" in Windows.

The significance is that Scandisk will carry on through these latencies, and even seconds of noisy retries, without reporting any errors at all. When an event that should take a fraction of a second is accepted as normal when it takes seconds to complete, you have to wonder how "awake" S.M.A.R.T. and other such "data sentries" are.
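The latency test described above can be sketched in code. This is an illustrative sketch only: the per-cluster timings here are supplied by the caller (a real tool would time raw sector reads itself), and the one-second threshold is my own assumption, not anything Scandisk uses.

```python
# Hedged sketch: a cluster read should take a small fraction of a second,
# so any read that stalls for seconds suggests retry loops inside the
# drive, even if the read eventually "succeeds" with no reported error.

AT_RISK_SECONDS = 1.0  # assumed threshold, not from any real tool

def scan_latencies(read_times):
    """Return indices of clusters whose read time suggests retry loops."""
    return [i for i, t in enumerate(read_times) if t >= AT_RISK_SECONDS]

def drive_at_risk(read_times):
    """A single multi-second stall is enough to call the drive at-risk."""
    return bool(scan_latencies(read_times))

# A drive that "passes" a surface scan can still show this mileage:
times = [0.01, 0.01, 2.3, 0.01, 0.01, 4.7]  # seconds per cluster read
print(scan_latencies(times))  # → [2, 5]: two clusters stalled
```

The point of the sketch is that the verdict comes from *how long* reads took, not from whether they ultimately returned an error - which is exactly the signal Scandisk throws away.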

Note these Scandisk limitations:
  • Only for FATxx volumes
  • I'd consider it unsafe beyond 137G
  • Won't check surface until file system logic is "fixed"

Still, at least it prompts interactively on each error, before "fixing" it, unlike ChkDsk.

Understanding S.M.A.R.T. detail

My hunch is that SMART is something the hard drive industry reluctantly provided as a window into the closed world of internal defect management, as practiced by firmware within the hard drive itself. This may have been in response to OEM or other industry complaints.

Certainly, there's no effort to make S.M.A.R.T. information available to the end user in understandable form. I was pretty much in the dark myself, until I read the Help in one particular free S.M.A.R.T. reporting utility, which I'll link to shortly. It seems that the raw counts are subtracted from an initial "100%" or "255" value until the acceptable threshold is reached, at which point those "easy" tools will finally stop reporting "OK" and suggest you call your hard drive vendor. That threshold could be the 100th bad sector that had to be "fixed".
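That value-versus-threshold logic can be sketched as follows. The attribute names and numbers below are hypothetical examples; real tools read the attribute table from the drive itself.

```python
# Hedged sketch: how an "easy" S.M.A.R.T. reporter decides between "OK"
# and "call your vendor". Each attribute carries a normalized value that
# starts near 100 (or 255) and is decremented as raw error counts mount;
# the drive only counts as failing once value <= threshold - which is why
# a drive can quietly "fix" many sectors while still reporting "OK".

def smart_verdict(attributes):
    """attributes: list of (name, normalized_value, threshold) tuples."""
    failing = [name for (name, value, threshold) in attributes
               if value <= threshold]
    return ("OK" if not failing
            else "call your hard drive vendor: " + ", ".join(failing))

# Hypothetical attribute tables for two drives:
healthy = [("Reallocated_Sector_Ct", 97, 36), ("Spin_Retry_Count", 100, 97)]
dying   = [("Reallocated_Sector_Ct", 36, 36), ("Spin_Retry_Count", 100, 97)]

print(smart_verdict(healthy))  # → OK
print(smart_verdict(dying))    # names the attribute that hit threshold
```

Note how much information the one-line verdict discards: the "healthy" drive above has already slipped from 100 to 97, and you'd never know.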

A simple S.M.A.R.T. reporter with Help that actually helps with the detail is here:


An excellent utility that shows S.M.A.R.T. detail, temperature, surface test and benchmarks is here:


Both of these get full marks in the Bart test, i.e. they operate as plugins from Bart PE CDR without the need to be run from writable disk, have registry stuff carried over, etc. Not only will these show you full detail, unlike many hard drive vendors' free downloads, but they will run regardless of what brand hard drive you happen to be testing.

Where do bad drives go?

Warranty replacement drives were once new, i.e. drawn from new stock; then they were "re-manufactured", or "refurbished", and the current language is "re-certified". I suspect that means blanking out the S.M.A.R.T. counters to fresh values, perhaps doing some testing and re-checking those values, then shipping as "OK". Certainly, I don't see hard drive repair gnomes in a clean room reassembling new platters and heads into old drives as a cost-effective way of "re-manufacturing" mass-produced hard drives.

If what I suspect is the case, then your warranty replacement hard drive could very well be the same drive I returned as defective. Perhaps I'll get your original drive as "re-certified"?

30 August 2005

When it all comes together...

Every once in a while, one has a case that illustrates the value of changes in default practice that one's made over the years. Here's one...

A system came in because Eudora had "lost all the mail".

Indeed; the entire "My Documents" object had been punched out; not in Recycle Bin either. Score is Murphy 1, Chris 0 so far.

Fortunately, this data set was on FATxx not NTFS, so the trail did not end there - I could go in with UnErase and DiskEdit to attempt recovery. So now the score is Murphy 1, Chris 1.

Normally, deleted data would be safer from overwrite than you'd expect, because I relocate data off C: (thus avoiding incessant temp, TIF, swap writes). Murphy 1, Chris 2. Plus I disable SR on D:, given that there's no core code there anyway, so that should avoid that source of spontaneous writes to (what could be at any time) at-risk disk. Murphy 1, Chris 3.

But this system had re-duhfaulted to turning on SR (with maximum disk use, of course) for all volumes, probably as a side-effect of disabling and re-enabling SR as a means of clearing it. So when I went in with my tools, I found the data set not only deleted, but also overwritten. Murphy 2, Chris 3.

Fortunately, the user had left the PC running one night a week, which meant my overnight auto-backup Task ran once a week. So I could go to F:\BACKUP and choose the latest of the last 5 such backups, and thus recover all data, even though the user has never explicitly initiated a backup in years. If the PC had been running every night, perhaps they'd have lost 1 day's work instead of 7, but even so, it's quite a win; Murphy 2, Chris 4.
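The rotation logic behind that auto-backup Task can be sketched simply. This is an illustrative sketch under assumptions: date-stamped backup set names that sort chronologically, and a keep-the-newest-5 policy like the one described above.

```python
# Hedged sketch of a keep-last-N backup rotation. Backup set names are
# hypothetical date stamps; sorting them chronologically works because
# ISO-style dates sort lexically.

KEEP = 5  # matches the "last 5 such backups" policy above

def rotate(backups, new_backup):
    """Add a backup set, then drop everything older than the newest KEEP."""
    backups = sorted(backups + [new_backup])
    return backups[-KEEP:]

sets = []
for night in ["2005-07-01", "2005-07-08", "2005-07-15",
              "2005-07-22", "2005-07-29", "2005-08-05"]:
    sets = rotate(sets, night)

print(len(sets))   # → 5: only the newest 5 weekly sets survive
print(max(sets))   # the set to restore from
```

The win in the story comes from the policy, not the code: backups happen without the user ever asking, and restoring is just "take the newest set".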

Plus they are using Eudora for email, which separates mail into malware-safe messages in mailboxes, and malware-risky attachments that can be stored somewhere else. Eudora doesn't run scripts in messages, and can be prevented from using IE's code to interpret them, so the messages really are malware-safe. So any data backup on a system I set up will automatically include the email stores; Murphy 2, Chris 5.

However, to restore this data, I'd have to overwrite whatever deleted data hadn't been destroyed already - Murphy 3, Chris 5. The client wants the PC back RSN, so what do I do; take an extra day searching raw disk for loose data, or restore their backup and close that door forever?

Fortunately, I can have my cake and eat it, because the volume I store data on is a tiny FAT16, 2G in size. So I can simply peel off the entire volume as 4 CDR-sized slabs of raw sectors, paste that onto another HD somewhere, and carry on doing deep recovery while the PC's back in the field and working on the data I restored. Murphy 3, Chris 6.

Security is not the only thing that is "a process"; the same could be said for working around dumb-ass vendor duhzign and duhfaults - and Murphy wins whenever the vendor's code discards rather than respects your choice of settings!

6 August 2005

Safe Fall-Through: Not...

I've just worked on a system with a BIOS that tried to get smart, and IMO got it horribly wrong.

This was a working PC that came in for a RAM upgrade, so I set boot order to not boot the HD at all (I wanted to avoid any chance that bad RAM might get to eat the installation) and booted a 1.44M MemTest86, which promptly rebooted. So I tried my CDR, which uses a different version, and that showed RAM errors. And so on, etc.

At some point, a boot POST phase had BIOS prompt me to enter Setup, as it had noticed several failed boot attempts, concluded all was not well, and thus had (irreversibly) flushed all CMOS settings back to defaults.

What's wrong with that picture? Firstly, the notion that factory-set defaults are safe, when in fact they could be incompatible with bootability. Secondly, BIOS duhfaults are often self-serving or controversial, such as disabling S.M.A.R.T. monitoring of hard drives. Thirdly, this re-defaulted hard drive bootability, which is often a bad idea in such circumstances (and the reason why I had specifically prevented this). Fourthly, the change is irreversible and permanent, whereas the cause could have been unrelated and transient (such as my corrupted 1.44M diskette).

There's more, such as malware that could boot-fatigue its way past weaker BIOS-level defences such as blocked flash BIOS updates. I'm sure you can think of others, too.

Bad BIOS, no biscuit :-)

4 August 2005

Know Your Nemeses

One of the biggest benefits of theory is spotting inevitable pain points, before wasting resources on longer scenic routes that just bring you back to the same crunch later.

Another is to identify when it's insufficient to bet the farm on one of several parallel strategies, because these strategies do not fully encompass each other after all.

Let's apply these concepts to that old hobby-horse of mine - which just so happens to be one of consumerland IT's most common crises - management of active malware.

We know that malware can embed itself in the core code set, hold control so that other tasks can't start, detect system changes, and take punitive action. That's enough to predict that formal "look-don't-touch" detection scanning will be safe, but that informal detection scanning and formal clean-up may not be, and informal clean-up is even less likely to be. By "formal", I mean "without running any code from the infected system".

From that I conclude the only lasting SOP to detect malware safely is to do so formally, without leaving any detectable footprints in the system being scanned. I also conclude that one can't predict an always-safe SOP to clean active malware, so it's best to unlink the detection and cleanup phases of the operation so that off-system research on what has been detected can guide the cleanup process around any caveats that may apply.

Maintain or wipe?

This is one of several common bunfights that assume one of the two alternatives fully encompasses the other. With good enough maintenance, you'd never need to suffer the collateral damage of "just wipe, rebuild and restore". With good enough "backups", you'd never need to bother with malware identification or cleaning, and suffer the risk of thinking everything's been cleaned when it has not.

One can point out that circumstances may force one approach or the other, and thus no matter how well you develop one strategy, you cannot afford to abandon the other. Or that adopting a "wipe, rebuild and restore" strategy does not obviate the need to identify malware, in case it is in the "data" you restored or in case it's using an entry point that will be as open on the rebuilt system as it was on the originally-infected system.

Two further points arise from the above, when it comes to the thorny issue of backup theory. Firstly, we see that the pain point of distinguishing data from code is a nemesis that can't be avoided. Secondly, we see there's a classic backup conundrum of how to scope out unwanted changes you are restoring to avoid, when it comes to the "rebuild" part of "just wipe, rebuild and restore 'data'".

When code was expected to be a durable item, it was meaningful to speak of rebuilding the code base from a cast-in-stone boilerplate that dated from the software's initial release, and that is definitely free of malware. Once you entertain the notion of "code of the day" patching, you cannot be sure your code base is new enough to be non-exploitable, and yet old enough not to contain malware that's been designed to stay hidden until payload time.

"Ship now, patch later" is another nemesis that won't go away - theory predicts that no matter how you design the system, you will always need bullet-proof code, just as no matter how you manage the system, you will always need to be able to safely identify malware. For example, how do you know your updates are not malware spoofing themselves as such? Whatever code applies that verification, has to be bullet-proof.

PS: Yes, I know how to spell "nemesis", singular.
You didn't seriously think you'd have only one, did you?

28 July 2005

The Joy of NirSoft

The unfortunate thing about NirSoft is that whoever's sitting on the ".com" version of http://www.nirsoft.net is a tad creepy - something that I'm sure impacts adversely on NirSoft's rep. The "real" NirSoft is home to the most beautifully detailed integration management tools, which work well within the Bart environment (more on that later, heh).

Many of these tools not only list the integrations they watch, but allow reversible management of them as well. This is very useful when troubleshooting issues like "why does my PC dial out when I list files in My Computer?" or "why is it so slow to list files?".

Now there are tools that are great to have around, and then there are tools that you use so often they become as familiar as your favorite ballpoint. Then there are tools you've had for ages, but only recently started using, as you hadn't really thought of them before.

Some tools that I'm using a lot at the moment are:

NirSoft Registry Scanner, which searches the registry for something and then lists all the occurrences, each of which jumps into RegEdit at that particular item. One gets so used to Start, Run, RegEdit and then search, that it's hard to switch that habit to running this instead, but it's worth it.

Sure, searching the registry takes the same time either way, but it's easier to double-track jobs if all that overhead is imposed in one go, rather than between each "next", and it's easier to keep track of what you're doing if you can quickly edit each occurrence one after the other. Also, seeing an overview of what's found can change your strategy; it's obvious when you've cross-searched something you didn't want to find, or when you're about to bite into the tiger's tail (say, "330 items found" - do you really want to eyeball them all?).

Paraglider's RegShot takes snapshots of the registry, compares them, and provides a .TXT log that can be trimmed into a .REG that will apply the differences. It's from http://www.paraglidernc.com/RegShot.html and is a modified continuation of the work of another (as credited on the page).

Now I've used this sort of tool long ago in the Win95 era, and didn't find it useful; I either got too much "noise" (fluff settings such as which desktop icon was moved 2 pixels to the left, etc.) or an incomplete result that left out half the changes I'd need to apply. But whether it's a better tool, or improved registry operations in XP, it's now an approach that works about 90% of the time, when you have an interactive setting that you wished you could automate via a .REG:
  1. Undo the changes you want, Apply
  2. Take the first registry snapshot
  3. Make the changes you want, Apply
  4. Take the second registry snapshot
  5. Compare, Save, edit to commented .REG, Save As
If the OS had this feature, we'd have less need of tools like this; but, etc.
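The snapshot-compare-export idea above can be sketched in code. This is a conceptual sketch only: it uses plain dicts in place of real registry hives (reading live hives would need `winreg` on a Windows box), it only handles added/changed string values, and a real .REG export would also need to encode value types and deletions.

```python
# Hedged sketch of RegShot-style snapshot diffing. Snapshots map registry
# key paths to {value_name: data}; the diff is emitted as .REG-style text
# that, applied with RegEdit, would reproduce the changed values.

def diff_to_reg(before, after):
    """Emit .REG-style text for values added or changed between snapshots."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for path in sorted(after):
        changed = {name: data for name, data in after[path].items()
                   if before.get(path, {}).get(name) != data}
        if changed:
            lines.append("[%s]" % path)
            for name, data in sorted(changed.items()):
                lines.append('"%s"="%s"' % (name, data))
            lines.append("")
    return "\n".join(lines)

# Hypothetical example: one setting toggled between snapshots 1 and 2.
before = {r"HKEY_CURRENT_USER\Software\Demo": {"Setting": "0"}}
after  = {r"HKEY_CURRENT_USER\Software\Demo": {"Setting": "1"}}
print(diff_to_reg(before, after))
```

The "noise" problem described above shows up here too: anything else that changed between the two snapshots lands in the diff, which is why trimming the log before saving it as a .REG is part of the workflow.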

Dependency Walker tracks what a particular code file depends on, and is from http://www.dependencywalker.com/

You know the drill; you scrape over some program, run it, and it says "needs blahblah.dll", so you Find the latest copy of that on the source system and drop it into the program's directory. Then you run it and it says it "needs blah27.dll", so you etc. This can go on for n iterations, by which point you start to wonder whether the effort is worth it.

Dependency Walker would have been a better approach there, because you'd have a better idea of what you're up against. Sure, you may have to repeat the process when added code libraries reveal their own dependency fan-outs, but at least you are seeing one level at a time, and not one item at a time. After a while, you'll get a feel for the relationships between code libraries and the issues associated with them, and that's good knowledge to have.
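The difference between the two approaches is just a tree walk. The sketch below uses a hypothetical dependency map (the DLL names echo the example above); a real tool like Dependency Walker reads each module's import table instead.

```python
# Hedged sketch: walking the whole dependency fan-out up front, instead
# of discovering one missing DLL per failed run. The import map here is
# hand-written; a real tool would parse each module's PE import table.

from collections import deque

def all_dependencies(module, imports):
    """Breadth-first walk of the import fan-out from one module."""
    seen, queue = set(), deque([module])
    while queue:
        current = queue.popleft()
        for dep in imports.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

imports = {
    "app.exe":      ["blahblah.dll", "blah27.dll"],
    "blahblah.dll": ["common.dll"],
    "blah27.dll":   ["common.dll"],
}
print(all_dependencies("app.exe", imports))
# → ['blah27.dll', 'blahblah.dll', 'common.dll']
```

One walk tells you the whole shopping list, including the shared dependency you'd otherwise have tripped over twice.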

Paraglider's RunScanner is a fundamental brick in the process of building a maintenance OS; it allows programs to run as if a different set of registry hives were in effect. The full ramifications of that are Beyond The Scope Of This Post (or "left as an exercise for the reader") but what it does is facilitate the formal use of tools that assume they are being run "live" from the installation you are trying to formally approach.

By "formal", I mean by not running any of the code under scrutiny; one of the most basic tenants of tackling malware, right up there with "don't write to the drive" for data recovery. Sure, most of today's approaches to malware are informal; they also don't always work (as in "Blah-Blah AntiVirus for Windows says it found XYZ, but can't clean it!"). The relationship between these two points is not coincidental.

NirSoft's Windows Update and JRE Listers fulfill a common need; to quickly assess the patch status of a system. This is particularly important with Sun's Java Runtime Engines, given that Sun doesn't remove old engines when installing new ones - and yes, the old dumb-ass exploitable engine is still left available to be (ab)used by malware.

Speaking of "code of the day", it seems as if Firefox is discovering the pain of patching isn't a Microsoft thing. There's been a new revision of Firefox most months, which looks a lot like Microsoft's monthly patch release cycle, until a longish spell between 1.0.4 and 1.0.5 - then 1.0.5 was almost immediately by "antidote" 1.0.6, all of which suggests that a certain degree of complexity plus edge expose will inevitably lead to code churnover and quality issues.

7 July 2005

Writing a Decent Application, Part 2

I'm back, on a different PC as the old one's mobo died. There's always hidden impact when one swaps PCs or "just" re-installs, such as lost passwords, bookmarks etc. that were scattered all over MS's messy user profile subtree. So it goes... also, this article is one I found tedious to write, having written the same sort of thing so often before. Once it's done and out of the way, I can get on to more interesting stuff that's come up since!

For software to be "safe", it must behave consistently with the level of risk that the user was expecting to have undertaken (or avoided). If software is not safe, then it no longer represents the will of the user, and therefore it's not secure - because even if you know who the user is, you are not getting the user behavior your organization expected.

Don't take risks on behalf of the user

Software that acts ahead of user intent, has to bear full responsibility for whatever follows as a result of that action. Examples abound; autorunning scripts in arbitrary "documents" or unsolicited email "messages", autorunning CDs as they are inserted, autorunning HTML scripts when "opening" a directory on the hard drive, exposing RPC services to the Internet, creating and exposing hidden "admin" network shares, "touching" arbitrary files on the hard drive as part of background services or persistent handlers, etc.

Display risk information in terms the user understands

Users understand data vs. program, view/read vs. run, Internet vs. my own computer files. Use this level of concept, with a "More information..." button leading to background and technical details.

Pitching this information in "your" language, such as corporate IT-speak of user accounts and so on, or raw tech detail such as file name extensions, helps some folks while alienating others.

Dumbing-down the language so that risk info is lost - hidden file name extensions, the generic "open" concept, blurring data vs. program behavior, displaying the local PC's content as if it were a seamless part of the internet - helps absolutely no-one. Stop doing that, please!

Be bound by the risk information you displayed

If a file is displayed to the user as "ReadMe.rtf" and it's internally a Word .doc with autorunning macros, do not assume an "honest mistake" and take the higher risk of running those macros.

If a file is displayed to the user as a safe-ish file type, but your generic "open" code sees an MZ marker hidden inside that indicates raw code, do not assume an "honest mistake" and run as raw code. The same goes for raw code within .pif and .bat files; if these are not truly .pif or .bat, then generate an appropriate error and abort. Yes, this will break those poorly-written apps, and force them to be fixed.
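As an illustrative sketch of that check, in Python for brevity - the "safe" extension list here is a stand-in, not a real policy:

```python
def claims_safe_type(filename):
    """Naive allow-list of extensions the UI would present as safe-ish."""
    return filename.lower().rsplit(".", 1)[-1] in {"txt", "rtf", "jpg", "gif"}

def content_is_raw_code(first_bytes):
    """PE/DOS executables start with the 'MZ' marker."""
    return first_bytes[:2] == b"MZ"

def should_refuse(filename, first_bytes):
    """Refuse to 'open' when displayed type and actual content disagree."""
    return claims_safe_type(filename) and content_is_raw_code(first_bytes)
```

The point is the refusal: when the name says "document" and the content says "program", abort with an error rather than "helpfully" running it.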

That is entirely appropriate Darwinian filtering - bad apps must die! We use settings like "Option Explicit" to trap bad code before it's released, while it is cheaper to fix; apply safety sanity checks to trap bad code after release, to limit its market success and spread.

Do not allow content to mis-represent itself

Once again, examples abound. For example, we are supposed to forget about file name extensions and trust icons instead - yet the most dangerous file types of all (.exe, .pif) can set whatever icon they like, and thus spoof any "safe" file type.

For another example, consider pop-ups spawned by web sites that look like internal system dialog boxes. Consider how the "cancel" or [X] UI elements can be coded to execute the material, contrary to user expectations.

Allowing arbitrary web sites to run code on visitors' PCs (and thus "own" them, in terms of Microsoft's "Rule #1" security mantra) is terminally stupid. It will be painful to stop doing that, because the Internet's web developers have grown to depend on the ability to poach user resources and interact deceptively or coercively with them. Pick that fight, and win it.

XP's SP2 was a step in the right direction in beating a retreat from "Ieeeee! Fore!!" stupidity ("all the world's a web page, and we are but icons in the clickstream" or "the network is the computer - so do you feel like troubleshooting a million-processor hydra you can't even access?"). If we can't get the wolves back into Pandora's Box, then (as an industry) at least have the decency to admit you screwed up big time, akin to the wasted years of trying to fly by flapping arms, or cure infectious diseases via leeches or holes drilled through the skull - and listen to what we are trying to tell you. Sometimes the answer is "nay"; don't shoot the messenger for saying so.

28 May 2005

Today's Link...

...is one from before...


...repeated as the last couple of posts have been brilliant, IMO. I'd offer to bear her children, were it not for a certain biological escape clause :-)

23 May 2005

Writing a Decent Application, Part 1

Heh, this is where a non-coder (well, OK; ex-coder) sounds off on programming. File under "teaching grandma to suck eggs" if you like, unless you find yourself asking "I make what I think is pretty good software, yet my clients shout at me and go off and use something else, and I can't figure out why".

Features may make folks warm to a product, but there are two things that make them hate a product and never want to use it again:

1) It corrupts or loses data

2) It acts beyond the user's intention

Sure, "difficult to use" issues may cause folks to bounce off a product, but nothing else engenders pure hatred as the above two crises will do. Today's post goes about (1).

User Data

User data is the only component that cannot be replaced by throwing money at the problem. It's unique to the user, and should be treated with the utmost respect. User preferences and settings should be included within this umbrella of care.

If your application creates and manages user data, then you have to ensure that the full infrastructure exists to manage that data, including backup, transfer between systems, integration into other data sets, and recovery or repair.

The easiest way to do that is to use a generic data structure that already enjoys such infrastructure, to ensure the location of the data can be controlled by the user, and that the data set is free of infectable code, version-specific program material, or other bloat. This way, the user can locate the data within their existing data set and it will be backed up with that.
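As a minimal sketch of that approach - here using JSON as the generic structure (any widely-supported plain-data format would do), with the location chosen by the user so it falls inside their normal backup set:

```python
import json
import os

def save_settings(settings, path):
    """Write settings as plain data at a user-chosen path: no code,
    no version-specific blobs, covered by ordinary file backup."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=2, sort_keys=True)

def load_settings(path, defaults):
    """Fall back to defaults rather than failing if the file is absent."""
    if not os.path.exists(path):
        return dict(defaults)
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```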

Data Survivability

If you have to create a proprietary binary file structure, then it's best (from a recovery perspective) to document this structure so that raw binary repair or salvage is possible. When the app handles the data, it should sanity-check the structure and fail gracefully if need be, with useful error messages. It's particularly important not to allow malformed data to overrun buffers or get other opportunities to act as code.
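A minimal sketch of such a sanity check, with a hypothetical 4-byte signature and header layout - malformed input gets a useful error instead of a chance to act as code:

```python
import struct

MAGIC = b"MYAP"  # hypothetical 4-byte file signature
HEADER = struct.Struct("<4sII")  # magic, format version, payload length

class CorruptFile(ValueError):
    """Raised with a useful message instead of crashing (or overrunning)."""

def parse_header(blob):
    if len(blob) < HEADER.size:
        raise CorruptFile("file truncated before end of header")
    magic, version, length = HEADER.unpack_from(blob)
    if magic != MAGIC:
        raise CorruptFile("bad signature; not one of our files")
    if length > len(blob) - HEADER.size:
        raise CorruptFile("declared payload length exceeds file size")
    return version, blob[HEADER.size:HEADER.size + length]
```

Documenting MAGIC and HEADER is what makes raw binary salvage possible later.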

Large, slowly-growing files pose the largest risk for fragmentation, long critical periods during updates, and corruption. Don't stick everything in one huge file, such as a .PST, so that if this file blinks, all data is lost. It's also helpful to avoid file and folder names with the first 6 characters in common (as these generate ambiguous 8.3 names) and deeply-nested folders. Ask yourself what you'd like to see if you had to do raw disk data recovery, and be guided by that.
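To see why the first six characters matter, here's a crude model of 8.3 alias generation (the real algorithm has more rules - stripping illegal characters, hash-based names past ~4 - but the collision point stands):

```python
def short_name(long_name, seq=1):
    """Crude model of how Windows derives an 8.3 alias from a long name:
    first six characters, a tilde, and a disambiguating sequence number."""
    base, _, ext = long_name.upper().partition(".")
    base = base.replace(" ", "")
    if len(base) > 8:
        base = base[:6] + "~" + str(seq)
    return base + ("." + ext[:3] if ext else "")
```

Two folders sharing their first six characters differ only in the ~n number, which is exactly the ambiguity you don't want during raw recovery.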

Data Portability

When it comes to data portability and integration, this works best if you avoid version dependencies and "special" names. For example, an email app that has "special" structure for InBox and OutBox is going to be a problem if one wants to drop these from another system, so they can be integrated into an existing data set. It should be possible to rename these so they don't overwrite what is there already, and have the application see them as ordinary extra mailboxes.

From a survivability perspective, it should be possible to manage the data from the OS, i.e. simply drop the files into place and the application will see and use them. If you fuss with closed-box indexing or data integration, then you're forced to provide your own special import and export tools, and things become very brittle if there is only one "special" code set that can manage the data.

Don't forget whose data it is - it's the user's, not yours. Warn the user about consequences, but it is not your data to "own" in the sense that nothing else may touch it, or that any outside nudge to the data leaves the app unable to use it.

Data Security

What makes for good survivability, may be bad for data security. If there's a need to keep data private, you may have to impact on data survivability to the point that you have to assume full responsibility for data management - something not to be taken lightly.

Data Safety

This is a different issue from security, and goes about limiting the behavior of data to the level of risk that is anticipated by the user. But that's another day's bloggery :-)

9 May 2005

I Live... Again

The inevitable first posting drought has come and is now gone, and so the immortal opening line from Blood (yes, I do think of computer games as a "literature" - but that's another topic) applies!

I needed to think about other things for a while, and so I did, though not the practical things I should have been attending to (e.g. getting paid). I had a night down south near Cape Point, where a wild genet popped into our bridge game to eat chicken with us, and that was the first 24 hours without mains electricity I've spent in several years.

And I read Paul Hoffman's "The Man Who Loved Only Numbers", and that made me think about a lot of things.


For example, there's Rudy Rucker's assertion that "information" constitutes a fourth "mind tool", up there with number/counting/measurement, space/geometry, and infinity/wholeness. The core thing I come away from Rucker with is this: If you were to accurately identify and reproduce every piece of information about some thing, would that thing be a true clone, and would it have the same consciousness? If it had consciousness, would it be the same one, or a new one?

The answers that came to mind were that for the new version to exist separately from the original, some information would have to change - i.e. it would have to be displaced from the original in space, time, or some other dimension, else it would be the original instance. And once spawned, different "life experience" will cause that instance to diverge from the common starting point, at least initially.

Is the path forward in time always one of divergence? I don't think so, but the initial ratio of sameness vs. difference leaves divergence as the only initial direction it can move in - unless there's some magic "pull" to nullify the single difference that defines this instance from the original. Intuitively, one feels that such closeness has to be weighted in one way or another, i.e. that there's attraction to "become one" or repulsion to become equidistant from the original as much as it is from everything else.

In practice, trying to identify all the information in an object is a bottomless pit. Just as an infinity is qualitatively different from very large, and behavior at the speed of light is different to what we attain as "very fast", so the behavior of near-identical instances may differ from crudely similar objects. When we work with similar things, they are as artificial as frictionless masses in physics experiments; magical objects that have no properties or content other than the information about them that we define. What is the number "10" in "real life", anyway?


This comes to a recurring concept that underlies my deep interest in computers. We understand most things in two ways; from the "top down", i.e. observing crudely visible behavior and delving down into detail, or "bottom up" by understanding the detail and building up to complexity. This is similar to the difference between bridge and chess, and may explain why few folks are equally good at both. You can also think of this as a core axis within politics; seeing the world in terms of available resources (the cards in a bridge deal) or what is needed (the end-point goal of chess). These seem closest to "right" and "left" perspectives.

It's rare that we approach things from the bottom up, which is the perspective of the creator rather than the created. Or rather, when we create things, they rarely get complex enough to be interesting.

For example, if we build an axe out of wood and iron, most of the characteristics are those of the natural constituents, with the "value added" by our creation of the axe creating little new to study and ponder about.

There's a fundamental shift that happened when humans began to compress time. It's one thing to visualize a sequence of visible events, and reproduce these in the same time scale - e.g. from throwing a stone to building a catapult that throws a stone for you. It's another to build something that does things faster than you can observe, e.g. the way an internal combustion engine whirrs along at 5000 RPM. In essence, a computer is a device that compresses time, so that it can act as a brain proxy fast enough to "think" inside small time scales, e.g. processing sound waves as the waves curve their way along.

This stretches what we would normally consider "natural" science. The behavior of electrons passing through a transistor gate; is this natural science, given that transistors don't occur in nature? Yes, in that it is underpinned by the same deeper laws that creating a transistor merely makes visible. Perhaps science is a matter of creating new things to improve visibility?

Computers are complex enough to be interesting. We can no longer deterministically predict what will happen based purely on the initial conditions we started with. That observation is less profound than it sounds, if you consider Turing's Halting Problem; it's just a restatement that you can't solve a heavyweight computational problem with a lightweight computer.

Or is it? Does a computer generate only the rational results of its computations, or does it escape the system as Goedel might predict? If the latter, then does the computational power plus entropic effects constitute a "stronger" computer than the original design, and therefore make it impossible for a computer to predict its own end state?

One answer may be found by asking the question: Can the state of a computer be captured, projected onto another computer, and result in identical future behavior in the two systems? If not, why not? This is similar to my post-Rucker musings mentioned above.

Well, you can express the contents of RAM or hard drive as a large integer, by considering its entire address space as a single binary number. If you capture all such numbers, i.e. RAM, hard drive, CMOS and PnP NVRAM, SVGA memory, processor and other DSP registers, RTC etc. you can claim to have captured the entire digital layer of abstraction. If you trust the underlying analog layer of abstraction to properly support the digital layer without error, then you should be off to a flying start.
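That "single binary number" idea is literal; in Python terms (a toy model, obviously nothing like snapshotting real hardware):

```python
def snapshot_as_integer(memory_bytes):
    """Treat an entire address space as one big binary number."""
    return int.from_bytes(memory_bytes, byteorder="big")

def restore_from_integer(value, size):
    """The inverse: project the number back onto a buffer of known size."""
    return value.to_bytes(size, byteorder="big")
```

Capture every such number (RAM, disk, registers, NVRAM...) and you've captured the digital layer of abstraction; whether the analog layer will faithfully replay it is the open question below.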

The trouble is, even if you test the hardware for billions of operations to determine how well the analog layer supports the digital, your confidence in future reliability can only tend towards unity. In fact, test too long, and every system will burn out - by definition, that is "testing to destruction"! There's something almost Heisenberg about the truism that you cannot test the reliability of an object without destroying it, and perhaps it's a restatement of identity; that identity precludes two things from being identical, and still being two (different) things.

Number theory

You can think of the digital layer of abstraction as akin to integers, while the underlying analog world of voltages, wires and milliseconds is akin to real numbers. I'm about to swerve off the road at this point, so get ready to change gears...

Perhaps we don't really understand rational numbers, let alone real ones; we think of them as "integers plus fractions". They aren't; they are simply what they are. Integers are no more a special case than, say, fractions which have 455 as the divisor, except for one thing that defines rational numbers; every series based on any divisor will include all the integers.
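The 455 example can be made concrete with Python's Fraction type: step through k/455 and every integer duly shows up in the series, as the terms whose reduced denominator is 1:

```python
from fractions import Fraction

def series_with_divisor(divisor, up_to):
    """All values k/divisor for k = 0 .. up_to*divisor."""
    return [Fraction(k, divisor) for k in range(up_to * divisor + 1)]

terms = series_with_divisor(455, 3)
integers_in_series = [t for t in terms if t.denominator == 1]
```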

So perhaps the rational numbers are merely one of several special-case subsets of real numbers, depending on what you want to choose. What makes some real numbers irrational is that they fail the test of including all integers. So maybe we need to "get real".

We intuitively consider real numbers with integers overlaid, and I think we may get blinkered there. In fact, concepts such as "order" and "entropy" may be nothing more than artefacts that result from this reality distortion. Conceptualizations as ugly as 10-dimensional string theory make me want to step back and look at our mind tools, starting here.

As a mind experiment, consider that integers are just arbitrary values within the real number set. If there is an infinite number of real values between one integer and the next, there may not even be any special ordinal significance to them; perhaps we can choose some other series as a way of visualizing the scaling up of successive real number values.

We've already found this helpful, in that certain things snap into focus (and elegance) only when a different scaling is used, e.g. logarithmic. We've more or less been forced to abandon the comfort of integers when considering sound (decibels), earthquakes (the Richter scale) etc.

Now look back at integers from a chaos theory perspective. As I understand it, chaos theory shakes up the notion of things always proceeding from complexity to... well, that's the point; often chaos isn't the featureless grey sludge you'd expect; new complexity (order?) may arise.

At first blush, integers are all alike, differing only in the distance between each other on the number line. But in fact, there's a wealth of complexity that develops fairly rapidly, when you consider primes, and all the other patterns and series that weave through the integers.

Then we find that certain universal constants, such as e or pi, are not only not integers, but are not even rational. Could we have been using the wrong yardstick all along? What would the universe look like if visualized on an axis scaled by the prime number series?

Order and entropy

I love the concept of entropy, but I'm beginning to wonder what it is. We usually think of entropy as that which tends to disorder, but what is order? Is a glass vase more "ordered" than a very specific collection of glass shards? Or does the order lie in specificity, and can entropy be redefined as a drift from that specificity? Is the glass vase just a particular real number that happens to be an integer, but in all real respects just as ordered as a particular collection of shards? Is that collection of shards a particular real number that happens not to be an integer?

For that matter, could the dropping of a vase to produce those specific shards be considered as a highly specialized manufacturing process - more so than the making of the vase itself, if a lower degree of specificity defines the vase as what it must be to be that vase?

I think it may be useful to look at this by asking a seemingly unrelated question: What is the relationship, if any, between information (or "order") and energy? The more computers compress time, the more energy management becomes a problem.

Perhaps order is specificity, and specificity is energy. Perhaps the universe is an infinity of the unspecified, from which things come into existence by specifying them, and that the process of specifying things is an "uphill" (counter-entropic) one that requires energy. Perhaps mass is simply the most familiar form of specificity, or existence.

Well, that's enough for one post; I'll stop here for now :-)

26 April 2005

Malware: Defending the difference

As at April 2005, we see malware as being of two different types:
  • Traditional malware (worms, viruses, trojans) that have unbounded malicious potential, and which should be tackled formally (i.e. without running the OS they infected)

  • Commercial malware (spyware, adware, dialers, various revenue-redirection scams) that have to curb abusive behavior so their creators can plausibly deny malware status, and which are thus safe to tackle from within the infected OS
This difference is maintained only through legal challenge; it is not a boundary that can be defended technologically. And this is where we are asleep at the wheel.

Currently, several commercial malware push the envelope:
  • Clickless attack through software defects, e.g. Java exploits
  • Active in Safe Mode
  • Resist termination of in-memory threads
  • Resist or DoS anti-malware removal tools
We have yet to see destructive payloads or peer-to-peer spread, but in most other respects, the boundary is blurring and the time is near when we will need formal tools to clean up commercial malware. We are ill-prepared even for traditional malware; the de facto maintenance OS for NTFS-bound XP is a free download that could vanish in a fit of vendor licensing pique, and av tools that run on this are rare and costly, reflecting the FUD and financial risk that developers must face here. There are no mOS-ready scanners for commercial malware as yet.

As long as the legal climate allows vandalism in the name of commerce, we can expect the boundary between commercial and traditional malware to be poorly defended. As technologists, we should get our tools ready; the need may soon be at hand.

20 April 2005

LUA and the One Hand Rule

LUA stands for Lowest User Access (rights), and is the concept that in a world full of rampant malware, we should cower in a basement panic room rather than stride masterfully about the house with a vast array of weapons and power tools to hand.

Personally, I'd rather live in a "Home", i.e. a physical location where safety is assured. In the real world, I live in a house with thick walls, barred windows, and clearly-defined doorways that are locked. In the infosphere, I live in a "network client" that takes candy from strangers, so LUA has its charms until we can get the Home Operating System to "grow up".

Put it this way; if you were forced to live in the middle of an open football field, would you carry weapons and power tools with you at all times? Would you be able to fend off those who would use these against you, 24 hours a day? If not, you'd probably want to lock those valuable, dangerous things somewhere safe until you need them - and that's what LUA is about.

But there's a user acceptance problem; no-one wants to be less powerful, so we like the idea of can-do-anything administrator user account rights. Frankly, when it's our own home computer, we feel we should accept nothing less; we should be safe in our own homes.

The One Hand Rule

Folks who work with big electricity for a living know this safety dictum, and that is; at any given moment, you don't have both hands touching sparky metal stuff at the same time. A veteran electrician may instinctively put his left hand in his pocket as he reaches with the right, in deference to this rule.

The Internet is not a network, because it excludes none. If you like to think of it as a network because it is built out of networking technologies, then consider it the mother of all infected networks that can never be cleaned. Also, try not to think of furniture as trees, just because both are made of wood!

So the "One Hand Rule" for computers is; never have one hand in the Internet while the other has a power tool or destructive weapon in it. This is the key to breaking the "Everyone Loves Admin" deadlock; make the administrator account a drab workplace where no fun abounds and only administrative work can be done. After all, Safe Mode lets you do "more stuff", yet you don't see users wanting to run in Safe Mode all the time. A game that would only run in Safe Mode wouldn't sell, yet most games that require admin rights sell just fine.

The Janitor Account

I'd combine a malware-safer Safe Mode with strong admin rights, as the only place where strong admin rights can be applied. Just as we expect wielders of power tools to be clear-sighted, sober, and knowledgeable, so we should expect the Janitor account user to be undistracted by dangerous fluff such as rich media, and up to speed with a no-frills user interface that shows things as they are; no self-defined icons, persistent handlers, custom screen savers, hiding of dangerous files and so on.

The reason is not simply to punish the user for being in the Janitor account - it has to do with safety. Hiding file name extensions, files and paths hides risk-relevant information that a wielder of power tools needs to know. Normally, you don't care where the mains wiring runs within the walls; you'd rather look at the wallpaper. But if you are drilling holes in the walls, then you need full access to that risk-relevant information.

The other safety aspect is that whenever the system "reaches ahead" of the user, dipping into files to show you custom icons or do other persistent handler stuff, it exposes a potentially-exploitable risk surface to that material - material you have not yet indicated any intention to handle or assume safe. I might choose to list files that I know are dangerous, in order to delete them; I do not want the system running content within these files before I can do so, as a misguided "service" to me.

For the same reason, the Janitor account wouldn't run custom screen savers or offer any other automated running of arbitrary software. You don't want arbitrary software running with strong administration rights, and while we remain blinkered into thinking of such rights as applying to everything a user does during that login, these things have to go when such rights are in effect.

15 April 2005

Reclaiming Your PC

If Microsoft's security Rule #1 - "If a bad guy can run code on your system, it's not your system anymore" - is as true as it is, then it beats me why we are so eager to allow web sites, unsolicited email "message text" and "data documents" to own our systems.

By design, each of these was left to automatically run scripts, which can escalate to raw code. Modern Windows and MS Office are less inclined to automatically run scripts in "messages" and "documents", but web sites and even media files are still getting unwanted traction on our PCs.

So, what do you do when it's "not your system anymore"? You try and get it back! A corollary to Rule #1 might be "If the bad guy's code is not running, you may be able to reclaim your PC", and that informs how I approach such matters.

I found a useful article on this topic here...


...though in some ways it differs from my own approach, which is (terse version)...

Clean the system:
  • Isolate the PC from all networks, i.e. LAN, Internet, Bluetooth, WiFi, IR
  • Formally scan for traditional malware, detect only, log results
  • Read up on the malware found, clean according to caveats (warnings)
  • Safe Mode Cmd Only scans for commercial malware
  • Read up on the malware found, clean according to caveats
  • Manually visualize integration points, manage what you find
  • Repeat for each user account
At this clean point:
  • Purge all web caches, set a sane cache size (e.g. 20M), for each user account
  • Purge Temp files
  • If system running OK, purge all System Restore, manage SR size
  • Create new System Restore point
  • Defrag file systems
  • Apply risk management settings
  • Apply malware wall-outs e.g. Spyware Blaster or similar
  • Set new baselines e.g. for HOSTS backups, etc.
  • Create new System Restore point
  • Make sure firewall is enabled / installed
  • Review LAN network shares; do NOT full-share any part of startup axis
  • Remove File and Print Sharing from unwanted "networks" (Internet, IR, WiFi etc.)
  • Create new System Restore point
Now re-enter the world:
  • Repeat cleaning process on all PCs on your LAN
  • Reconnect cleaned PCs to LAN
  • Reconnect to Internet
  • Get and apply patches
  • Create new System Restore point
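The "purge Temp files" step above can be sketched as a cautious little routine - dry-run by default, and tolerant of files that running apps still hold locked (folder path and behavior are illustrative, not a polished tool):

```python
import os

def purge_temp(temp_dir, dry_run=True):
    """Walk a Temp folder bottom-up, deleting files then empty folders.

    With dry_run=True, only lists what would go. Locked files refuse
    deletion, so failures are collected rather than treated as fatal.
    """
    removed, failed = [], []
    for root, dirs, files in os.walk(temp_dir, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            try:
                if not dry_run:
                    os.remove(path)
                removed.append(path)
            except OSError:
                failed.append(path)
        for name in dirs:
            path = os.path.join(root, name)
            try:
                if not dry_run:
                    os.rmdir(path)
            except OSError:
                failed.append(path)
    return removed, failed
```

Run it dry first, eyeball the list, then run it for real - the same instinct as "detect only, log results" in the malware scans above.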
Here's a cheery, simple and dated description of how to do a formal virus check, from the Win9x days when diskettes were the boot maintenance standard and when DOS mode access was possible via the FATxx file system. If your PC has no diskette drive and/or you are using post-FATxx file systems such as NTFS, then you'd have to do something different that meets the same criteria. Think bootable mOS CDRs and frequently-updated data on USB sticks.

13 April 2005

Tech instincts: Maintaining an undo trail

Apology to geeks: I'm writing this in the style of PC magazines, i.e. pages of friendly feel-good waffle with about one tech clue every 2 PgDn keystrokes :-)

I live in a house full of spiders, leading some visitors to believe that I'm interested in them. But they're there simply because I don't kill them, and I don't kill them because I have no particular reason to do so. Now that my attention is drawn to them, I have indeed found them interesting.

Steam and PCs don't mix, so the bathroom is the least cluttered room in the house. Spiders often fall into the empty bath and I then have to scoop them out. Like ChkDsk, they have no "big picture" awareness and tend to do things that make the task a lot more difficult and risky than it need be.

But spiders are more clueful than ChkDsk ever will be. I noticed that the more panicky they get, the more web silk they chuck out as I chase them about with my piece of paper or whatever (I like spiders, but I'm not as stupid as ChkDsk either).

And that reminded me of a basic tech instinct; as soon as things look as if they may possibly get tough, maintain an undo trail! The spider's silk would allow it to dangle rather than plummet if it ever ran off the edge of something, giving it a wider range of options. Useful, even if you're a little critter with a terminal velocity of about 5 miles per hour.

12 April 2005

Would you trust this hard drive?

As a tech, I wouldn't.
But as a hard drive vendor, would I replace it under warranty?

Full story here.

9 April 2005

Red Flags: Spot 'em early...

I'm a firm believer in theory, as in meta-knowledge that lets you intuitively jump ahead over hours of logic. Intuitively? Well, it's probably because I lack intuition that I have become a fan of laboriously building theory as a structured replacement!

In this spirit, I offer you a few "red flag" indicator phrases...

Why would you want to do that?

The person who says this just does not "get" it. Rewind your argument to try once more to show why TSM ("this stuff matters") and if no joy, consider this person an unmovable rock you will have to flow around. Example:

"Autorunning macros in data files is powerful stuff - but what if someone were to write a macro that overwrote all the files in your root directory?"

' Why would you want to do that? '

People must...

If you hear this in what would otherwise be an enlightened political discussion, then beware; here come the stormtroopers!

"But what if folks don't want to work for the common good?"

' Well, the people must just do the right thing, I mean if ... '

And another thing!

If you catch yourself saying this in the course of an intra-relationship negotiation, you might just possibly be a nag - and a depressed one, at that. If every spark brings out a litany of unrelated complaints, then chances are there's some deeper structural problem that's locking you into a state of conflict and resentment. Do whatever you can to improve your position; the chances are that in so doing, your efforts will benefit more than just yourself.

I spotted myself saying this within my relationship with Microsoft, and am following my own advice!

We can't not install that, because...

"...our system design is bad", is the usual reason. Why would anyone want to not install something? ("Why would you want to do that?") Most likely because there's a risk to it, or some other cost (resources, maintenance commitment, price tag) involved.

When the response is "we can't not install that" for some technical reason, then this implies the wrong code has been generalized across the wrong scopes, i.e. that your damage-control bulkheads are in the wrong place. This can be such a deeply-rooted problem you may be reluctant to re-engineer it, but trust me; it's going to hurt, and keep on hurting, until the need to maintain backward compatibility with this design has finally gone away.

Windows XP abounds with examples; Remote Procedure Call, hidden admin shares, the pervasiveness of HTML within the OS, even the consequences of the Win95 decision to flaunt the new Long File Names feature when naming "Program Files". One of the big lessons of XP Service Pack 2 was how painful it is to rewind dangerous functionality later.

Do I really need to flesh this out? OK, one example; Remote Procedure Call. This exposes code to direct Internet access, and this code is non-trivial enough to be exploitable (Lovesan et al). Microsoft tells us that "if a bad guy can run code on your computer, it's not your computer anymore". By design, RPC facilitates this; by code defect, all the more so.

XP is NT, NT was designed as a network OS, and it treats the Internet as just another network. Because it's so tempting to flatten natural hard scopes (see previous blog entry), certain things such as RPC are rolled out to work seamlessly across networks, as if the local PC and network were all one system (as if they'd been smoking Sun's giggle weed).

So now you have a face-hugger dependency; you can't amputate RPC because the local system relies on it to manage itself. See the problem?

8 April 2005

Use hard scopes as natural cover

Let's pull a few unrelated concepts together...

What is possible is often delineated by hard natural scopes, and overcoming these is generally seen as the objective of progress. For example, it's pointless for me to apply for a job in Norway as I can't physically attend the workplace, but if the nature of my work can be transmitted as data, then that obstacle goes away, thanks to the Internet's role as a ubiquitous data conduit.

The Internet's been likened to the Wild West, in that without any overriding curbs on software behavior, objectives are pursued to the point of open warfare. So you're obliged to view the Internet as a virtual battlefield, as if all the bad neighborhoods in the world could suddenly wormhole their way right up to your front door.

Now when you plan your defences, you tend to take natural hard scopes for granted. If your house backs directly onto a mountain cliff, you don't fret about attacks through the mountain. If the PCs on your LAN are cabled together, you don't fret about other entities being on that LAN unless they get in from the Internet.

On the other hand, if you suddenly take those natural scopes away, you may find your traditional defences have huge blind spots.

Thirty years ago, we would think purely in terms of physical safety. Today we think in terms of Internet threats as well. The two seem quite different; physical threats are localized, whereas Internet threats are anonymous and pose little physical risk.

We know Internet financial crimes such as identity theft are on the increase, and it's also been noted that criminals formerly convicted of physical economic crimes such as muggings, car theft, housebreaking etc. are switching to Internet crime via off-the-peg tools.

We also know that wireless networking needs a lot of attention to secure. Presenters assert this is indeed possible, if you have a few boxes to act as certificate and RADIUS servers, and you disable a bunch of things that are on by default, such as easy-to-exploit WEP.

I see a huge amount of consumer WiFi kit flying across dispatch counters; it seems like many folks automatically go WiFi at the same time as they go broadband. I have to wonder how many of these first-time home networks will have the faintest whiff of WiFi security in place.

Laptops are easily stolen, and new ones support WiFi out of the box. It's easy to cruise around looking for signal and hook in as part of the LAN, thus bypassing any Internet-facing defences, and combine the anonymity of the Internet with boy-next-door physical access. That's a scary combination, and not only for economically-motivated crime.

In physical battlegrounds, combatants haven't relied purely on personal body armour for a few centuries now. Kevlar notwithstanding, modern combatants make maximal use of natural cover, simply because it works better.

Computer game players know this too. Space Invaders players generally don't shoot away all the bunkers to get a clearer shot at the bad guys; they preserve these as cover and hide behind them. Players may use cheats to be able to walk through walls in Doom, but they sure don't use cheats to let the bad guys shoot through walls at them.

So perhaps we shouldn't be so quick to dissolve natural hard scopes that physically air-gap LANs from the outside world. We can never clean the Internet of malware - it is the mother of all infected networks - so all we can do is harden the edge against it. Hence the classic defensive strategy; put a NAT router and/or firewall between the Internet and our LAN. After all, the inside of the LAN is implicitly hard-scoped by where the cables go - as long as you don't go wireless.

6 April 2005

BING'd an XP to new C:, won't boot?

This is a "voodoo" tip, i.e. I know it works, but I can't explain why (oh sure, I can guess for days, but... I'd love to see "feedback 1" and read the answer!)

Here's what happens:
- use BING to copy an XP installation C: from one hard drive to another
- put the new hard drive into the XP installation's original PC
- diskette boot into DOS Mode
- FDisk /MBR to put standard MBR code in place [*1]
- FDisk option 2 to set primary C: as active to boot
- attempt boot into XP from hard drive
- fails with Disk Error
Now comes the "voodoo"...
- boot into BING, partition maintenance (no need to install BING)
- resize C: downwards a few megabytes
- attempt boot into XP from hard drive
- now this works fine
- boot into BING, partition maintenance (no need to install BING)
- resize C: back to original size
- attempt boot into XP from hard drive
- now this still works fine

[*1] FDisk /MBR caveats! This command replaces the Master Boot Record code with standard MBR code, overwriting whatever was there. That's bad in two situations:
  • Non-standard MBR code was required, e.g. boot manager, boot virus, DDO code a la Max Blast, EZ IDE, etc. to work around BIOS limitations

  • No valid 55AAh boot signature was present in the MBR; in such cases, FDisk /MBR will irreversibly zero out the entire partition table, losing all partitions!
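To make that second caveat concrete: the boot signature is just the last two bytes of the sector. Here's a throwaway Python sketch of the check - run it against a saved 512-byte sector image, never a live disk; offsets are per the standard MBR layout (partition table at 446, signature at 510-511):

```python
def mbr_has_boot_signature(sector: bytes) -> bool:
    """Return True if a 512-byte MBR image ends in the 55AAh signature."""
    if len(sector) != 512:
        raise ValueError("expected a 512-byte sector image")
    return sector[510] == 0x55 and sector[511] == 0xAA

# Demo on a synthetic sector image:
blank = bytearray(512)
assert not mbr_has_boot_signature(bytes(blank))  # FDisk /MBR would zero the table!
blank[510], blank[511] = 0x55, 0xAA
assert mbr_has_boot_signature(bytes(blank))      # signature present; safe to proceed
```

If the check fails, save the sector off somewhere before you do anything else.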

I've saved off the MBR and PBR and compared them; no differences other than the cached free space value etc. in the 3rd sector of the FAT32 PBR. So whatever BING changes after a size nudge is within XP's file set or file system structure, and it's something that BING fails to do (or does wrong; perhaps the different PC is a factor) when it does the partition copy.

The next part's also required, but isn't voodoo (as in the classic man, wife and secretary cliché: "I can explain everything" - but won't):
- diskette boot to DOS mode
- Norton Disk Edit with writes enabled (we're about to "go boldly...")
- copy the pre-code bit from first sector of PBR
- paste this into the corresponding area of C:\BOOTSECT.DOS

Chances are that if you're up to speed enough to do the above safely, you'll understand why you may need to do this to get a formerly-working, hard drive based, Boot.ini-mediated DOS mode to work on the new hard drive. Repeat the same procedure to get a hard drive based, Boot.ini-mediated Recovery Console to work, substituting C:\BOOTSECT.DAT for C:\BOOTSECT.DOS in the above example.
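If you'd rather do that transplant on saved sector images than with a live disk editor, here's a rough Python sketch of the idea. It assumes a FAT32 boot sector, where the jump (offsets 0-2), OEM name (3-10) and BPB plus extended BPB (11-89) all precede the boot code proper, so copying offsets 3-89 moves the geometry data without touching the code; the file handling and naming here are illustrative, not a tested recipe:

```python
# FAT32 boot sector: OEM name at offset 3, extended BPB ends at offset 89,
# boot code starts at 90. Copy the whole pre-code span in one slice.
BPB_START, BPB_END = 3, 90

def transplant_bpb(pbr: bytes, bootsect: bytes) -> bytes:
    """Return bootsect with its pre-code region replaced by pbr's."""
    if len(pbr) != 512 or len(bootsect) < 512:
        raise ValueError("expected 512-byte sector images")
    out = bytearray(bootsect)
    out[BPB_START:BPB_END] = pbr[BPB_START:BPB_END]
    return bytes(out)
```

Write the result back over the first 512 bytes of BOOTSECT.DOS (or BOOTSECT.DAT) and you've done the same edit as the Disk Edit paste, minus the white knuckles.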

Today's Google: "this is where your sanity gives in, and love begins"

I've seen a video version of that which is built up entirely of scenes from the movie "Ghost in the Shell". If you've seen the movie, then the video brings it all back in 4 minutes... stunning!

Today's Bonus Google: "scuse me while I kiss this guy"

I wonder why HTML is so ^%$% useless with white space? What you see is hardly ever what you get, even within preview and view within the same program - all too often, contiguous white space is ripped out. This makes it difficult to space out multi-line bullet points, apply the convention of two spaces between sentences, and do ad-hoc bullet points.
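The ripping-out is by design: outside of <pre>, HTML renders any run of whitespace as a single space. A rough Python imitation of the rule (not a real rendering engine, obviously) shows where the two-spaces-between-sentences convention goes to die:

```python
import re

def collapse_ws(text: str) -> str:
    """Roughly what an HTML renderer does to text outside <pre>:
    runs of spaces, tabs and newlines become a single space."""
    return re.sub(r"\s+", " ", text).strip()

print(collapse_ws("Two  spaces   after    sentences.   Gone."))
# -> "Two spaces after sentences. Gone."
```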

I'd also like to know why this editor is so useless at applying a font consistently over a selection of text, and why all HTML editors fail to apply font choice to the numbers of numbered lists. The latter smells like another blind spot within HTML itself. Gah!

4 April 2005

Bug: Win9x Explorer.exe Dwaals, Recovers

I finally got it together to web this long-standing and annoying Win9x bug. To a newbie, it can look like reset button time, and that escalates the impact to file system damage, etc.!

Homage to Hauer

I was thinking about Blade Runner, and how much it was Rutger Hauer's film. The resonance of so many scenes, from the famous pre-death soliloquy to moments such as the revulsion when he first sees J F Sebastian's toys, is 100% Hauer. Acting a non-human with nascent humanity is difficult enough - and Hauer conveys this as effectively as his maker's explicit explanation of the problem, with a mixture of child-like emotions and decisive leadership - but that's just the baseline; Hauer makes it real, and makes us (or at least this viewer) care.

I thought it strange his career didn't take off at that point, but http://www.rutgerhauer.org/ implies he's done far more than Blade Runner followed by blockpopper action roles.

I love P K Dick, but I see Blade Runner as stronger than the "Do Androids Dream of Electric Sheep" story it was based on. As a movie must, it pares down the story, losing the Mercerism thread completely, and re-focuses it on what becomes the dominant theme. It's tragic that PKD is dead, but even more so that he never saw a film that might have changed his mind about "Hollywood". One wonders how much stronger Total Recall and the forgettable film rendition of his "Second Variety" story would have been with his hand guiding the rudder.

Today's tech content; two pages to start off a new "case stories" section at my site.

One is on an atypical presentation of the motherboard bad capacitor problem, and the other is on a strange one-site problem with Eudora vs. "spool" files.

31 March 2005

Quote: On "DOS vs. Windows"

Norman L DeForest posted this response to considerations of "DOS vs. Windows" in the alt.comp.virus newsgroup today...

>The difference:

>DOS takes you between ten minutes and an hour to learn how to get it
>to do something. Windows takes weeks to learn how to get Windows to
>stop doing what you don't want it to do.

30 March 2005

Tip: New use for Bart's PE

Users of Windows XP may be familiar with Bart's PE, which is a free utility that builds a bootable CDR that can operate as a maintenance OS for stricken XP systems.

Like CDR-booted Linux, a Bart's PE CDR can support USB sticks, but only as long as they are present when the CDR boots up. It won't detect a USB stick inserted mid-session.

A less-obvious use of a Bart's PE CDR is as a means of transferring material between USB sticks and older versions of Windows that have poor native support for them. For example, let's say you carry around your latest updates and anti-malware tools on a USB stick, but are confronted with a Win98 PC in the field. Either you don't have your USB stick's driver CD-ROM with you, or you don't want to pollute that PC with drivers for a device it will never see again.

So you can insert the USB stick, then boot that PC off Bart's PE CDR. Using that OS's native support for USB sticks, you can do the transfers that way.

Today's links for Win98 / IE 6 users:



I may have more on the issue referred to in the first link in a day or few!

Today's remembered music albums are a beautifully depressing pair: Lou Reed's "Berlin", which I regard as a singular achievement that puts him up in the firmament even if he did nothing else, and Nico's "The End". Though what we played at tonight's Bridge was an album by The Mediaeval Baebes, as no-one felt like getting depressed :-)

29 March 2005

Wish: Telephone Messaging on XP

Telephone messaging seems to be the creature the world forgot - something that many of us need, but which isn't done very well. In fact, I'm still using Win98SE in deference to the crusty old Bitware 3.03 bundleware that seems the best of a bad lot.

Traditionally, this software has always been served up as bundled with the modem, so there's not a lot of incentive to create a killer phone app. I'd settle for being able to manage more than 100 messages without falling over, and the ability to seek, pause and play from arbitrary points in the message, much as one does with .WAV or .MP3 files; that doesn't seem too much to ask, given that some phone messaging apps already save the messages as .WAV files?
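For what it's worth, once the messages are plain .WAV files, seeking to an arbitrary point really is cheap - it's just a frame index. A minimal Python sketch using the standard wave module (the path and timings are invented for illustration):

```python
import wave

def read_from(path: str, start_sec: float, dur_sec: float) -> bytes:
    """Return raw audio frames from start_sec, for dur_sec seconds."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        w.setpos(int(start_sec * rate))            # seek: just a frame index
        return w.readframes(int(dur_sec * rate))   # hand these to an audio sink
```

No reason a phone messaging app couldn't offer the same slider every .MP3 player already has.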

I wish MS would come up with something here... then again, if anyone knows of something in this line (preferably free), do let me know!

Today's fun link: http://www.dilbert.com

Today's fondly-remembered band: Jayne County and the Electric Chairs. Famous as Wayne County in the punk era for shockers like "Toilet Love", folks had wandered away by the time their third album came out - which is a pity, because that one was a keeper.

25 March 2005

New web site content

A wish for Windows (all interactive settings dialogs to have a Manage button from which these settings can be imported/exported as a .REG), some photo-laden hardware maintenance topics, and the start of what will be the "Gallery of Contentious Assertions".
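To sketch the wish: the sort of thing such a Manage button might emit. The key path and value names below are invented for illustration, not any real app's:

```
Windows Registry Editor Version 5.00

; Hypothetical export of one dialog's settings
[HKEY_CURRENT_USER\Software\ExampleApp\FindDialog]
"RememberSize"=dword:00000001
"Width"=dword:00000280
"Height"=dword:000001e0
```

Double-click to import, hand to Notepad to audit or tweak, check into your deployment scripts - all for free, if only the dialogs would cough the settings up.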

Today's biopsy to Google: "draped across the piano with some surprise"

Today's fun link: http://topicdrift.blogspot.com

Please MS, can we have LESS?

My curious colleagues over in the pro-IT world ask for "less" all the time; less new features in Service Packs, less things left for the "managed" user to play with, and so on. Here's what I'd like less of, and I suspect I'll have supporters in both home and pro camps on these:

Less scrolling!

That means, make every dialog resizable and it would be nice to either start with a sensible size (Win9x era hint: Find often finds more than 5 items) or remember the size the user sets. XP has several badly-sized fixed dialogs, e.g. the one that lists detected hardware driver choices; you can't see whether "Fast-o-matic SVGA FT-5000 Series 1.04.00..." is "...05 Beta Do Not Use", "...04 Win98SE", "...04 XP" or "...04 XP Brazilian Portuguese". Join the dots on what happens next.

Less underfootware!

Trust me: Unless your software lays golden eggs every 10 minutes, I do NOT want it running all the time underfoot. In fact, I'd prolly want to set that golden-egg-laying software to lay 100 eggs an hour between 03:00 and 06:00, thanks. And yes, this definitely means no background indexing; we hated it when it was called Find Fast as inflicted by MS Office, and I don't expect we'll like it any more if it's embedded as an OS or post-FATxx file system "feature" either. And if that indexing service autoruns dropped malware via some exploit, we will hate it all the more.

Less wastage of screen space!

A program isn't easier to use just because the dummy buttons are 200 pixels high. Screen area is a performance resource, just like RAM or HD; please don't squander it on rubbish! We don't buy big monitors and run them just to have everything take up as much space as they did a few years ago at 640 x 480; we either want to see more stuff, or we have to run things larger (low res on big glass) because our eyesight is the limiting factor.

Less functionality!

Testing has to start at the projectorware phase of development, like this:

Dev1: "We need to enrich email with bold, italics, colors, funny fonts..."
Dev2: "Hey, HTML does all that! Just pass it over to MSHTML.DLL"
Tester: "Are you mad? HTML also autoruns scripts and active content!"

Guess which team member missed that presentation...

Today's fun link:


Today's fondly-remembered band: Frankie Goes To Hollywood. A biopsy to Google on:

"...put them outside, but remember to tag them first for identification purposes"