21 July 2006

Keylogger vs. Keylogger Blocker?

I followed up a bit on this Simon Scatt entity here:


Check out the "comments" to that blog post; it seems that the company he's/it's punting makes both keylogging and keylogger-blocking software. I wonder which wins?

The "Windows Constitution"

I'm very happy to see this:


Like a constitution, it provides a yardstick by which behavior can be judged. If problems arise in the future, one could link complaints to this statement as a way of highlighting any divergence from Microsoft's stated intentions.

Users Know Less Than You Think

I like to find big meta-truths that span platforms, and here's one:
No matter how little you expect users to understand about your product, they will understand even less.
I could replace "understand" with "know" or "care", for that matter.

This has been fairly obvious when it comes to end users and software authors; cue horror stories of floppies stapled to letters or copied onto A4 paper, and old jokes about cup-holders and power outages.

It's less obvious, but I suspect equally true, whenever one programmer's code is used by another - either as peers co-coding a project, or one software vendor using code objects (or APIs) created by another software vendor. Those cases also involve a "user" (the coder using the API or object) and "producer" (the author of the API or object).

For example, after spending months developing an ActiveX control for use by other programmers, you may think it reasonable to expect them to read your ReadMe.txt that contains caveats such as "parameter values must be in range". But someone who is using hundreds of such reusable code objects in a project may simply assume how they work, without reading any of those ReadMe.txt files.
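The producer-side defense is to enforce the caveat in code rather than in a ReadMe. Here's a minimal Python sketch; the `set_volume` name and its 0..100 range are hypothetical stand-ins for a "parameter values must be in range" caveat:

```python
def set_volume(level: int) -> None:
    """Hypothetical reusable-component API. The ReadMe caveat
    'parameter values must be in range' is enforced here, so a
    caller who never reads the ReadMe still fails safely."""
    if not 0 <= level <= 100:
        # Reject bad input loudly at the call site, rather than
        # misbehaving later in some hard-to-trace way.
        raise ValueError(f"level must be 0..100, got {level}")
    # ... apply the setting ...
```

The point is that the check costs a couple of lines once, while the alternative costs every one of your hundreds of users a ReadMe they will never read.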

A good test of acceptable expectations is: "What if everyone did what I'm about to do?"
This is also a good bulwark against badly-behaved software. What if all installed applications:
  • Required admin rights to run?
  • Kept pestering the user to "register"?
  • Added themselves to the top of the Start Menu?
  • Added themselves to the startup axis to "fast start"?
  • Added their own ad-hoc systems to pull down updates?
  • Added their own underfootware content indexing system?
  • Patched into the shell to process file content whenever files are listed?
  • Smashed file associations to just one "open" action for their own application?
Multiple non-default actions per file type were added in Windows 95, yet most applications behave as if we still lived in the "ug, see file; ook, open with rock" age of Windows 3.1. This reveals another truth about software vendors: they are likely to be as self-serving as the most rapacious malware authors, wherever they can get away with it.

When "Search" Finds Trouble

Once a bit of grey chit-chat is done, this post will lightly consider some "Social Engineering" risks of HTML and search.

This blog gets updated slightly more often than my web site, which says more about the web site than this blog! Readers used to less than one post a month may wonder about my relative bloggorrhea of late; I guess it's catch-up time, and there's more to talk about. I often find I have too little time to go through the newsgroups, but enough time to post a blog or start on a web page, and now that is what I'll do.

Often long blog silences are because I've been (far) away from keyboard, as I'm blessed with reasons to travel combined with an ongoing enjoyment of doing so. I'd love to tell you about some excellent news in Vista, but I still need to pin down what/how I can tell you and what is still NDA.

One thing I can tell you is that there are 200+ fake anti-spyware programs out there, and one of these is likely to be what my recent dogged commenter is pushing:

Simon Scatt said...

Many programms include spyware modules. Use anti-spyware for protect your privacy. As for me, I like professional anti-spy software like PrivacyKeyboard by Raytown Corporation LLC. You can download it here (URL snipped)

The thing is, "Simon Scatt" posts exactly the same comment to every post I make, no matter what that post is about - which smells like a bot. A combination of tech skills required to bot past the OCR challenge, plus the ethical dubiousness to actually do so, bodes poorly for the safety of whatever they are trying to push at you. Just Say No, and don't click that link!

Speaking of links clicked, I got a fright the last time I fired up this blog at http://quirke.blogspot.com to edit it. I thought "uh-oh, it's finally happened..." until I realised the link I'd entered should have been http://cquirke.blogspot.com

HTML being what it is, I could quite easily show you http://cquirke.blogspot.com as a link, which is reason enough to consider HTML unfit for use as a generic "rich text" medium between arbitrary (untrusted) entities. Retro-fitting anti-phishing logic to web browsers is an appropriate way to run after the horse after it's bolted from the stables, because web browsers have to live and breathe HTML. But a horse has no place in the living-room, and using HTML throughout the system as generic "rich text" (e.g. for email message "text" and elsewhere) has exactly that effect.
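To illustrate why HTML-as-rich-text invites exactly this trick, here is a small Python sketch (standard library only) that flags anchors whose visible text is itself a URL, but differs from the real href target - the classic "looks like one link, goes to another" pattern:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect anchors whose visible text looks like a URL but
    does not match the actual href -- a common phishing pattern."""
    def __init__(self):
        super().__init__()
        self.suspect = []      # (visible text, real target) pairs
        self._href = None
        self._text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = self._text.strip()
            if text.startswith("http") and text != self._href:
                self.suspect.append((text, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://quirke.blogspot.com">'
             'http://cquirke.blogspot.com</a>')
# auditor.suspect now holds the mismatched pair
```

This is the sort of anti-phishing logic browsers retro-fit; the deeper point stands that nothing in HTML itself stops the text and the target from disagreeing.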

A bigger risk is that folks rarely type explicit URLs anymore; they either re-use links like the ones above, or they increasingly search rather than link. I wanted to link my text "200+ fake anti-spyware programs" to the CastleCops article that raised this issue, but as I didn't keep the link, I tried to search for it instead. I found something else I used that is a bit more topical, but the same search results could just as easily lead me to click something that bites.

Microsoft's been in love with search since MS Office started pushing Find Fast. A search for "Find Fast" is revealing; first comes an unrelated bit of foistware, then comes a flood of "how do I get rid of this thing?" links, starting with one from Microsoft themselves. Yet with each new version of MS Office, Find Fast has been more difficult to get rid of, and XP has the same thing built into the OS. Now that "Google envy" is kicking in, search is likely to pervade Vista's UI.

I do see some logic in this, in that the newest computers may better carry the overhead of search indexing, and Microsoft has leveraged deep new OS features (i.e. beyond the efficiencies of NTFS) in Vista to minimize this impact. We may well find that, once we use it, the adverse impact isn't as bad as we'd expect, and we may choose to live with it.

But performance impact is only one objection to dumbing down computer use from folder navigation to guessing at names or content. More worrying are the safety implications - an opportunity is created for incoming files to do what that top link in the "Find Fast" search does; thrust something inappropriate (and probably dangerous) into your face instead of what you wanted or expected.

20 July 2006

Security End-Users Can Trust

Here, I'm referring purely to the mechanics of how a user can believe what is on the screen, have faith that passwords can't be cracked, and so on. They say that "justice must not only be done, but must be seen to be done"; by the same token, security must be seen to be done, or we are asking users to place blind faith in the good will and competence of those whom the user is obliged to trust.

The core problem is that humans are weaker than computers when it comes to the amount of pure and arbitrary data they can perceive and remember.


No matter how tightly-coded the security validation logic, and how strong the key strength, what the user will eventually see (and typically, pay cursory attention to) will be a bunch of pixels on the screen. Anything that can fake those pixels, will get trusted.

We know that to prevent an attacker brute-forcing or forging something, there has to be a minimum amount of information present. We like passwords to be randomized across a minimum number of bits, and we provide hard-to-copy cues in forgery-resistant material. So we have foil strips and watermarks in bank notes, hard-to-manufacture copper-on-aluminium software installation CD-ROMs, and so on.

When it comes to displaying something in a forgery-resistant manner, we are restricted to pixels that have 16 million possible color values, of which users may distinguish 100 or so at best. The entire screen area may be as low as 640 x 480 pixels. Anything can set any pixel to any color, so there's no way to prevent forgery.

Even if forgery could be prevented, humans cannot perceive and appreciate arbitrary pixel patterns. The brain will derive patterns from the raw data and the mind will evaluate these patterns. The raw data itself will not be fully "seen"; only a limited number of derived patterns.


A large number of bits is the best-case strength for a key, applicable only if possible values are randomized over the key space, and if "cribs" (encrypted information for which the plain-text can be guessed) are not available. WEP failed both of these criteria; key strength was devalued by OEMs who left several bits in the key at known default values, and WEP traffic included a lot of stereotypical packets that provide "cribs".

Humans usually don't remember raw data; instead, they remember algorithms that can create this data. This skews values within the key space from a truly random spread, to preferred values that match the way humans think, and thus weakens the key strength.

So if it's easy to remember, it's easy to guess. If it's not easy to remember, then the user will write it down (or worse, enter it into a file on the system) and your strong password system becomes a weak and unmanaged token system. If you're going to use a token system anyway, then it's better to do this properly (e.g. biometrics, USB fobs, etc.).
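To put rough numbers on that weakening: best-case key strength is the log2 of the count of equally likely keys, and a human "algorithm" collapses the key space. A Python illustration (the 20,000-word vocabulary is an assumed figure, chosen just for the example):

```python
import math

def entropy_bits(choices: int, length: int) -> float:
    """Best-case key strength: log2 of the number of equally
    likely keys, valid only for a truly random spread."""
    return length * math.log2(choices)

# Eight truly random printable-ASCII characters (94 choices each):
random_pw = entropy_bits(94, 8)       # about 52 bits

# A human-memorable scheme: one word from a 20,000-word vocabulary
# plus a two-digit suffix -- easy to remember, easy to enumerate:
human_pw = math.log2(20_000 * 100)    # about 21 bits

# The guesser's advantage, measured in doublings of search speed:
advantage = random_pw - human_pw      # about 31 bits
```

Eight characters look the same on the password-policy checklist either way, but the memorable version hands the attacker a search space some billions of times smaller.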

User-managed passwords may be acceptable if you just want the semblance of due diligence. You can point to your password policy, shrug about bad workers who break the policy, and seek a scapegoat whenever things go wrong. But once something that is essential to make things work is also disallowed, you lose management control, and examples of that abound.

Multiple Targets

Most assessments of key strength against brute-force or weighted-guess attacks assume that only one particular system is being targeted. The odds change considerably if you don't care what target you penetrate, and have millions of targets to choose from.

Instead of having to back off after 10 attempts due to some sort of password failure lock-out, you can simply make 9 attempts on a few thousand systems every hour or so. Eventually, you'll break into something, somewhere, and all stolen money is equally good.
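The arithmetic behind that shift is simple to sketch in Python; the per-system success rate below is an assumed figure, purely for illustration:

```python
def p_at_least_one(p_single: float, targets: int) -> float:
    """Chance of cracking at least one system, given independent
    attempts against `targets` systems that each succeed with
    probability p_single."""
    return 1.0 - (1.0 - p_single) ** targets

# Say 9 guesses against a lockout-protected account give a
# 9-in-10,000 chance per system (an assumed rate for illustration):
p_single = 9 / 10_000

# Against one chosen target, the attack is nearly hopeless...
# ...but spray those same 9 guesses across 10,000 systems:
p_spray = p_at_least_one(p_single, 10_000)   # better than 99.9%
```

The lockout that makes any single account safe does almost nothing against an attacker who is indifferent to which account falls.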

Obviously, consumer ecommerce on the Internet presents this opportunity for a one-to-many relationship between attacker and victim. Slightly less obviously, it also facilitates a many-to-many relationship (the bane of database design) when the attacker can use multiple arbitrary malware-infected PCs as zombies from which to launch the attacks.

12 July 2006

Repairing XP's Firewall

This is another example of what happens when you break the "Safe should be boilerplate" rule (see the previous two posts).

Windows XP has a built-in firewall that is quite effective at keeping intruders out, but does little to prevent malware already in the system from calling home. This is in keeping with a current weakness in Microsoft's approach to Windows and malware - an almost total disregard for the need to reclaim PCs from the clutches of malware infection.

Once malware is active, it can take action against your defenses and tools, including XP's built-in firewall. This is as easy as attacking Safe Mode, and for the same reason; the firewall depends on registry settings that are easy to attack once you have admin-level access to the registry.

There's a good article on this situation here:


The previous post in this blog describes how to fix damaged Safeboot registry information; you can use similar tactics to fix the SharedAccess information that defines the firewall state, or you can use the sharedaccess.reg as linked from Ramesh's article mentioned above.

Repairing Safe Mode (Safeboot)

Here's an example of what happens when you break the "Safe must be boilerplate" rule.

Many folks rely on Safe Mode to tackle active malware, on the basis that malware is less likely to be running if much of the startup axis is avoided when Windows starts up. But Safe Mode is defined in the registry, so anything that gets to run in XP can kill it off - a risk that's always been there, and one that I've highlighted in private forums often enough.

Now that malware is doing what I'd predicted, there's a need to repair the damage when encountering it in the field. This blogger's article...


...describes three ways to do this, but these methods involve running Windows to do so. You may not want to do that, if the plan was to first do malware scans and cleanup in Safe Mode Command Only before allowing the possibly-infected Windows to run.

If you are using Bart PE CDR boot as your initial-contact malware cleanup platform, then you can repair Safe Mode, the XP firewall, and any other registry settings damage in the following generic way: by harvesting settings from previous registry states and merging these into the current registry, before you try to boot the damaged hard drive XP installation.

Understanding Bart registry access

Bart PE is a free utility that builds a bootable subset of XP, from which one can launch many tools written for Windows. A problem with running such tools from a Bart PE CDR boot is that the registry they see will be that of the Bart boot, and not the hard drive installation you are trying to maintain.

Bart integrates tools as "plugins", using .INF-based wrappers that serve to "install" the tool at the time the bootable CDR is compiled. One such plugin is RunScanner, which facilitates transparent access to the inactive registry hives on the hard drive as if they were in effect.

RunScanner patches into the process it's running and redirects all registry calls to treat the designated hives on the hard drive as if they were the active registry. Command line parameters for RunScanner control whether there is to be a delay before this kicks in (so that the program can initialize through the Bart registry first), which hives are to be used, and so on.
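The mechanism is easiest to see in miniature. This is not RunScanner's actual code - just a conceptual Python sketch of call redirection, where a wrapped read function answers from the offline (hard drive) hives instead of the live ones:

```python
# Toy stand-ins for live (Bart boot) and offline (hard drive) hives:
LIVE_HIVES = {r"HKLM\SOFTWARE\Example": "Bart-boot value"}
OFFLINE_HIVES = {r"HKLM\SOFTWARE\Example": "hard-drive value"}

def read_value(path, hives=LIVE_HIVES):
    """Stand-in for a registry read; normally hits the live hives."""
    return hives[path]

def redirect_to(offline_hives, func):
    """Conceptual version of RunScanner's trick: every registry
    call made through the wrapper is answered from the offline
    hives instead of the live registry."""
    def wrapper(path):
        return func(path, hives=offline_hives)
    return wrapper

redirected_read = redirect_to(OFFLINE_HIVES, read_value)
```

A child process spawned by the wrapped program knows nothing of the wrapper and falls straight back to the live registry - which is the analogue of the tool-spawning complication mentioned below.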

Child processes are generally not affected, and that can complicate the use of tools that spawn processes which access the registry, e.g. Nirsoft's RegScanner.

Once you combine RunScanner with Regedit (as a standard Bart build may do automatically), you are in a position to fix registry problems as if you were running the stricken installation - without the risk of actually running that installation!

Binding arbitrary registry hives via Regedit

XP's Regedit allows you to bind arbitrary hive files to HKEY_LOCAL_MACHINE as if they were part of the registry. The hives won't generally be used by the system, but it makes it a lot easier to browse them and export things you'd be interested in.

In Regedit, you'd highlight HKEY_LOCAL_MACHINE and then go File menu, Load Hive - an option that is greyed out as unavailable if anything other than HKLM is selected - and then browse for a hive to bind. You will be prompted for a name to use, and the hive will appear as an extra subtree under HKLM using that name.

You can then browse the added material and export parts as .REG files in the usual way (tips: select "Win9x/NT4 Registration Files" in the type drop-down if you want to save as 8-bit ANSI for easier editing outside XP, and force a .TXT extension to reduce the risk of inadvertent import).

To prepare this material for import into the active registry, you'd have to search-and-replace the name you used when binding the hive to the correct name for the active form. It helps if you use a unique name when binding the hive, to reduce the risk of replacing the wrong stuff!

Finding backup copies of registry hives

The XP registry hive files fall into two types: system and per-user. You won't see them unless your shell is set to show all files and contents of system locations; also, you should not hide file name extensions, otherwise you can't tell which are hives and which are .LOGs, etc.

The active system hive files are located in your System32\Config directory and have file names with no extension at all; the names are SYSTEM, SECURITY, SAM, SOFTWARE and DEFAULT.

The active per-user hive files are stored in the base of each user account subtree in "C:\Documents and Settings", plus there is the system user hive in your System32\Config\systemprofile directory. The file name is NTUSER.DAT.

As in Win9x, backups of the initial system registry, created when the OS is installed, are kept as a last-resort fallback. These are held in the Repair directory within your Windows base directory with the same names as the active forms, though for some reason they may appear to have a .BAK extension when seen via Regedit's "load hive" dialog. Similar baseline backup hives are kept in System32\Config as .SAV files, but these may contain less content.

Unlike Win9x, XP does not automatically maintain a set of fresh registry backups. The "last known good" backup merely consists of part of one hive, stored within the same hive file; anything that corrupts the file will thus likely kill the internal backup.

I recommend setting up ERUNT as an overnight weekday Task to create such backups, keeping one for each day of the week by using the relevant ERUNT command line parameters in each Task. If you have ERUNT running in this way, you will have those backups to use, in addition to the ones I'll describe in a little while.

If running, the System Restore process creates fresh registry backups as part of each restore point. This is one reason why it's best not to purge System Restore, even though infected restore points will re-infect a clean system if they are restored. The file names are modified but obvious, and can be found in "C:\System Volume Information\_restore{**}\RP???\snapshot", where ** is an identifier unique to the particular XP installation, and ??? is the number of the restore point. It is by using an installation identifier that XP's SR avoids one installation's SR data overwriting another, as happens with Windows ME's \_RESTORE subtrees.
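The directory layout above is regular enough to walk programmatically. A hedged Python sketch of locating the newest snapshot directory (paths are as described above; in practice the System Volume Information tree is hidden and permission-protected, and would be reached via a Bart boot rather than the running OS):

```python
import re
from pathlib import Path
from typing import Optional

def latest_snapshot(volume_root: str) -> Optional[Path]:
    """Return the highest-numbered RP*/snapshot directory under any
    _restore{...} subtree of System Volume Information, or None."""
    svi = Path(volume_root, "System Volume Information")
    best, best_num = None, -1
    for restore in svi.glob("_restore*"):
        for rp in restore.glob("RP*"):
            m = re.fullmatch(r"RP(\d+)", rp.name)
            snapshot = rp / "snapshot"
            if m and snapshot.is_dir() and int(m.group(1)) > best_num:
                best, best_num = snapshot, int(m.group(1))
    return best
```

Remember the caveat from the Safeboot discussion below: the newest snapshot is also the one most likely to date from after the malware went active, so "latest" is a starting point, not automatically the best choice.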

Note that the System Restore data is the only place where you will find backup copies of the per-user registry hives - an even more astonishing XP fragility than the lack of an independent automatic file-level hive backup facility!

Fixing the Safeboot registry subtree

Safe Mode is defined in HKLM\SYSTEM\CurrentControlSet\Control\Safeboot, where "CurrentControlSet" is a pointer to one of the ControlSet001, ControlSet002 etc. subtrees. When seen via Bart boot, RunScanner and Regedit, no control set is active, so you won't see any CurrentControlSet. In any case, you should operate on each explicit control set rather than just the current one!

The best previous registry hive from which to harvest Safeboot will usually be the SYSTEM (may be seen as System.bak via Regedit) in the Repair directory. Backups of SYSTEM in the RP???\snapshot directories may be more up to date, but if these date from after malware went active, they may be equally damaged or as malicious as what you are trying to fix.

Bind the hive file into Regedit as described previously; I'd use a nice unique name such as "!!ABCXYZ!!" when prompted. Then browse into the ControlSet, highlight Safeboot and go File menu, Export. I'd save as a "Win9x/NT4 Registration File" with the file name in quotes and using the .TXT extension; then I'd edit the file in Notepad or similar and replace all \!!ABCXYZ!!\ with \SYSTEM\ and save that. By importing that file via Regedit, I'd merge in the Safeboot to the corresponding ControlSet - to do all ControlSets, I'd edit the file accordingly before importing it.

Disclaimer: This stuff involves direct editing of the registry via Regedit using redirection tools from a free 3rd-party maintenance platform. Be careful, keep backups of everything you change, and eyeball to ensure you are in fact seeing the correct registry!

10 July 2006

"Safe" Should Be Boilerplate

You can think of "safe should be boilerplate" as a rule to avoid a basic conceptual error that leads to bugs and exploits.

Now that the shoe has dropped (ITW malware is killing Safe Mode by deleting the registry content that defines it), I can be a bit more public about this concept - that if something is to be "safe", it can't be defined by editable baseline data. Examples:
  • Web browser "blank" page
  • Safe Mode startup axis
Safe Mode was a bit safer in Win9x, because there was no startup axis; the whole idea was to load no non-core drivers and run no startup axis integrations at all (this was true as long as the "* trick" wasn't applied).

But XP's Safe Mode is flawed in several ways that create opportunities for malware:
  • Entries can be added to (or persisted into) its startup axis
  • It uses a different user account, therefore different per-account settings
  • It runs a screensaver, which can be re-defined
  • File associations now allow per-user overlay
  • The "Cmd Prompt Only" shell can be re-defined
  • The whole thing depends on a re-definable registry subtree
Some malware now destroys XP's Safe Mode by deleting the registry content that defines it. I found this blog link that describes the problem and offers solutions:


I have a case like this at the moment, and will be trying a "case 4" approach as I described as a comment to that blog entry. If it works, and I can remember the exact method I use, I may write that up as a new blog entry here :-)

A less-obvious example of the "Safe should be boilerplate" rule is the option not to use a password. Normally that's done as a "blank password", rather than a true boilerplate absence of a password - and that becomes absurd when coupled with the usual "to set a new password, first enter the current password".

The trouble with the "Safe should be boilerplate" rule is that it precludes any fix-it-later patching. You have to make your boilerplate perfect, even if that means simplifying your code towards triviality in order to approach that perfection!