I've used a few on-demand antivirus scanners and scanners for commercial malware (usually known as "anti-spyware") and generally they're just not designed for troubleshooting environments such as Safe Mode and Bart CDR boot.
Fancy display mode required
Common advice is to use these scanners from Safe Mode, where screen resolution is usually low (say, 640 x 480) and color depth is low, too (typically 16 colors).
A Squared is almost unusable in low res, because the dialog boxes ASSume you have at least 800 x 600 to play with - often the UI controls are below the edge of the display when in Safe Mode, so you have to guess the number of times to press Tab in order to keyboard the "go" button. The need for this high resolution has nothing to do with the amount of content that needs to be displayed on the screen, and everything to do with wasteful eye-candy UI design.
AdAware delights in using subtle colors that turn to stippled mud in Safe Mode's low color depth, and some needed UI cues (e.g. which UI control is selected) vanish completely.
Mouse required
Both AdAware and Spybot border on the unusable when a mouse is not present, as may be the case in troubleshooting conditions. Freshly-installed Spybot starts with a set of "wizard" dialogs that defy attempts to switch focus from the keyboard, and AdAware's keyboard navigation is highly ambiguous at best.
Installation required
The free BitDefender 8 on-demand scanner and MS Antispyware (now Windows Defender) both require Windows Installer to install, and that service is not present in Safe Mode. In order to use these tools, you first have to run normal Windows - so that the malware you are after is almost certain to be active and well-positioned to interfere with the installation and use of the scanners.
I haven't yet got the above tools, or AVG Antispyware (ex-Ewido), to run from a Bart CDR boot. Trend SysClean, A Squared, AdAware and Spybot are better there, with Spybot claiming the ability to scan relative to the inactive hard drive registry hives without needing RunScanner redirection. In practice, I find Spybot detects less when run from a Bart CDR boot than when it is run from Safe Mode.
21 December 2006
Vista vs. email
This blog post was interesting:
http://windowsvistablog.com/blogs/windowsvista/archive/2006/12/19/windows-vista-and-protection-from-malware.aspx#comments
It's an interesting expectation - that Vista would magically be immune to malware attacks - but that expectation is taken seriously in this post, which views the problem through the eyes of the totally inexperienced user. By blocking access to all incoming attachments, Vista's native Windows Mail is able to foil 8 of the 10 common attacks tested; the ones that got through did so by using file types that some email applications don't block.
My expectations are far more modest:
- System should be immune from clickless attack
- User should receive accurate risk information
- System should act within the bounds of that risk information
- Should malware go active, user should be able to clean it
For many (most?) users, blocking all attachments is too broad a sword to live with. What these users expect is to look at an email message and attachment link, and assess whether the attachment is safe to "open". That in turn requires information about the attachment file type that is easy to understand (a mass of raw .ext extensions is not) and can be relied upon (in contrast to Vista's default "open based on hidden info rather than visible .ext" behavior).
Windows has been designed with many things in mind, but type discipline is not one of them. There's been great stress on per-user rights in NT, in keeping with the needs of corporate IT, but this maps poorly to consumer needs. The code/data distinction has been undermined, and the unrealistic objective of "you can do everything without having to know anything" assumes that consumers won't have the skills to assess and act upon file type risk information.
The last point, "should malware go active, user should be able to clean it", is a topic in itself, which goes about safety awareness stretching from a maintenance OS through "Safe Mode" and into safe handling for suspect locations, such as newly-discovered drives or subtrees designated as holding risky material, much as "My Documents" is designated as holding "user data".
Here are a couple of unrelated quick things...
Screening spam
Another thing I'd like to see in an email application is better filtering, based on criteria other than various text matches. Specifically, I'd like to filter out "messages" that have under 100 characters of visible message text plus embedded (or remote) images. This is emerging as a common form of spam, with two effects: firstly, there's no text to filter/match, and secondly, the entire "message text" can be one huge clickable surface.
Firefox's killer feature
Spell checking within text edit fields - a must-have in an age of online text composition, e.g. blogging, forum posts, comments and web mail!
Up until now, Microsoft has positioned spell checking as part of MS Office, with the unique vendor advantage of integrating this application component into the OS (e.g. Outlook Express).
These happy days should be over, thanks to Firefox 2, just as free Google email killed the acceptability of the 1-2M email storage norm for paid-for ISP email "services".
16 October 2006
Bart vs. BAD_POOL_CALLER
BAD_POOL_CALLER is one of those scary STOP errors that one may see in XP (that is, once you kill the duhfault "restart on system errors" setting). This particular case was an XP SP2 system that was said to crash straight into this on startup.
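If you haven't killed that setting yet, it's a single registry value; a minimal sketch for XP, run from an admin account (AutoReboot = 0 leaves the STOP text on screen instead of instantly restarting):
Rem Keep STOP errors on screen instead of auto-restarting (XP)
Rem Takes effect from the next boot
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v AutoReboot /t REG_DWORD /d 0 /f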
Uneventful 12 hours in MemTest86, with no spontaneous lockups and no spontaneous reboots (which would have booted the Bart CDR I left in place during the test); motherboard caps OK. Bart CDR boot: HD Tune passes SMART, temperature and surface on both hard drives; file systems OK on ChkDsk, all volumes.
Formal malware scans fine, until the first test that requires RunScanner to access the registry hives on the hard drive. As soon as I click RunScanner's dialog OK, the system STOPs.
Riiiight... next, I harvest spare registry hives from SR Restore Points in the C:\SVI subtree. Then I pick a trivial scanner that I've set to run via RunScanner; in this case, Stinger. It doesn't matter what it is; all I want to test-to-fix is whether I can initiate registry access to the hard drive hives. If that no longer dies, I may have fixed the problem.
I run Stinger in this way, each time choosing a different user account and not checking the "use all hives" checkbox in the RunScanner dialog box. Every user account is fine except the one they actually use, which dies the blue death.
Now that I've narrowed it down to a single file, I rename away that user account's NTUSER.DAT (which is the per-user registry hive), copy in the most recent spare from the most recent Restore Point, rename that into action as NTUSER.DAT, and re-test; this time it works as well as the other accounts did.
I'm interested in a single hive causing this common head-scratching problem, so I keep the "bad" and "fixed" copies of the hive, which appear to be the same length. I'll FC them to see if there's some specific difference (either structural, or a recent install... tho I'd expect the latter to change the file length) that causes the problem, and update this article if that looks interesting.
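For what it's worth, the swap-and-compare is only a handful of commands from the Bart boot. A sketch - the user name, Restore Point number, GUID and SID here are purely illustrative:
Rem From a Bart boot, where C: is the inactive hard drive installation
CD /D "C:\Documents and Settings\SomeUser"
Rem Preserve the suspect hive by renaming it away - never delete it
REN NTUSER.DAT NTUSER.BAD
Rem XP Restore Points keep per-user hives as _REGISTRY_USER_NTUSER_<SID>
COPY "C:\System Volume Information\_restore{GUID}\RP123\snapshot\_REGISTRY_USER_NTUSER_S-1-5-21-1111111111-2222222222-3333333333-1004" NTUSER.DAT
Rem Afterwards, compare the bad and good copies byte for byte
FC /B NTUSER.BAD NTUSER.DAT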
Meantime, Bart has saved the day yet again; what could so easily have been "just" wipe and re-install turned out to be a few unattended hours on "the prelim" plus around one hour of interactive work in Bart. Did I mention I liked Bart?
That drill-down method again...
- check RAM and hardware first
- Boot into Bart CDR
- use a tool via RunScanner
- choose one user account at a time
- if all break, suspect a system hive (common to all accounts)
- if only one account breaks, it's that account's hive
- preserve the damaged hive by renaming it away, not deleting it
- harvest replacement hives from recent Restore Points
- try with the newest, then second-newest, etc.
- do not try any of the above in hard-drive-booted Windows
- compare bad and good copies of the hive for differences
15 October 2006
Open Source Eudora
Most of the time you'd be reading rants here about how awful and shifty vendors are - perhaps every industry is as bad, but I'm "further away" from most? - so it's a pleasure to celebrate software vendors who do the right thing...
http://www.eudora.com/faq/
Now here's a vendor with a popular product, but one that isn't their central interest. They could have just killed it, and told those who complain that it is their right to do so; after all, we see that all the time with music corporations, who delete titles that are "too old and unpopular to make money" while still forbidding even the original artist to distribute them for free.
But instead, they are shifting the product into Open Source, while committing to honor their obligations to those who have purchased the Paid version. Those who use Sponsored mode (myself included) can now stay in this mode with full functionality forever, even when the ads stop. Unless there's some sting in the tail so hidden I can't see it, it looks like an excellent result!
Previous "Do The Right Thing" award
I've been this impressed a few times before; the last time was when Computer Associates ceased the popular free InoculateIT antivirus suite. Again, they announced the move and then supported the free version with updates for a longer period than commercial vendors' one-year subscriptions, and they offered a low conversion price to the feeware eTrust that replaced it.
The InoculateIT story was particularly impressive, as the initial announcement - "free updates until we have to change the scanning engine code" - was made within a few months of the release of a new version of Windows. It would have been so easy to claim the need for a new engine to overcome compatibility issues with the new Windows version, but they didn't do so; InoculateIT remained free for many months thereafter.
When InoculateIT ceased to be free, it also ceased to be the de facto free/non-warez antivirus product. AVG stepped into those shoes; it was always around, along with Avast and later AntiVir, among others, but there was greater confidence in InoculateIT at that time. AVG also did the right thing when they dropped the free AVG 6 product to consolidate on AVG 7 as the sole code base; they offered a free version of AVG 7, and pushed alerts about the cut-off date to AVG 6 installations for some months before updates ceased for the old version.
When you have such good "no strings attached" free antivirus products, why would anyone want to put up with Symantec's embedded commercial malware in Norton AV? If you do a Google( "Why I don't use Norton" ), you will see I'm not the only one who avoids it.
14 October 2006
Rungbu.A Exploits Bad Design
This case study illustrates several issues I've raised before, as well as a few lessons, such as "there's no 'one problem per case' rule", "best practice isn't bullet-proof" and "one antivirus scanner isn't enough".
I was on site doing something else, when I was called to check out a problem with opening Word documents, which the user attributed to an encounter with a dubious diskette.
The first thing I noticed was that her PC wasn't showing file name extensions, contrary to the way I generally set up PCs...
"Hey, you can't see the file name extensions! Without that, you don't know what type of file you're about to open! That's dangerous!"
'No, that's OK; I can see the Word icon, so I know the files are Word documents'
This was followed by an explanation of why this can't be trusted, while she insisted it was OK, and 'was always like that'. I pointed to two files as an example; a pale (normally hidden) one called "Some file name" and a bold one also called "Some file name". I right-clicked on each, and sure enough, the hidden one was the .DOC while the visible one was an .SCR - so I wasn't too surprised when the setting to not hide file name extensions would not "stick".
"You're malwared", I said, and after shutting down and setting CMOS to boot CD, I booted up the Bart CDRW I tend to have on me at all times. Bart would boot on this crusty old Win98SE system (333MHz, 64M RAM)... if only the 32-speed CD-ROM would read CDRW disks... so it's heigh-ho, back to base we go.
Who's stupid?
As a geek, my first reaction was to consider the user foolish for trusting icons as an indication of file type. Then I thought; why should a user know that the most dangerous file types can set whatever icon they like, and that .scr files are raw code, and thus dangerous? Why doesn't the user interface clearly flag which files are code and which are data, as well as the type, and disallow any content to misrepresent itself? Why are file name extensions hidden by duhfault, anyway, and why are things still as brain-dead in Vista?
That's the problem with bad design - it never gets patched, because it "works as designed". We had years of MS Office macro and VB malware before that was fixed, years of Outlook and Outlook Express auto-running scripts in HTML "message text", and we still have Format in the middle of hard drive context menus while Backup, Check for errors etc. are buried under Properties, Tools. Stupidity is found not only in end users.
Best Practice can still fail
As usual, I started work on the system with several hours in MemTest86, to make sure the system was safe to run at all. Then I booted my Bart CDR, eyeballed SMART details in HD Tune, did a surface scan of the 4G hard drive, and checked SMART details again; no change in SMART, no surface errors, OK. A ChkDsk confirmed the four file systems were OK, so I created a session base directory on one volume, and set that as Bart's Temp location. I could then shrink the Bart RAM disk as it's no longer needed for Temp, and create a pagefile on hard drive to relieve constraints imposed by 64M of RAM.
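The session setup itself is nothing fancy; a minimal sketch, with the path purely illustrative (the pagefile step needs its own tool, not shown):
Rem Create a session base on one hard drive volume and use it for Temp,
Rem so the Bart RAM disk can be shrunk on this 64M RAM system
MD C:\BartWork
MD C:\BartWork\Temp
SET TEMP=C:\BartWork\Temp
SET TMP=C:\BartWork\Temp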
Then I started my antivirus scanning wizard and went about my other work. A while later, I see the second av scan is still stuck on the same file, so I run HD Tune again; it shows blank SMART details, and a surface scan picks up "one bad sector".
I immediately pull the mains, pull the hard drive into another PC, and copy off everything from DOS mode using LCopy from Odi's LFN Tools, starting with the data set and carrying on until most stuff is backed up. I had hard failures on C:\Windows (bad disk) and on the session subtree to which the Bart av wizard would have been logging the scans (file system corruption).
Next, I went in with DiskEdit, confirming bad clusters throughout the entire C:\Windows directory chain. Noting the cluster address of the Windows directory, I searched for subdirectories on C: (fortunately it's a small C:, not the whole hard drive) and ballpointed the "." cluster addresses of all that had ".." pointing back to the lost Windows directory. Then I created scratch directory entries in C:\ to point to these, and copied them off.
I then did a raw image copy of the fortunately-small C: volume in case I needed to recover more stuff later, and finally back in DiskEdit, I "erase-marked" the Windows directory so that scanners traversing the file system wouldn't fall into a pit of bad sectors.
Having got what I could off the stricken hard drive, I put it back in the PC it came from and got back to my Bart antivirus scanners etc.
Rungbu.A
Four out of five of the initial "detect only" scans detected the same files as infected, but each called the virus something different. One called it a generic trojan, another called it SillyWorm, each with a high variant suffix. Only Sophos gave it the unique name of Rungbu.A, though their site only had a descriptive page for the Rungbu.B variant. The sixth scanner was set to kill, and did; thereafter there was nothing for the remaining scanners to detect.
Reading the description's Advanced page revealed this malware to be anything but "generic". It left the system tattoo'd so that I had to Regedit before I could stop Windows from hiding file name extensions.
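On an NT-family system, that un-tattooing could be scripted rather than Regedit'd; a sketch (this is the usual value involved, though a given malware may tattoo others too, and the Win98SE box in this story needed Regedit anyway):
Rem HideFileExt = 0 tells Explorer to show file name extensions again
REG ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v HideFileExt /t REG_DWORD /d 0 /f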
We get annoyed when vendors don't patch known exploitable surfaces, and highly irate when there are ITW (in the wild) malware already exploiting those surfaces. Yet we've seen so many malware with double file name extensions such as README.TXT.pif, and these raw code file types can and do set their icons to match the faked file type.
But hey, not a problem; it "works as designed".
Re-entry
Finding the plethora of vintage application disks for this PC would not be fun, so I decided to preserve the old installation instead. I called the site and asked them to find the disks if they could, in case I'd have to rebuild, then set out to fix the installation.
First, I partitioned a replacement hard drive (a used 40G, jumpered to act as a 32G in deference to the old PC's BIOS limitations) and copied everything to one of the logical volumes. Then I fresh-installed Windows 98, and copied that subtree to the logical volume as well. Next, I copied everything except the old Windows child subtrees into place, then finally identified and copied the recovered child subtrees over what was installed with Windows.
All of this was done from DOS mode, but I couldn't extract recovered registries etc. from the latest RB*.CAB from there, so I had to go back into Windows at this point. That crashed Explorer, so I set shell=Winfile.exe in System.ini, and from there I could Extract the registry files. Back to DOS mode to drop in these registry files, as well as older backed-up Vmm32.vxd and "Exit to DOS.pif", and now everything looks OK - though I'll try everything out in case there are needed files that are missing from the Windows base directory.
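For reference, the Winfile fallback is just the shell= line in System.ini's [boot] section:
[boot]
; use File Manager as the shell until Explorer is healthy again
shell=Winfile.exe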
Duhfaults are forever
It's a good thing the variant of Rungbu that infected this PC didn't also put "hide hidden and system files" into effect, and that we don't use Microsoft's duhfault settings. If we did, there would be no visible indication the system was malware'd; we'd see only one "Some Name" file, which would appear to "open" just fine (the malware code runs invisibly and then spawns and opens the original Word document). Unless someone tried to change the Explorer settings, and became puzzled when the changes didn't "stick", there'd be no indication that anything was wrong.
And that means the companion malware files would have found their way into every data backup, too.
It's all very well saying "it's only the default setting; you can change it", but defaults are forever. These unsafe defaults are all you get in "Safe Mode", will recur after "just" formatting and re-installing Windows, will be the baseline for every newly-created user account, may be re-asserted by domain servers or when account rights are limited, and will be what users see whenever they use arbitrary PCs elsewhere. Defaults should always be truly safe!
02 October 2006
Vista's Maintenance OS
This is one of the best bits of news I had from the Vista Labs a few months back! We were told to spread the word about what wasn't NDA, but this item was NDA at the time, so I had to sit on it. But today, Google( Vista boot DVD WinPE ) shows it's public knowledge now :-)
http://www.apcstart.com/site/jbannan/2006/08/1082/windows-pe-20-a-tiny-version-of-windows-for-system-maintenance
This turns what would have been a crisis (would Bart CDR boot be compatible with Vista's NTFS, registry, etc.?) into what may be a reason for cautious consumers to favor Vista over XP.
A while back, I tested Vista beta 2 in more depth than recent time has permitted with the newer Customer Preview build I have now, and it's certainly come a long way since that earlier build. I specifically wanted to test the mOS, and that turned out to be very interesting indeed...
Those familiar with Bart PE would guess what I'd be looking for first - can it boot off a USB stick? Can you hot-swap USB flash drives? Can you use the optical drive or will the system crash if you eject the Vista boot DVD? Is there a GUI?
No, there's no GUI - it's more like Safe Mode Cmd Only, which is a good thing in many ways. I'd have been worried if Explorer were there as the shell, in case Vista's richer shell offered exploit surfaces to malware on the maintained system.
Yes, you can eject the boot DVD! In this recent build I tested, the Vista installation DVD is the mOS boot disk, and just as you'd UI your way to Recovery Console after booting an XP CD, so it is that you GUI your way to "command prompt", which is likely to be WinPE 2.0 itself.
After booting, the DVD gets a different drive letter, compared to the booted OS files. The free space and a few other cursory tests indicated these were different volumes, and neither is an alias of hard drive space. You can eject the Vista DVD, insert other CDRs or DVDRs, and use them directly. I suspect the mOS runs from a RAM drive - and it worked quite happily in 512M RAM. What I didn't check was whether it uses a page file on the HD.
Vista development takes off from the most recent Server 2003 SP1 code base - and this is a good code base for a mOS, because it no longer resets the USB during the boot process, as XP SP2 does. So the odds are favorable for booting Vista mOS off USB flash drives, etc.
Unlike a Bart boot, Vista mOS will "see" USB flash drives inserted and changed on the fly - they don't have to be present at boot time, as they do with Bart, and swapping them is OK too. (Tip for Bart users; a memory card reader present at boot will generally allow hot-swapping of cards after boot - so I share SD cards between Bart sessions and my camera, instead of using slower and more write-limited flash drives).
Running tools from Bart CDR
Just for laffs, I ejected the Vista DVD and popped in my Bart CDR. The nu2menu (the standard "Start button" menu shell for Bart) worked fine, and many of the tools worked too. Because the Bart drive letter is not the same as the boot drive letter, my own "Is this booted or Autorun?" batch file logic, e.g....
Set Prog=Ad-Aware.exe
Set Launch=%~dp0..\RunScanner\RunScanner.exe
Set Opt=/m /t 0
...
Rem %~d0 is the drive this batch file runs from; if it matches the booted
Rem system drive, we booted from this Bart disc, so launch the tool via
Rem RunScanner to redirect its registry access to the hard drive's hives
If "%SystemDrive%"=="%~d0" (
Start %Launch% %Opt% %~dp0%Prog%
) Else (
Rem Drive letters differ; the disc is being read from another running OS
Start %~dp0%Prog%
)
...concludes Bart is being run from the "native" system OS, and not as the booted OS.
That means my tools weren't running through RunScanner, which is probably prudent at this stage. Yes, that means registry-orientated tools such as AdAware or HiJackThis will not "see" the HD installation's registry, but until we know RunScanner and legacy registry access methods are compatible with Vista's registry, it's safer this way.
Many tools didn't work, because they relied on files and settings within the running OS. The Bart plugins for these tools would have included these in the Bart mOS, but that's not the OS that's in effect here - so if these resources aren't in the standard Vista code set, then the tools won't work. That's to be expected; after all, if I'd just scraped them onto Bart without using the plugin system, they wouldn't have worked there either.
All this testing was with the original Vista DVD - I haven't gone as far as building a new Vista mOS boot disk, nor have I explored "plugging in" tools as one does for Bart. I'm not sure if either of these things would be possible, or whether the answers would change between the build I tested and the final release.
Ah, for the time to really explore this stuff!
Conclusions
It's really good news to see a mOS for Vista, even if it's still not really orientated to mOS work. For example, it won't operate unless there's a visible Vista installation on the hard drive, and the RAM testing component writes to and then boots from the hard drive installation - both of which are bad practices when dealing with systems that are really ill.
I think this is because the mOS is still rooted in its origins as a "(p)re-install environment", originally intended for use on perfect fresh hardware. It was partly in response to this, as well as to some Bart off-shoots that also break mOS best practices, that I wrote the earlier "How to design a maintenance OS" post in this blog.
The important thing is that it's there, on the installation DVD (a break-through, if you'd ever peered longingly at MS WinPE through the previous licensing sphincter) and that the architecture seems fundamentally sound. It needs to be tested more rigorously to see how well it stays within the rules of mOS best practice, but it's already more than I'd dared hope for!
25 September 2006
Banking on Java
Way back in 2003, South African bank ABSA were in the news after customers had lost money through hacking. Here's a report from 21 July 2003 and another one with more detail. The story was that some uber-hackers robbed ABSA, were caught, and now Internet banking is safe again.
However, check out the detail on Bugbear B from June 2003; an in-the-wild malware that was noted to steal information from a number of banking domains in several different countries, including South Africa. Was there one uber-hacker attacking ABSA, or multiple tiny hacks by folks who figured out how to make use of Bugbear B?
The South African banking industry responded to the ABSA debacle by boasting of new improvements in security, implying that what happened to ABSA would never happen at their bank. These improvements included an on-screen, mouse-driven number pad to avoid keylogging, and a free (but UI-less and thus uncontrollable) MyCIO antivirus and firewall from McAfee.
At this point, the article you are reading is going to jump around seemingly-unrelated topics. Have faith; it will all come together at the end...
Microsoft Java
Sun sued Microsoft over the MS Java VM that was included in Windows and Internet Explorer, as Microsoft's Windows-specific extensions broke the "write once, run anywhere" goal of cross-platform usability. Sun contended that developers attracted to MS Java would be locked into Windows by these extensions.
Recently, I cleaned up an XP SP2 system that included Java malware, and which was running the old MS Java VM. I found instructions on removing MS Java, and the steps looked like those that should be done automatically by an uninstaller - if Microsoft had followed their own advice to developers and provided one for MS Java.
Not only did Microsoft provide no Add/Remove entry for the MS Java VM, but running one of the manual steps to remove it popped up a dialog box with the odd warning that "Internet Explorer will no longer be able to download from the World Wide Web". Now I can understand Java applets not working or pages being unable to display as the site intended, but not being able to do standard downloads? Smells like a smoking gun to me...
Sun Java
By now, most users of Java will be using Sun's Java Runtime Environment (JRE) instead of Microsoft's Java Virtual Machine (VM). We've also become accustomed to the need to fix code defects by updating subsystems such as Java, applying code patches, and so on.
A long-standing bone of contention with Sun has been that when you install a new JRE, the old one remains in place - and we suspected this old and vulnerable code could be used and thus exploited by java malware. We bitched about this all the way from 1.4.xx through 1.5.xx, and yet Sun just carried on installing new JREs while leaving old ones (at 100-150M apiece) in place.
It seemed that unlike Microsoft, Sun just didn't "get" what patching was all about. They seemed to think we downloaded and installed new JREs because we wanted kewl new features, and kept the old ones around for backward compatibility - whereas what we really want to do is smash this "backward compatibility" so that malware could not exploit flaws in the old versions.
Finally, Sun came clean and admitted what we'd always suspected; that a Java applet could specify which version of JRE it would like to be interpreted by, and the current version would obligingly hand off to the applet's JRE of choice.
Java malware
The first known Java virus was written in 1998, and detected as StrangeBrew. Since then, Java has been attacked and exploited in various ways, and both Microsoft and old Sun Java JREs are considered to be hi-risk exploitable surfaces. By now, Java malware abounds, and indeed there was such malware on the system I recently cleaned up. The beat goes on.
Note the dates involved in some of the above links, e.g. Sun JRE 1.4.2.xx was found to be exploitable way back in 2004 (the "Sun" link above) - as well as the versions that are vulnerable, such as 1.4.2_04.
Internet Banking in 2006
After cleaning up the system, I uninstalled the MS Java VM, checked that no old Sun JREs were present, and installed Sun JRE 1.5.0_08 as the only Java engine on the system. After a while I had a call to say that Internet Banking wasn't working anymore.
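Checking what Java engines are present is quick, by the way; a sketch, assuming Sun's default install location on an XP system:
Rem Sun JREs install side by side here; old ones show up as extra folders
DIR /B "%ProgramFiles%\Java"
Rem Ask the active Sun engine to identify itself
java -version
Rem If the MS Java VM is still present, its jview.exe loader will run and print a banner
jview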
Indeed, it wasn't working, so I called the bank's tech support, explained the system's history and why the MS Java VM had been removed, and they gave me a link to download a fix. The fix turned out to install the MS Java VM again, which I disallowed.
I called back to ask about an update that would work with current Sun Java, and they said yes, the newest version of the software no longer needs MS Java. I was a bit puzzled to hear it took them this long to switch, given that MS Java was pulled from XP in the days of SP1a, and SP1 is now so old that it's about to lose all further testing and patching, with SP2 as the new baseline.
So we rushed off to the city to collect an installation CD for their newest software, as it is not available as a download. This also did not work, and after another tech call, it turned out that this newest software does not support any Sun JRE beyond 1.5.0_05, so I was advised to fall back to that from the 1.5.0_08 I was using.
I noticed that the new banking software installed Sun JRE 1.4.2_03, which is ancient and has been vulnerable to attack since 2004 at least. I uninstalled that old JRE when the banking software had finished installing, and after shutting down and restarting Windows, I tried the new banking software, which again failed to work.
After a bit of technical discussion, it turned out that the new banking software's real JRE threshold is in fact 1.4.2_03, and the only reason it "works" up to 1.5.0_05 is that it relies on these newer JREs to pass control back to 1.4.2_03.
This is really quite nasty, because users will think they are protected against Java exploits because they installed the latest JRE, while in fact the banking software is undermining this safety by slipstreaming in an old exploitable JRE. It makes a mockery of banking's usual assertion that they do their best to maintain security, but are let down by users who fail to keep their PCs safe and clean. There's something odd in being forced to accept an exploitability risk in order to use security-orientated software.
I haven't named the bank in question (it's not ABSA this time), because they are the only bank I've had reason to check out. For all I know, most or all of our local banks may be just as negligent, so it would be unfair to single out this one just because I found out about them first!
However, check out the detail on Bugbear B from June 2003; an in-the-wild malware that was noted to steal information from a number of banking domains in several different countries, including South Africa. Was there one uber-hacker attacking ABSA, or multiple tiny hacks by folks who figured out how to make use of Bugbear B?
The South African banking industry responded to the ABSA debacle by boasting new improvements in security, implying that what happened to ABSA would never happen at their bank. These improvements included on-screen mouse-driven number pad to avoid keylogging, and free (but UI-less and thus uncontrollable) MyCIO antivirus and firewall from McAfee.
At this point, the article you are reading is going to jump around seemingly-unrelated topics. Have faith; it will all come together at the end...
Microsoft Java
Sun sued Microsoft over the MS Java VM that was included in Windows and Internet Explorer, as Microsoft's Windows-specific extensions broke the "write once, run anywhere" goal of cross-platform usability. Sun contended that developers attracted to MS Java would be locked into Windows by these extensions.
Recently, I cleaned up an XP SP2 system that included Java malware, and which was running the old MS Java VM. I found instructions on removing MS Java, and the steps looked like those that should be done automatically by an uninstaller - if Microsoft had followed their own advice to developers and provided one for MS Java.
Not only did Microsoft provide no Add/Remove entry for the MS Java VM, but running one of the manual steps to remove it popped up a dialog box with the odd warning that "Internet Explorer will no longer be able to download from the World Wide Web". Now I can understand Java applets not working or pages being unable to display as the site intended, but not being able to do standard downloads? Smells like a smoking gun to me...
Sun Java
By now, most users of Java will be using Sun's Java Runtime Engine (JRE) instead of Microsoft's Java Virtual Machine (VM). We've also become accustomed to the need to fix code defects by updating subsystems such as Java, applying code patches, and so on.
A long-standing bone of contention with Sun has been that when you install a new JRE, the old one remains in place - and we suspected this old and vulnerable code could be used and thus exploited by java malware. We bitched about this all the way from 1.4.xx through 1.5.xx, and yet Sun just carried on installing new JREs while leaving old ones (at 100-150M apiece) in place.
It seemed that unlike Microsoft, Sun just didn't "get" what patching was all about. They seemed to think we downloaded and installed new JREs because we wanted kewl new features, and kept the old ones around for backward compatibility - whereas what we really want to do is smash this "backward compatibility" so that malware could not exploit flaws in the old versions.
Finally, Sun came clean and admitted what we'd always suspected; that a Java applet could specify which version of JRE it would like to be interpreted by, and the current version would obligingly hand off to the applet's JRE of choice.
Java malware
The first known Java virus was written in 1998, and detected as StrangeBrew. Since then, Java has been attacked and exploited in various ways, and both Microsoft and old Sun Java JREs are considered to be hi-risk exploitable surfaces. By now, Java malware abounds, and indeed there was such malware on the system I recently cleaned up. The beat goes on.
Note the dates involved in some of the above links, e.g. Sun JRE 1.4.2.xx was found to be exploitable way back in 2004 (the "Sun" link above) - as well as the versions that are vulnerable, such as 1.4.2_04.
Internet Banking in 2006
After cleaning up the system, I uninstalled MS Java VM, checked that no old Sun JREs were present, and installed Sun JRE 1.5.008 as the only Java engine on the system. After a while I had a call to say that Internet Banking wasn't working anymore.
Indeed, it wasn't working, so I called the bank's tech support, explained the system's history and why the MS Java VM had been removed, and they gave me a link to download a fix. The fix turned out to install the MS Java VM again, which I disallowed.
I called back to ask about an update that would work with current Sun Java, and they said yes, the newest version of the software no longer needs MS Java. I was a bit puzzled to hear it took them this long to switch, given that MS Java was pulled from XP in the days of SP1a, and SP1 is now so old that it's about to lose all further testing and patching, with SP2 as the new baseline.
So we rushed off to the city to collect an installation CD for their newest software, as it is not available as a download. This also did not work, and after another tech call, it turns out that this newest software does not support any Sun Java JRE beyond 1.5.005, so I was advised to fall back to that from the 1.5.008 that I was using.
I noticed that the new banking software installed Sun JRE 1.4.2_03, which is ancient and has been vulnerable to attack since 2004 at least. I uninstalled that old JRE when the banking software had finished installing, and after shutting down and restarting Windows, I tried the new banking software, which again failed to work.
After a bit of technical discussion, it turned out that the new banking software's real JRE threshold is in fact 1.4.2_03, and the only reason it "works" up to 1.5.0_05 is that it relies on these newer JREs to pass control back to 1.4.2_03.
This is really quite nasty, because users will think they are protected against Java exploits because they installed the latest JRE, while in fact the banking software is undermining this safety by slipstreaming in an old exploitable JRE. It makes a mockery of banking's usual assertion that they do their best to maintain security, but are let down by users who fail to keep their PCs safe and clean. There's something odd in being forced to accept an exploitability risk in order to use security-orientated software.
I haven't named the bank in question (it's not ABSA this time), because they are the only bank I've had reason to check out. For all I know, most or all of our local banks may be just as negligent, so it would be unfair to single out this one just because I found out about them first!
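Given how easily an old JRE can slip back in like this, it's worth auditing what Java engines are actually installed. Here's a minimal sketch of such a check; the install path, folder-name pattern and version threshold are all assumptions to adjust for the system at hand:

```python
import os
import re

# Assumed default Sun Java install location - adjust for the system at hand.
JAVA_DIR = r"C:\Program Files\Java"

# Treat anything older than this as a risk; the 1.4.2_03 slipstreamed in
# by the banking software would fail this test.
MINIMUM = (1, 5, 0, 8)

def jre_version(name):
    """Parse install folder names like 'jre1.5.0_08' or 'j2re1.4.2_03'."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)(?:_(\d+))?", name)
    return tuple(int(g or 0) for g in m.groups()) if m else None

if os.path.isdir(JAVA_DIR):
    for entry in os.listdir(JAVA_DIR):
        ver = jre_version(entry)
        if ver and ver < MINIMUM:
            print("Old, potentially exploitable JRE still present:", entry)
```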
15 September 2006
How To Design a mOS
A maintenance OS (mOS) is one that you can use when you daren't trust your system to boot into the OS that is installed on it. Through the DOS and Win9x years, we were used to diskette-booted DOS in this role - but NTFS, > 137G hard drives, USB etc. make this less useful in XP.
As at September 2006, Microsoft provide no mOS for modern Windows, but you can build one for yourself by using Bart PE Builder (and perhaps you should!). Out of the box, Bart CDR meets the criteria for a safe mOS, but you can botch this when "enhancing" it.
There are all sorts of jobs one can do from a mOS, but mainly, it's:
- Diagnostics
- Data recovery
- Malware management
Re-establishing safe functioning
Running a PC assumes various levels of functionality work perfectly. When a PC "doesn't work", one has to re-establish each of these in turn, before one can stand on each to reach the next. At each stage, one has to not use what cannot yet be trusted.
Is it safe to plug into the mains?
PCs with metallic rattles when shaken, may not be - a loose metal object could short out circuitry and burn it out. It's best to check inside the case for loose objects; salty wet dust; metal objects, flakes or rinds; power connectors dangling onto pins on circuit boards - and also that the power supply is set to the correct mains voltage, and that rain didn't fall into the case and power supply while the PC was being carried in.
Is the hardware logic safe?
This is mainly about RAM, but implicit in a 12-hour RAM test is a test of whether the PC can stay running that long, or will spontaneously reset or hang. The ideal RAM checker would also display processor and motherboard temperatures, and possibly operating voltages, best served with latched lowest and highest detected values.
Is the hard drive safe to use?
This concerns the physical condition of the hard drive, and is tested retrospectively by looking at the S.M.A.R.T. details, and also by test-reading every sector on the drive. It's important not to beat the drive to death; ideally, the surface test should avoid getting stuck in retry loops when a failing sector is encountered, and should abort when the first bad sector is found. The testing process should not attempt to "fix" anything!
Is the hard drive safe to write to?
Certain contexts (e.g. requests to recover deleted data) define the hard drive as being unsafe to write to, because material outside the file system's mapped space is not protected from overwrites. Otherwise, the drive may be considered safe for writes if the file system contains no physical errors, plus the hardware and physical hard drive must pass their tests.
Is the hard drive installation safe to run?
In addition to all of the above, this requires the presence of active malware to be excluded - and in practice, this may form the bulk of your mOS use. There are many challenges here, given that even a combination of anti-malware scanners is likely to miss some things that you'd have to look for and manage by hand.
Is it safe to network?
This concerns what's on the rest of the network (i.e. are all other computers on the LAN clean, and is WiFi allowing arbitrary computers to join this network?) and whether your system is adequately separated (NAT, firewall, patching of network edge code) from the 'net. The latter question has to be asked twice; once for the mOS (if you are networking from it) and once for the hard drive installation when this is finally booted again.
Boot safety
Many boot CDs are not safe, because they will automatically chain into booting the hard drive unless a key is pressed within a short time-out period. This is particularly dangerous, given that the chaining process ignores CMOS settings that would otherwise define what hard drives are visible, what device should boot next, or whether the hard drive should boot at all.
Every bootable Windows installation disk from Microsoft fails this test. Standard Bart PE is safe here, but has a plugin setting that can select the same automatic chaining to hard drive behavior. The Bart-based Avast! antivirus scanning CD enables this, and thus fails the test, as may other Bart-based boot disk projects.
Many mOS tasks take a lot of unattended clock time to run, starting with RAM testing, then hard drive surface testing, then virus scanning or searches for data to recover. If anything should cause the system to reset (remember, this is a sick PC being maintained) then it will fall through to boot the hard drive, thus running possibly-infected code in possibly-bad RAM that writes to an at-risk hard drive and file system. Disaster!
Even if you have tested RAM, hard drive etc. and now consider the hardware to be trustworthy, an unexpected reset will usually dispel that trust. The only safe thing for a mOS boot disk to do under such circumstances, is to stop and wait for a keypress (with no time-out fall-through).
It's tempting to have a mOS disk boot straight into a RAM check, as that's generally what one should do after unexpected lockups or resets, but that can make it easy to miss spontaneous resets during an overnight RAM test. You'd wake up, see the test still running and no errors found, but for all you know it may have reset and restarted the test a dozen times.
Testing RAM
At the time one tests RAM and perhaps core motherboard and processor logic, one can assume nothing to be safe. So the mOS and the programs you run from it should not write to the hard drive, or even read it (as a bad-RAM bit-flip can change a "read disk" to a "write disk").
I haven't figured out how to integrate RAM testers such as MemTest86, MemTest86+, SIMMTester etc. into the same CDR as Bart, so I use a separate CDR for this. I then remove the CDR after it's booted and swap it for another that will boot but not access the hard drive, such as a different RAM tester or a DOS boot CDR.
I'd love a RAM tester that showed system temperatures, but I haven't seen one that does.
Hardware compatibility
One would prefer a mOS that works on any hardware without having to have "special" drivers added to it, and Bart generally passes this test, unless oddball add-on hard drive cards or RAID are in use. Even S-ATA hard drives on the current i945 chipsets will work from Bart.
Bart will detect USB storage devices at boot time, but won't detect changes to these thereafter. So you'd have to insert a USB stick before boot, and not pull it out, swap it, add others, or add the same one back after changing the contents elsewhere. However, Bart treats card reader devices as containing removable "disks", so you can add and swap SD cards etc. quite happily. For this and other reasons, I generally use SD cards instead of USB sticks.
You cannot remove the Bart disk during a Bart session, and that means no burning to CDRs from most PCs.
Memory management
A mOS has to take no risks that are not initiated by the user, and on a sick PC, everything is a risk until testing and management re-establishes it as safe.
So a mOS should make no assumptions about the hard drive's contents, and should not automatically access or "grope" material on it, run code from it, or commence networking. That also means not using the hard drive for swapping to virtual memory or temp file workspace - and that makes memory management a challenge, especially when some of the available RAM is already used as a RAM drive.
A standard Bart CDR will create a small RAM drive and locate Temp files there, and will prompt before commencing networking. I've modified mine to leave networking inactive, and added on-demand facilities to change RAM drive size, relocate Temp location, create a page file on a selected hard drive volume, and start networking if required.
My usual SOP is then to divert Temp to a newly-created location on the hard drive, once I've tested the physical hard drive and logical file system. If RAM is low, I shrink the RAM disk and create a page file on the hard drive, before starting programs that will need Temp workspace (e.g. anti-malware scanners that extract archives to scan the contents).
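As a rough sketch of that diversion step (the folder name and scanner path below are hypothetical, and this assumes the drive has already passed its physical and file system tests):

```python
import os
import subprocess

# Hypothetical location for the diverted Temp workspace, created only
# after the physical drive and file system have been tested as safe.
new_temp = r"C:\BART_TEMP"
os.makedirs(new_temp, exist_ok=True)

# Changes to this environment map are inherited by child processes only,
# so scanners launched below see the new Temp while the mOS itself
# carries on using its RAM drive.
env = dict(os.environ, TEMP=new_temp, TMP=new_temp)

# Launch a (hypothetical) scanner that extracts archives to Temp.
subprocess.call([r"X:\tools\scanner.exe"], env=env)
```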
Testing hard drive
The usual advice is to use hard drive vendors' tools, or ChkDsk /R. Neither is really acceptable, but for different reasons.
Hard drive vendor tools tend to display a summary S.M.A.R.T. report, which can be "OK" even when S.M.A.R.T. detail shows multiple failed sectors have been detected and "fixed". The surface scan may be useful, as long as it doesn't "fix" anything. Then there may be "deeper" tests that are data-destructive, such as "write zeros to disk" or a pseudo-"low level format".
ChkDsk /R is unacceptable because it's orientated to "fixing" things without prompting you for permission. First it tests the file system logic and "fixes" it, so that when it tests the surface of the disk, it can "fix" bad clusters by re-writing the contents elsewhere in the file system. All of which is unacceptably destructive if you'd rather have recovered data first.
Instead of these, I use HD Tune for Windows, which will run from Bart CDR just fine. It ignores the contents of the hard drive entirely, reports S.M.A.R.T. detail that is updated in real time even during other tests, can test hard drives over USB and memory cards (neither will show S.M.A.R.T.), and displays the hard drive's operating temperature (again, updated in real time) no matter which test is currently in progress.
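HD Tune is a GUI tool, but the same S.M.A.R.T.-detail principle can be sketched from a command line. This assumes the open-source smartmontools package (a stand-in for illustration, not what the text above uses), with the device name as a further assumption:

```python
import subprocess

# Dump full S.M.A.R.T. attribute detail; "-A" is smartctl's
# attributes-only report. The device name is an assumption.
output = subprocess.check_output(["smartctl", "-A", "/dev/sda"], text=True)

# These attributes reveal sectors that were quietly "fixed" - detail
# that a summary "S.M.A.R.T. status OK" verdict would hide.
for line in output.splitlines():
    if any(key in line for key in ("Reallocated_Sector_Ct",
                                   "Current_Pending_Sector",
                                   "Offline_Uncorrectable")):
        print(line)
```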
Testing file system and data recovery
I haven't any good tools for NTFS, alas, so I use ChkDsk without any parameters that would cause it to "fix" anything. If the file system is FATxx and the hard drive is < 137G, I prefer to use DOS mode Scandisk, as that allows interactive repair, and DiskEdit for when I'd rather do such repairs manually.
If data is to be recovered, I have a few semi-automatic tools in my Bart that are sometimes effective - but before using them, I prefer to copy off files and do a BING image backup of any NT-family partition that is to remain bootable.
I usually keep core user data on a 2G FAT16 volume, so if that requires data recovery, it's small enough to peel off as raw CDR-sized slabs using DiskEdit. I can then reformat the stricken data volume and get the PC back into the field, while I operate on the volume as pasted onto a different and working hard drive. FAT16's large data clusters mean any files that can fit in a single cluster, can be recovered intact even if the FATs are trashed.
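The cluster arithmetic behind that last claim is easy to check; a quick sketch:

```python
# FAT16 allows at most 65,524 data clusters, so a "2G" volume is forced
# up to 32K clusters (the largest standard FAT16 cluster size).
max_clusters = 65524
cluster_size = 32 * 1024

# Practical volume ceiling: about 2,047M - the familiar "2G" FAT16 limit.
print(max_clusters * cluster_size // 2**20, "MiB")

# Any file of one cluster or less occupies a single contiguous run,
# so it survives intact even if both FATs are trashed.
print("Single-cluster files: up to", cluster_size, "bytes")
```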
Malware management
A mOS will often have to work on infected systems, so it must never run code from them unless the user explicitly initiates this. That requirement goes beyond not booting from the hard drive, to not including the hard drive in the Path, and not handling material on the hard drive in a "rich" enough way to expose exploitable surfaces.
A mOS should also not "grope" the hard drive for other reasons; e.g. some of the material may include bad sectors that would bog the mOS down in retry loops, or deranged file system logic that could crash it. When your file manager of choice lists files, you want no scratching around in file content for icons or metadata.
Standard Bart is safe in this regard. There's no "desktop" in the hard drive file system sense, and the file managers that are included do not grope metadata when they "list" files. However, many Bart projects use XPE or similar to improve the UI by using Explorer.exe as the shell; I prefer not to do this, because doing so may expose exploitable surfaces.
A mOS should perform no automatic disk access - thus no indexing service, no System Restore, no resident antivirus and no thumbnailing.
Many malware scanners and integration checkers require registry access, and that is complicated when you have booted from a different OS installation. If simply used as-is, these tools would report results based on the Bart CDR's registry, not the one on the hard drive.
The solution for Bart is the RunScanner plugin. This redirects registry access to the hard drive installation for the tool that is run through it, but not for child processes that this tool may launch. There are parameters to specify which hives to use, and to delay the switch from Bart to hard drive hives so that the tool can initialize itself according to the former before use on the latter.
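The underlying redirection idea can be illustrated by hand with the reg.exe that ships with XP: load a hive from the inactive installation under a temporary key, query it, then unload it. The hive path and mount-point name below are assumptions:

```python
import subprocess

# Hive from the inactive hard drive installation (path is an assumption).
hive = r"C:\Windows\System32\Config\SOFTWARE"
# Arbitrary temporary mount point under HKLM (name is hypothetical).
mount = r"HKLM\HD.SOFTWARE"

subprocess.check_call(["reg", "load", mount, hive])
try:
    # Inspect the hard drive's Run key, not the Bart CDR's own registry.
    subprocess.call(["reg", "query",
                     mount + r"\Microsoft\Windows\CurrentVersion\Run"])
finally:
    subprocess.check_call(["reg", "unload", mount])
```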
Any tests that rely on run-time behavior (such as LSPFix, some driver and service managers, and most rootkit scanners) will not return meaningful results during a mOS session (unless you wish to test the behavior of the mOS). In particular, drivers and services may list a mixture of "live" and registry-derived results, thus blending these from the mOS and hard drive. Interpret such results with care.
Any changes you make from the mOS will not be monitored by the hard drive installation. This is generally desirable, as it prevents malware intervention, or Windows itself updating registry references so that malware may remain integrated. But it also means no System Restore undoability, and quarantined material from various scanners may be lost, and/or may fail to restore when attempts are made later.
For this reason, I usually scan to kill when dealing with intrafile code infectors and other hard-core malware, but scan to detect only, when it comes to commercial malware that I expect to pose more problems due to botched removal than malicious persistence. I defer clean-up of those to a later Safe Mode Cmd Only boot, so that undoability is maintained.
When it comes to rootkits, these are exposed to normal scanning just like any other inert file. Tools that aim to detect rootkit behavior will not have any such behavior to detect, unless the mOS has triggered the malware into action. It can also help to save integration checks (such as HiJackThis or Nirsoft utility logs) as redirected by RunScanner and compare these with logs saved from Safe Mode or normal Windows. Unexplained differences may suggest rootkit activity during your "Safe" or normal Windows sessions, unless the mOS tests were done based on the mOS's registry rather than the hard drive's hives.
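A simple way to spot such differences is a plain text diff of the two logs; a sketch with hypothetical file names:

```python
import difflib

# Hypothetical file names: one log saved under RunScanner redirection
# from the mOS, one saved from a normal Windows session.
with open("hjt_mos.log") as f:
    offline = f.readlines()
with open("hjt_normal.log") as f:
    live = f.readlines()

# Entries present in the offline (redirected) log but missing from the
# live log may be cloaked by a rootkit during normal Windows sessions.
for line in difflib.unified_diff(offline, live, fromfile="mOS",
                                 tofile="normal Windows"):
    print(line, end="")
```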
Beyond the mOS session
A mOS disk can be useful even when not being used as a mOS. For example, it can Autorun to provide tools for use from Windows, be used as storage space for updates and installables, and can operate as a diskette builder for tasks the mOS cannot do from itself.
As an example of the last, my own Bart CDR can spawn bootable diskettes for BING, RAM testers, and various DOS boot disks containing various tools. The DOS boot diskettes can then access the Bart CDR and thus extend the range of available tools via an appropriate Path.
I also set up my Bart so that I can test the menu UI against the output build, even before it is committed to disk, and the installation of some tools can double up to be run from both the host system and from Bart CDRs built from it. This is accomplished mainly by careful use of base-relative paths within the nu2menu (the native shell for standard Bart) and batch file logic.
I've found nu2menu to be useful in its own right, and use a stand-alone menu to manage the entire Bart-building process - updating the scanners, selecting wallpaper and UI button graphics, editing and testing the nu2menus, accessing Bart forums and plugin documentation, and building the CDRs themselves.
10 September 2006
"...but you're Not a Programmer"
Never trust a programmer who says something can't be done (so don't worry about it)...
When programmers say something can't be done, they mean they can't see a way to do it - and after all, they made the code, so surely they would know, right?
When an interested non-programmer asks themselves if something can be done, they work from a higher level of abstraction, disregarding the details of how it might be done.
The programmer's views are informed by the intended behaviour of what they made, and may be blind to the full range of possible behaviors.
Look at the track record of exploitability that results from design safety failure; the MS Office macro malware generation, the email script generation, malware like Melissa that scripts Outlook to send itself out, and so on.
The stupidity/perfidity question (see previous blog entry, it's not Googleable yet) arises at this point, but either way, the result is the same; trust in these programmers may be misplaced. Either they weren't aware of the implications of what they created, and are thus likely to fail the lower levels of the Trust Stack, or they have a hidden agenda that fails the upper levels of that stack.
Either way, I wouldn't stop worrying because they tell me to.
Mistake or Malice?
I was going to call this "Perfidity or Stupidity", until I saw the lean number of Google hits for "perfidity", and that Chambers Dictionary can't find the word. In any case, it may be better to avoid the pejorative aspects of "stupidity" :-)
We perform the Turing Test every day (and often lose) whenever we have to consider whether material is from a human (e.g. email from a user) or a bot (e.g. email from a user's infected computer). This is a generalized identity/category test, similar to "is this my bank's site, or is it a phishing site?"
When we find something that sucks (or is downright dangerous) we also ask ourselves; were they stupid and did this by accident, or are they perfidious and did this to further a hidden and possibly malicious agenda?
This question runs as a vertical slash through the Trust Stack. Things that would be errors in the lower levels of the stack if there by mistake, would in fact be a failure in the top levels of the stack if they were there intentionally. This applies particularly at the safety and design layer of the stack, which may be where most exploitability occurs.
07 September 2006
DRM Revocation List
DRM is inherently user-hostile, acting against user interests under a cover of stealth and mystery. As such, I'd classify it as commercial malware. It's politically significant because, by design, it facilitates control over users' digital resources by global agencies. So there are problems at the top of the "trust stack" - but this post isn't about that.
One of the features of DRM is the revocation list concept. This is a list of applications and devices whose approval to work with DRM-protected material has been withdrawn, and the EUL"A" will typically allow this list to be updated in real time as the list's originators see fit.
The idea is that if media playing device X was cracked to subvert DRM protection of content Y, then the ability to use device X would be revoked.
For now, I'll leave aside the obvious questions, such as:
- Who controls the list?
- Who else controls the list, i.e. as associates or legally-mandated?
- Who else controls the list, by hacking into the list updates?
- How well-bounded is the list mechanism to what it is supposed to do?
- Who is accepted as a DRM content provider, and on what basis?
For example, if a media provider was caught dropping open-use rootkits from "audio CDs" (hey, that would never happen, right?) one possible remedy would be to revoke all of that provider's rights over all of their material. In essence, such a provider would be found unfit to exert any sort of control over any users, and be swept off the DRM playing field.
Obviously, this would materially reduce the value of that media provider to the artists who are contracted to it - in effect, the penalty undermines the contract with the artist, because the provider can no longer protect the artist's content. So for a certain period (say, a year) the artist has the right to drop their obligations to the provider and seek a new contract elsewhere. The reverse right does not apply, i.e. the provider cannot drop the artist if the artist chooses to stay.
Further, it has to be accepted that all existing protected material from that provider is now unprotected - so for a similar period, artists can sue the provider for damages, either as a class group or outside of any class action.
If this would seem to tip the scales to the extent that the media provider's business would be smashed, then fine. After all, were the provider to be an individual caught "hacking", they'd likely lose their livelihood and do jail time - why should larger-scale criminals get off more leniently? Do we really want to leave known exploiters in the provider pool?
I'll bet there are no plans in place to use DRM revocation lists to defend users' rights in this manner, even though it's technically feasible. That speaks volumes on why one should IMO reject this level of real-time DRM intrusion. On the other hand, once you open up DRM revocation for broader use, why not use it to apply global government censorship, etc.? After all, there's nothing to limit it within the borders of any particular jurisdiction.
27 August 2006
Safety First
Personal computers have gone from geek hobby, to useful private tools, to ubiquitous globally-connected life repositories. Today, we're as likely to conduct finance and store memories on the PC as we are to cruise around the web - on the same PC.
That means "make it easy to use" should change to "make it easy to use safely". Yet the level of knowledge needed to use a PC is way lower than the skills needed to use it safely, and IMO it borders on criminal negligence to deepen that trend. That's like making handguns lighter, with less trigger pull required, so that toddlers could use them "more easily".
To use a PC...
...you need to know how to press a button, click one of two mouse buttons, and have enough familiarity with the alphabet to type. It's useful to know about "folders", but Vista seeks to remove even that semi-requirement. If you can click what you see, you can use the PC.
To use a PC safely...
...you need to know about file types and the levels of risk they represent, and that information is hidden from you by default. In fact, the UI that makes things "so easy" does nothing to help you assess risk, nor is it constrained to act within the risk indicators it displays.
You also need an unhealthy amount of de-spin and paranoia. Almost everything you see has to be reversed through the mirror of suspicion; "value" isn't, "free" can gouge you, "click to unsubscribe" means "don't click else you'll get more spam", and so on. The endless cynicism and lies can be damaging to the psyche, and I often wonder if usability studies into UI stress ever take this factor into account.
What we need to know
You wouldn't dream of wiring a house so that it wasn't possible to know what sockets and wires were "live" or not, nor would you make firearms such that it was impossible to tell if they were loaded or not, had the safety catch on or not, or which way they were pointing.
So why do we accept computers that use the meaningless term "open" that hides what a file can do when used? Why do we use an interface that makes no distinction between what is on our PC and what is from some arbitrary system out on the 'net?
The basic things we need to know at all times are:
- Whether a file is "code" or "data"
- Whether something is on our PC or from outside the PC
- Where we are in the file system
As owners of our own PCs, we have the right to do whatever we like with any file on our systems. We may not expect that right when we use our employer's PC at the workplace, but at home, there is no-one who should override our control.
History
In the old days of DOS, you had to know more to use a PC, but beyond that, all you needed to know was not to run files with names ending in .exe, .com or .bat or boot off strange disks. Hidden files weren't that hidden, and it was quite easy to manage risky files because they wouldn't be run unless triggered from one of two editable files. Only when viruses injected themselves into existing code files or boot code, did one need antivirus tools to clean up.
The first safety failure was loss of the data/code distinction, when Windows 95 hid file name extensions by default, and when MS Office applications started auto-running macros within "data" files. Windows 95 also hid hidden files, as well as where you were in the file system.
The second safety failure was when Internet Explorer 4 shell integration blurred the distinction between what was on your PC and what was not. Local material was often presented in a web-like way, while the local file browser could seamlessly display off-PC content. The new web standards also allowed web sites to spawn dialog boxes that looked as if they were part of the local system, as well as drop and run code on visitors' computers.
The third safety failure includes all mechanisms whereby code can run without user consent; from CDs that autorun when inserted, to code that gropes inside file content when all we wanted was to see a list of files, to "network services" that allow any entity on the Internet to silently initiate a machine dialog, as exploited by the Slammer, Lovesan and Sasser generations.
The fourth safety failure will be a loss of awareness as to where we are within the file system. As long as different files in different parts of the file system can present themselves as being "the same", we need to know the full and unambiguous path to a file to know which it is.
Vista
Vista tries to make computing easier by dumbing down "where things are", but makes "safe hex" as difficult as ever. File name extensions and paths are still hidden, as are hidden files and ADS. You still need to know an arcane list of file name extensions, you still need to bang the UI to actually show you these, and if anything the OS is more likely to ignore the extension when "opening" the file, acting on embedded information hidden within the file.
Just as the web enraptured Microsoft in the days of Internet Explorer 4, so "search" is enrapturing them now. Today's users may rarely type in a URL to reach a site; they are more likely to search for things via Google, and Vista brings the same "convenience" to your own PC. You're encouraged to ignore the namespace tree of locations and names, and simply type what you want so that the OS can guess what you want and "open" it for you.
The other growing risk in Vista, is that of automatic metadata processing. The converse of "any non-trivial code has bugs" is "if you want bugless code, keep it trivial". The traditional DOS directory entry is indeed trivial enough to be pretty safe, but I suspect the richer metadata embraced by NTFS is non-trivial enough to offer exploit opportunities - and that's before you factor in 3rd-party extensibility and malicious "metadata handlers".
Vista continues the trend of XP in that metadata and actual file content may be groped when you display a list of files, or when you do nothing at all (think thumbnailers, indexers etc.). If something manages to exploit these automatically-exposed surfaces, it allows loose malware files to run without any explicit integration you might detect and manage using tools such as HiJackThis or MSConfig. Removing such files may be impossible, if all possible OSs that can read the file system are also exploitable by the malicious content.
Exploitability
By now, we know that any code can be found to be exploitable, so that the actual outcome of contact with material may bear no resemblance to what the code was supposed to do with it. Some have suggested this means we should abandon any pretence at a data/code distinction, and treat all material as if it posed the high risk of code.
IMO, that's a fatuous approach. Use of the Internet involves interaction with strangers, where identity is not only unprovable, but meaningless. That requires us to safely deal with content from arbitrary sources; only when we initiate a trust relationship (e.g. by logging in to a specific site) does identity start to mean something.
Instead, the message I take home from this is that any subsystem may need to be amputated at any time - including particular file types, irrespective of how safe they are supposed to be. For example, if .RTF files are found to be exploitable, I'd want to elevate expected risk of .RTF to that of code files until I know the risk is patched.
A pervasive awareness of exploitability dictates the following:
- No system-initiated handling of arbitrary material
- Strict file type discipline, i.e. abort rather than "open" any mis-labeled content
Making safety easier
Vista tries hard in the wrong places (user rights), though that approach is becoming more appropriately tuned - but that's another subject! What we need is:
Run vs. View or Edit
Let's see the death of "open"; it means nothing, in a context where we need meaning.
First, we need to re-create a simple data vs. code distinction, and force the OS to respect this so that we as users can trust what is shown to us.
Every time material is shown to us in a context that allows us to interact with it, we should be shown whether it is code or data. It's no use hiding this in a pop-up toolbar, an extra column in detail view, some peripheral text in a status bar, or behind a right-click and Properties.
Then we need to use terms such as Run or Launch to imply code behavior, as opposed to View or Edit to imply data behavior. You could View or Edit code material too, but doing so would not run it!
It would also help to show the file type as well, so that if a type that should be "data" becomes "code" due to code exploitability, we could avoid that risk. It's important that the system derives this type information in a trivial way (i.e. no deep metadata digging) and respects it (i.e. material is always handled as the type shown).
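As a sketch of how trivially such a type could be derived (the extension lists below are illustrative, not exhaustive):

```python
import os

# Risk class comes from the visible file name extension alone -
# never from content, metadata or hidden headers.
CODE = {".exe", ".com", ".bat", ".scr", ".pif", ".vbs", ".js"}
DATA = {".txt", ".jpg", ".gif", ".bmp", ".wav"}

def classify(filename):
    ext = os.path.splitext(filename)[1].lower()
    if ext in CODE:
        return "Run/Launch (code - high risk)"
    if ext in DATA:
        return "View/Edit (data)"
    return "Unknown - treat as code until proven otherwise"

# The double-extension trick fails: only the real extension counts.
print(classify("holiday.jpg.exe"))
```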
Safe handling and context awareness
Microsoft has juggled with various "My..." concepts for a while now, but there's no safety aspect to this as yet. Indeed, Microsoft encourages you to mix arbitrary downloads and incoming attachments with your own data files, as well as recommending the storage of infectable code files within "My Documents" as a way of hiding them from System Restore.
What we need is a new clue; that incoming material and infectable files are not safe to treat as if they were data files, nor should they be mixed with your data files that would be restored in the case of some system melt-down. I've applied this clue for many years now, and it does make system management a lot easier.
Once you herd all incoming and risky material into one subtree, you can add safer behaviors for that subtree - such as always showing file name extensions and paths, and never digging into metadata even to display embedded icons.
These safer behaviours can be wrapped up as a "Safe View" mode, which can then be automatically applied to other hi-risk contexts, such as when new drives are discovered, or the system is operated in Safe Mode, or when one is running the maintenance OS from DVD boot.
Change the mindset
Currently, we encourage newbies to jump in and use everything. Then we suggest to interested survivors that they learn and apply some safety tips.
Newbies may see a suggestion to turn on the firewall, install and update an antivirus scanner, and swallow patches automatically - but we don't talk about file type risks, and we encourage them to send attachments without suggesting they should avoid doing so.
IMO, the first mention of sending email to more than one recipient should explain and recommend the use of BCC:, and users who know nothing about file types or the need for meaningful descriptive message text should not be shown how to send attachments.
In other words, safety should be learned at the same time as how to do things, rather than offered as an afterthought, and it should be as easy to operate a PC safely as it is to operate it at all.
That means "make it easy to use" should change to "make it easy to use safety". Yet the level of knowledge needed to use the PC is way lower than skills needed to use it safely, and IMO it borders on criminal negligence to deepen that trend. That's like making handguns lighter with less trigger pull required so that toddlers could use them "more easily".
To use a PC...
...you need to know how to press a button, click one of two mouse buttons, and familiarity with the alphabet so you can type. It's useful to know about "folders", but Vista seeks to remove even that semi-requirement. If you can click what you see, you can use the PC.
To use a PC safely...
...you need to know about file types and the levels of risk they represent, and that information is hidden from you by default. In fact, the UI that makes things "so easy" does nothing to help you assess risk, nor is it constrained to act within the risk indicators it displays.
You also need an unhealthy amount of de-spin and paranoia. Almost everything you see has to be reversed through the mirror of suspicion; "value" isn't, "free" can gouge you, "click to unsubscribe" means "don't click else you'll get more spam", and so on. The endless cynicism and lies can be damaging to the psyche, and I often wonder if usability studies into UI stress ever take this factor into account.
What we need to know
You wouldn't dream of wiring house so that it wasn't possible to know what sockets and wires were "live" or not, nor would you make firearms such that it was impossible to tell if they were loaded or not, had the safety catch on or not, or which way they were pointing.
So why do we accept computers that use the meaningless term "open" that hides what a file can do when used? Why do we use an interface that makes no distinction between what is on our PC and what is from some arbitrary system out on the 'net?
The basic things we need to know at all times are:
- Whether a file is "code" or "data"
- Whether something is on our PC or from outside the PC
- Where we are in the file system
As owners of our own PCs, we have the right to whatever we like with any file on our systems. We may not expect that right when we use our employer's PC at the workplace, but at home, there is no-one who should override our control.
History
In the old days of DOS, you had to know more to use a PC, but beyond that, all you needed to know was not to run files with names ending in .exe, .com or .bat or boot off strange disks. Hidden files weren't that hidden, and it was quite easy to manage risky files because they wouldn't be run unless triggered from one of two editable files. Only when viruses injected themselves into existing code files or boot code, did one need antivirus tools to clean up.
The first safety failure was loss of the data/code distinction, when Windows 95 hid file name extensions by default, and when MS Office applications started auto-running macros within "data" files. Windows 95 also hid hidden files, as well as where you were in the file system.
The second safety failure was when Internet Explorer 4 shell integration blurred the distinction between what was on your PC and what was not. Local material was often presented in a web-like way, while the local file browser could seamlessly display off-PC content. The new web standards also allowed web sites to spawn dialog boxes that looked as if they were part of the local system, as well as drop and run code on visitors' computers.
The third safety failure includes all mechanisms whereby code can run without user consent; from CDs that autorun when inserted, to code that gropes inside file content when all we wanted was to see a list of files, to "network services" that allow any entity on the Internet to silently initiate a machine dialog, as exploited by the Slammer, Lovesan and Sasser generations.
The fourth safety failure will be a loss of awareness as to where we are within the file system. As long as different files in different parts of the file system can present themselves as being "the same", we need to know the full and unambiguous path to a file to know which it is.
Vista
Vista tries to make computing easier by dumbing down "where things are", but makes "safe hex" as difficult as ever. File name extensions and paths are still hidden, as are hidden files and ADS. You still need to know an arcane list of file name extensions, you still need to bang the UI to actually show you these, and if anything the OS is more likely to ignore the extension when "opening" the file, acting on embedded information hidden within the file.
Just as the web enraptured Microsoft in the days of Internet Explorer 4, so "search" is enrapturing them now. Today's users may rarely type in a URL to reach a site; they are more likely to search for things via Google, and Vista brings the same "convenience" to your own PC. You're encouraged to ignore the namespace tree of locations and names, and simply type what you want so that the OS can guess what you want and "open" it for you.
The other growing risk in Vista, is that of automatic metadata processing. The converse of "any non-trivial code has bugs" is "if you want bugless code, keep it trivial". The traditional DOS directory entry is indeed trivial enough to be pretty safe, but I suspect the richer metadata embraced by NTFS is non-trivial enough to offer exploit opportunities - and that's before you factor in 3rd-party extensibility and malicious "metadata handlers".
Vista continues the trend of XP in that metadata and actual file content may be groped when you display a list of files, or when you do nothing at all (think thumbnailers, indexers etc.). If something manages to exploit these automatically-exposed surfaces, it allows loose malware files to run without any explicit integration you might detect and manage using tools such as HiJackThis or MSConfig. Removing such files may be impossible, if all possible OSs that can read the file system are also exploitable by the malicious content.
Exploitability
By now, we know that any code can be found to be exploitable, so that the actual outcome of contact with material may bear no resemblance to what the code was supposed to do with it. Some have suggested this means we should abandon any pretence at a data/code distinction, and treat all material as if it posed the high risk of code.
IMO, that's a fatuous approach. Use of the Internet involves interaction with strangers, where identity is not only unprovable, but meaningless. That requires us to safely deal with content from arbitrary sources; only when we initiate a trust relationship (e.g. by logging in to a specific site) does identity start to mean something.
Instead, the message I take home from this is that any subsystem may need to be amputated at any time - including particular file types, irrespective of how safe they are supposed to be. For example, if .RTF files are found to be exploitable, I'd want to elevate the expected risk of .RTF to that of code files until I know the risk is patched.
A pervasive awareness of exploitability dictates the following:
- No system-initiated handling of arbitrary material
- Strict file type discipline, i.e. abort rather than "open" any mis-labeled content (see the sketch below)
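As a concrete illustration of that second point, here's a minimal sketch in Python - my own illustration, not anything shipping; the magic-number table is purely illustrative and far from exhaustive:

    import os

    # Illustrative content signatures only; a real table would be far larger.
    MAGIC = {
        ".exe": b"MZ",          # DOS/Windows executable header
        ".zip": b"PK\x03\x04",
        ".gif": b"GIF8",
        ".pdf": b"%PDF",
    }

    def open_strictly(path):
        ext = os.path.splitext(path)[1].lower()
        expected = MAGIC.get(ext)
        if expected is None:
            raise ValueError("unknown type %r: refusing to guess" % ext)
        with open(path, "rb") as f:
            head = f.read(len(expected))
        if head != expected:
            # Mis-labeled content: abort rather than "open" it
            raise ValueError("content does not match claimed type %r" % ext)
        return ext   # caller dispatches on the visible type only

The point is the abort; nothing tries to "repair" the mismatch by trusting hidden content over the visible name.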
Making safety easier
Vista tries hard in the wrong places (user rights), though that approach is becoming more appropriately tuned - but that's another subject! What we need is:
Run vs. View or Edit
Let's see the death of "open"; it means nothing, in a context where we need meaning.
First, we need to re-create a simple data vs. code distinction, and force the OS to respect this so that we as users can trust what is shown to us.
Every time material is shown to us in a context that allows us to interact with it, we should be shown whether it is code or data. It's no use hiding this in a pop-up tooltip, an extra column in Details view, some peripheral text in a status bar, or behind a right-click and Properties.
Then we need to use terms such as Run or Launch to imply code behavior, as opposed to View or Edit to imply data behavior. You could View or Edit code material too, but doing so would not run it!
It would help to show the file type as well, so that if a type that should be "data" becomes "code" due to code exploitability, we could avoid that risk. It's important that the system derives this type information in a trivial way (i.e. no deep metadata digging) and respects it (i.e. material is always handled as the type shown).
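To make that concrete, here's a hypothetical sketch (Python, type lists purely illustrative) of deriving the type and verb trivially from the visible extension alone:

    import os

    CODE_TYPES = {".exe", ".com", ".bat", ".scr"}   # illustrative
    DATA_TYPES = {".txt", ".jpg", ".gif", ".rtf"}   # illustrative

    def classify(filename):
        ext = os.path.splitext(filename)[1].lower()
        if ext in CODE_TYPES:
            return "CODE", "Run"       # running is the only risky verb
        if ext in DATA_TYPES:
            return "DATA", "View"      # viewing must never execute content
        return "UNKNOWN", None         # don't guess, don't "open"

    # "holiday.jpg.exe" is visibly code, however the UI dresses it up
    print(classify("holiday.jpg.exe"))   # ('CODE', 'Run')

If .RTF were found exploitable, moving it from DATA_TYPES to CODE_TYPES would implement exactly the risk elevation described under Exploitability.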
Safe handling and context awareness
Microsoft has juggled with various "My..." concepts for a while now, but there's no safety aspect to this as yet. Indeed, Microsoft encourages you to mix arbitrary downloads and incoming attachments with your own data files, as well as recommending the storage of infectable code files within "My Documents" as a way of hiding them from System Restore.
What we need is a new clue; that incoming material and infectable files are not safe to treat as if they were data files, nor should they be mixed with your data files that would be restored in the case of some system melt-down. I've applied this clue for many years now, and it does make system management a lot easier.
Once you herd all incoming and risky material into one subtree, you can add safer behaviors for that subtree - such as always showing file name extensions and paths, and never digging into metadata even to display embedded icons.
These safer behaviors can be wrapped up as a "Safe View" mode, which can then be automatically applied to other high-risk contexts, such as when new drives are discovered, when the system is operated in Safe Mode, or when one is running the maintenance OS from DVD boot.
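As a rough sketch of what a "Safe View" listing might do (my own illustration; the quarantine path is an assumption), note that nothing below ever reads file content:

    import os

    INCOMING = r"C:\Incoming"   # assumed subtree for downloads and attachments

    def safe_view(root):
        # Show full paths, real extensions and sizes; never open content,
        # so no thumbnailer, icon handler or indexer gets a bite at it.
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                ext = os.path.splitext(name)[1].lower() or "(none)"
                size = os.path.getsize(full)   # from the directory entry, not the content
                print("%-60s  type=%-6s  %10d bytes" % (full, ext, size))

    safe_view(INCOMING)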
Change the mindset
Currently, we encourage newbies to jump in and use everything. Then we suggest to interested survivors that they learn and apply some safety tips.
Newbies may see a suggestion to turn on the firewall, install and update an antivirus scanner, and swallow patches automatically - but we don't talk about file type risks, and we encourage them to send attachments without suggesting they should avoid doing so.
IMO, the first mention of sending email to more than one recipient should explain and recommend the use of BCC:, and users who know nothing about file types or the need for meaningful descriptive message text should not be shown how to send attachments.
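For instance, here's a minimal sketch of BCC: done right, using Python's standard library (server and addresses are placeholders): the group rides in the SMTP envelope, not in any header the other recipients can see.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "me@example.com"     # visible header points back at the sender
    msg["Subject"] = "Newsletter"
    msg.set_content("Meaningful descriptive text; no attachment needed.")

    bcc = ["alice@example.com", "bob@example.com"]
    with smtplib.SMTP("smtp.example.com") as server:
        # Recipients go in the envelope only; no Bcc: header is ever written
        server.send_message(msg, to_addrs=["me@example.com"] + bcc)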
In other words, safety should be learned at the same time as how to do things, rather than offered as an afterthought, and it should be as easy to operate a PC safely as it is to operate it at all.
19 August 2006
The Trust Stack
Some readers will be familiar with the OSI network stack model, which helps clarify issues when troubleshooting network or connectivity problems. I propose a similar 7-layer stack for evaluating trustworthiness within Information Technology contexts.
Each layer rests on the layer below it, and cannot be effective if a lower layer fails. Conversely, if the top layer is rotten, then all the layers below are no longer relevant.
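A toy model of that failure semantics, assuming a simple pass/fail verdict per layer (names taken from the sections below):

    LAYERS = ["Goals", "Intention", "Policy", "Security", "Safety", "Sanity"]

    def evaluate(verdicts):
        # Walk top-down; the first rotten layer makes the rest irrelevant.
        for layer in LAYERS:
            if not verdicts.get(layer, False):
                return "Untrusted: %s failed, lower layers are moot" % layer
        return "Passes all six layers (Granularity still applies to each)"

    print(evaluate({"Goals": True, "Intention": False}))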
Goals
Are the goals of your vendor compatible with your own, or contrary to them? The adage "he who pays the piper calls the tune" applies here; if it is not you who provides the vendor's income stream, then it's not likely to be your needs that are uppermost in the vendor's mind. Even if the vendor derives income from you, the vendor can afford to ignore your needs if you are perceived to have no choice other than to buy their product.
Intention
What is the intention of the specific thing you are evaluating? If it is intended to do something that is contrary to your interests, then at best it can be trusted only to work against your interests in the way intended.
Policy
The vendor may commit itself to policies (e.g. a privacy policy) or may be compelled to act within policies laid down by law. For example, a privacy policy may define what the vendor would do with your data, were they the sole agent with access to it.
Security
This is about limiting who has what abilities within the system. For example, a privacy policy is meaningless if entities other than the vendor also have access to data held by their system.
Safety
This is about the level of risk (or range of possible consequences) within the system, and whether this is constrained to users' expectations. It's no use securing access so that only trusted employees can operate the system, if the system takes greater risks than those trusted employees expect when they operate it.
Sanity
This is about whether the system acts as it was designed to, or whether defects create opportunities for it to act completely differently. For example, a defective JPG handler can escalate the risk of handling "graphic data" to that of running raw code; something that bears no resemblance to what the handler was created to do.
Granularity
The above six layers are top-down, though some contexts may make more sense if Policy is considered to run above Intention. The seventh layer is different; it rides next to everything else, and each instance encompasses all of the other six layers.
That's because the vendor can open the system out to additional players at every level. There may be co-owners (or successive owners, e.g. after a buy-out) with different goals; different coding teams may have different intentions, different departments or legislation may stipulate different policies, and the actual code may re-use modules developed by different vendors.
In addition to problems within each of these players, problems can arise at the interface between them. In an earlier blog post, I mentioned the rule that "users know less than you think", i.e. no matter how little you expect users to understand about your product, they will understand (or care) even less. This applies not only between end-user and product, but between each coding level and the objects re-used by those coders.
Trusted Computing
Now apply these tests to the concept of "Trusted Computing". The vendor who coined the phrase derives income from us who pay for Windows, but is the monopoly provider of the OS required to run applications written for it. The stated goals and intentions of the OS are to leverage the interests of certain business partners over ours via DRM; in fact, "Trusted Computing" initially meant that media corporations could "trust" users' systems to be constrained from violating corporate interests.
So already, we have problems at the top of the trust stack, especially when we look at the track record of previous behavior. We also have a top-level granularity problem; the OS vendor empowers a class of "media providers" to leverage their rights over ours.
How is this class of "media providers" bounded? Free speech requires anyone to be accepted as a provider of content, which means we're expected to trust anyone to have rights that trump our own on our own systems. Or you could constrain these powers to a small cartel of well-resourced corporations, trading freedom of expression for putative trustworthiness of computing. Given that one of the largest media corporations has already been caught dropping rootkits onto PCs from "audio" CDs, I don't have much faith there.
If you look at the problem from the bottom up, it doesn't get better - the raw materials out of which "trusted computing" is to be built, are already so failure-prone as to require regular repairs that are limited to monthly patching for convenience.
Bottom line
We already trust computing, even though evidence proves it's unworthy of trust.
When we allow software to download and apply patches without explicitly reviewing or testing these, we break the best practice of allowing no unauthorised changes to be made to the system. Why would we give blanket authorization to any patches the vendor chooses to push?
It isn't because we trust the vendor's goals and intentions, given the vendor has already been caught pushing through user-hostile code (Genuine Windows Notification) as a "critical update".
And it isn't because we trust the quality of the code, given the patches are to fix defects in the same vendor's existing production code.
It's because the code fails so often that it has become impossible to keep up with all the repairs required to fix it. In other words, we trust the system because it is so untrustworthy that we can no longer evaluate its trustworthiness, and have to trust the untrustworthy vendor instead.
Why I Avoid Norton AV
What's the most important thing about an antivirus scanner?
That it detects 97% rather than "only" 95% of malware in a test?
Nope, not even close.
That you can keep it updated?
Closer, but that isn't it either.
No; the most important thing is that it works.
Norton Antivirus, on the other hand, is deliberately designed to not work - if it "thinks" it's being used in breach of its license conditions.
A while back, I added a new step in the process of disinfecting systems; right at the end of the Bart CDR boot phase, after doing the scans and checking integration points, I rename away all Temp and "Temporary Internet Files" locations so that any missed malware running from there will be unreachable when I boot Windows for the first time.
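In rough terms, the rename-away step amounts to something like this sketch (my habit is to do it by hand; paths shown are typical XP-era defaults, run against the inactive volume from the offline boot, and the profile name is a placeholder):

    import os

    TEMP_DIRS = [
        r"C:\Windows\Temp",
        r"C:\Documents and Settings\User\Local Settings\Temp",
        r"C:\Documents and Settings\User\Local Settings\Temporary Internet Files",
    ]

    for path in TEMP_DIRS:
        if os.path.isdir(path):
            # Anything lurking inside becomes unreachable on the next boot
            os.rename(path, path + ".HELD")
            print("renamed away:", path)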
Over the last few weeks, I noticed several PCs would start Windows with an "Activate Norton Antivirus" nag, usually as "Your trial period has expired". Norton AV would not only not run, but would also not provide access to its quarantine or logs of previous scans.
Generally, I just shrug, uninstall it as the useless PoS it has proven to be, and replace it with a decent free scanner that works. I'm not going to phone clients to query license status, ask for product keys, etc. and as I neither sell nor recommend Norton, I wouldn't bother to troubleshoot it further unless paid clock time to do so.
However, I did do a Google( Norton Activation ) and that was verrry interesting...
http://www.extremetech.com/article2/0,1697,1395940,00.asp
http://www.extremetech.com/article2/0,1697,1396474,00.asp
http://www.eweek.com/article2/0,1895,1779931,00.asp
...as well as plenty of forum shrieks:
http://techrepublic.com.com/5208-6239-0.html?forumID=52&threadID=175000
http://www.computing.net/security/wwwboard/forum/15607.html
http://www.mcse.ms/archive182-2005-11-1890449.html
http://forums.pcworld.co.nz/archive/index.php/t-53985.html
As usual, it doesn't fully meet the vendor's needs even as it screws the users:
http://www.theregister.co.uk/2003/09/22/norton_antivirus_product_activation_cracked/
Symantec offers the following hoops to jump through...
http://service1.symantec.com/SUPPORT/nav.nsf/docid/2003093015493306?Open&src=w
http://service1.symantec.com/SUPPORT/custserv.nsf/docid/2004122212374346?Open&src=w&docid=2003093015493306&nsf=nav.nsf&view=docid
http://service1.symantec.com/SUPPORT/custserv.nsf/docid/2005092709273146?Open&src=w&docid=2003093015493306&nsf=nav.nsf&view=docid
http://service1.symantec.com/SUPPORT/custserv.nsf/docid/2005092311012446?Open&src=&docid=20040324164239925&nsf=SUPPORT%5Cemeacustserv.nsf&view=eedocid&dtype=&prod=&ver=&osv=&osv_lvl=
...but why should you accept this mission? Why pay scumbags who embed commercial malware within a product ostensibly designed to help you counter malware? Tackling malware is tough enough without having to worry about whether each hidden file or hook is part of Norton's self-serving un-documented user-hostile code, or some other malware.
21 July 2006
Keylogger vs. Keylogger Blocker?
I followed up a bit on this Simon Scatt entity here:
http://sunbeltblog.blogspot.com/2006/07/simon-scatt-plague-on-security-blogs.html
Check out the "comments" to that blog post; it seems that the company he's/it's punting makes both keylogging and keylogger-blocking software. I wonder which wins?
The "Windows Constitution"
I'm very happy to see this:
http://www.microsoft.com/presspass/newsroom/winxp/WindowsPrinciples.mspx
Like a constitution, it provides a yardstick by which behavior can be judged. If problems arise in the future, one could link complaints to this statement as a way of highlighting any divergence from Microsoft's stated intentions.
Users Know Less Than You Think
I like to find big meta-truths that span platforms, and here's one:
"No matter how little you expect users to understand about your product, they will understand even less."
I could replace "understand" with "know" or "care", for that matter.
This has been fairly obvious when it comes to end users and software authors; cue horror stories of floppies stapled to letters or copied onto A4 paper, and old jokes about cup-holders and power outages.
It's less obvious, but I suspect equally true, whenever one programmer's code is used by another - either as peers co-coding a project, or one software vendor using code objects (or APIs) created by another software vendor. Those cases also involve a "user" (the coder using the API or object) and "producer" (the author of the API or object).
For example, after spending months developing an ActiveX control for use by other programmers, you may think it reasonable to expect them to read your ReadMe.txt that contains caveats such as "parameter values must be in range". But someone who is using hundreds of such re-usable code objects in a project may simply assume how they work, without reading any of those ReadMe.txt files.
A good test of acceptable expectations is: "What if everyone did what I'm about to do?"
This is also a good bulwark against badly-behaved software. What if all installed applications:
- Required admin rights to run?
- Kept pestering the user to "register"?
- Added themselves to the top of the Start Menu?
- Added themselves to the startup axis to "fast start"?
- Added their own ad-hoc systems to pull down updates?
- Added their own underfootware content indexing system?
- Patched into the shell to process file content whenever files are listed?
- Smashed file associations to just one "open" action for their own application?
When "Search" Finds Trouble
Once a bit of grey chit-chat is done, this post will lightly consider some "Social Engineering" risks of HTML and search.
This blog gets updated slightly more often than my web site, which says more about the web site than this blog! Readers used to less than one post a month may wonder about my relative bloggorrhea of late; I guess it's catch-up time, and there's more to talk about. I often find I have too little time to go through the newsgroups, but enough time to post a blog entry or start on a web page, and that is what I'll do now.
Often long blog silences are because I've been (far) away from keyboard, as I'm blessed with reasons to travel combined with an ongoing enjoyment of doing so. I'd love to tell you about some excellent news in Vista, but I still need to pin down what/how I can tell you and what is still NDA.
One thing I can tell you, is that there are 200+ fake anti-spyware programs out there, and one of these is likely to be what my recent dogged commenter is pushing:
Simon Scatt said...
Many programms include spyware modules. Use anti-spyware for protect your privacy. As for me, I like professional anti-spy software like PrivacyKeyboard by Raytown Corporation LLC. You can download it here (URL snipped)
The thing is, "Simon Scatt" posts exactly the same comment to every post I make, no matter what that post is about - which smells like a bot. A combination of tech skills required to bot past the OCR challenge, plus the ethical dubiousness to actually do so, bodes poorly for the safety of whatever they are trying to push at you. Just Say No, and don't click that link!
Speaking of links clicked, I got a fright the last time I fired up this blog at http://quirke.blogspot.com to edit it. I thought "uh-oh, it's finally happened..." until I realised the link I'd entered should have been http://cquirke.blogspot.com
HTML being what it is, I could quite easily show you a link whose visible text reads http://cquirke.blogspot.com but which points somewhere else entirely - reason enough to consider HTML unfit for use as a generic "rich text" medium between arbitrary (untrusted) entities. Retro-fitting anti-phishing logic to web browsers is an appropriate way to run after the horse that's bolted from the stables, because web browsers have to live and breathe HTML. But a horse has no place in the living-room, and using HTML throughout the system as generic "rich text" (e.g. for email message "text" and elsewhere) has exactly that effect.
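That deceit is trivial to commit, and almost as trivial to detect mechanically. Here's a minimal sketch using Python's standard library (the heuristic is deliberately simplistic) that flags anchors whose visible text looks like a URL but differs from the real destination:

    from html.parser import HTMLParser

    class LinkCheck(HTMLParser):
        # Collect each anchor's href and visible text, compare on close.
        def __init__(self):
            super().__init__()
            self.href, self.text = None, ""
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.href = dict(attrs).get("href", "")
                self.text = ""
        def handle_data(self, data):
            if self.href is not None:
                self.text += data
        def handle_endtag(self, tag):
            if tag != "a" or self.href is None:
                return
            shown = self.text.strip()
            if shown.startswith("http") and shown != self.href:
                print("Suspicious link: shows %r, goes to %r" % (shown, self.href))
            self.href = None

    LinkCheck().feed(
        '<a href="http://quirke.blogspot.com">http://cquirke.blogspot.com</a>')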
A bigger risk is that folks rarely type explicit URLs anymore; they either re-use links like the ones above, or they increasingly search rather than link. I wanted to link my text "200+ fake anti-spyware programs" to the CastleCops article that raised this issue, but as I didn't keep the link, I tried to search for it instead. I found something else I used that is a bit more topical, but the same search results could just as easily lead me to click something that bites.
Microsoft's been in love with search since MS Office started pushing Find Fast. A search for "Find Fast" is revealing; first comes an unrelated bit of foistware, then comes a flood of "how do I get rid of this thing?" links, starting with one from Microsoft themselves. Yet with each new version of MS Office, Find Fast has been more difficult to get rid of, and XP has the same thing built into the OS. Now that "Google envy" is kicking in, search is likely to pervade Vista's UI.
I do see some logic in this, in that the newest computers may better carry the overhead of search indexing, and Microsoft has leveraged deep new OS features (i.e. beyond the efficiencies of NTFS) in Vista to minimize this impact. We may well find that, once we use it, the adverse impact isn't as bad as we'd expect, and choose to live with it.
But performance impact is only one objection to dumbing down computer use from folder navigation to guessing at names or content. More worrying are the safety implications - an opportunity is created for incoming files to do what that top link in the "Find Fast" search does; thrust something inappropriate (and probably dangerous) into your face instead of what you wanted or expected.
20 July 2006
Security End-Users Can Trust
Here, I'm referring purely to the mechanics of how a user can believe what is on the screen, have faith that passwords can't be cracked, and so on. They say that "justice must not only be done, but must be seen to be done"; by the same token, security must be seen to be done, or we are asking users to place blind faith in the good will and competence of those whom the user is obliged to trust.
The core problem is that humans are weaker than computers when it comes to the amount of pure and arbitrary data they can perceive and remember.
Display
No matter how tightly-coded the security validation logic, and how strong the key strength, what the user will eventually see (and typically, pay cursory attention to) will be a bunch of pixels on the screen. Anything that can fake those pixels, will get trusted.
We know that to prevent an attacker brute-forcing or forging something, there has to be a minimum amount of information present. We like passwords to be randomized across a minimum number of bits, and we provide hard-to-copy cues in forgery-resistant material. So we have foil strips and watermarks in bank notes, hard-to-manufacture copper-on-aluminium software installation CD-ROMs, and so on.
When it comes to displaying something in a forgery-resistant manner, we are restricted to pixels that have 16 million possible color values, of which users may distinguish 100 or so at best. The entire screen area may be as low as 640 x 480 pixels. Anything can set any pixel to any color, so there's no way to prevent forgery.
Even if pixels were hard to forge, humans cannot perceive and appreciate arbitrary pixel patterns. The brain will derive patterns from the raw data and the mind will evaluate these patterns. The raw data itself will not be fully "seen"; only a limited number of derived patterns.
Input
A large number of bits is the best-case strength for a key, applicable only if possible values are randomized over the key space, and if "cribs" (encrypted information for which the plain text can be guessed) are not available. WEP failed both of these criteria; key strength was devalued by OEMs who left several bits of the key at known default values, and WEP traffic included a lot of stereotypical packets that provide "cribs".
Humans usually don't remember raw data; instead, they remember algorithms that can create this data. This skews values within the key space from a truly random spread, to preferred values that match the way humans think, and thus weakens the key strength.
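Back-of-envelope arithmetic shows how badly this hurts (figures assumed purely for illustration):

    import math

    # 8 characters drawn uniformly from ~94 printable ASCII characters:
    random_bits = 8 * math.log2(94)            # about 52 bits
    # A common word (say 20,000 candidates) plus two digits:
    memorable_bits = math.log2(20000 * 100)    # about 21 bits

    print("random 8-char password: %.0f bits" % random_bits)
    print("word + two digits:      %.0f bits" % memorable_bits)

Thirty-odd bits of difference means the memorable scheme is billions of times easier to guess.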
So if it's easy to remember, it's easy to guess. If it's not easy to remember, then the user will write it down (or worse, enter it into a file on the system) and your strong password system becomes a weak and unmanaged token system. If you're going to use a token system anyway, then it's better to do this properly (e.g. biometrics, USB fobs, etc.).
User-managed passwords may be acceptable if you just want the semblance of due diligence. You can point to your password policy, shrug about bad workers who break the policy, and seek a scapegoat whenever things go wrong. But once something that is essential to make things work is also disallowed, you lose management control, and examples of that abound.
Multiple Targets
Most assessments of key strength against brute-force or weighted-guess attacks assume that only one particular system is being targeted. The odds change considerably if you don't care what target you penetrate, and have millions of targets to choose from.
Instead of having to back off after 10 attempts due to some sort of password failure lock-out, you can simply make 9 attempts on a few thousand systems every hour or so. Eventually, you'll break into something, somewhere, and all stolen money is equally good.
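The arithmetic (attack figures assumed purely for illustration) shows why per-system lock-outs don't save the herd:

    # p: chance a single guess succeeds; k: guesses per system, staying
    # under a 10-attempt lockout; n: systems probed in one round.
    p, k, n = 1e-6, 9, 5000

    p_system = 1 - (1 - p) ** k        # ~9 in a million per system
    p_round = 1 - (1 - p_system) ** n  # ~4% per round across 5,000 systems
    print("chance of at least one break-in per round: %.1f%%" % (100 * p_round))

Repeat a round every hour, and the chance of at least one break-in per day passes 60%.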
Obviously, consumer ecommerce on the Internet presents this opportunity for a one-to-many relationship between attacker and victim. Slightly less obviously, it also facilitates a many-to-many relationship (the bane of database design) when the attacker can use multiple arbitrary malware-infected PCs as zombies from which to launch the attacks.