Way back in 2003, South African bank ABSA were in the news after customers had lost money through hacking. Here's a report from 21 July 2003 and another one with more detail. The story was that some uber-hackers robbed ABSA, were caught, and now Internet banking is safe again.
However, check out the detail on Bugbear B from June 2003; an in-the-wild malware that was noted to steal information from a number of banking domains in several different countries, including South Africa. Was there one uber-hacker attacking ABSA, or multiple tiny hacks by folks who figured out how to make use of Bugbear B?
The South African banking industry responded to the ABSA debacle by boasting of new improvements in security, implying that what happened at ABSA could never happen at their banks. These improvements included an on-screen, mouse-driven number pad to defeat keyloggers, and a free (but UI-less and thus uncontrollable) MyCIO antivirus and firewall from McAfee.
At this point, the article you are reading is going to jump around seemingly-unrelated topics. Have faith; it will all come together at the end...
Microsoft Java
Sun sued Microsoft over the MS Java VM that was included in Windows and Internet Explorer, as Microsoft's Windows-specific extensions broke the "write once, run anywhere" goal of cross-platform usability. Sun contended that developers attracted to MS Java would be locked into Windows by these extensions.
Recently, I cleaned up an XP SP2 system that included Java malware, and which was running the old MS Java VM. I found instructions on removing MS Java, and the steps looked like those that should be done automatically by an uninstaller - if Microsoft had followed their own advice to developers and provided one for MS Java.
Not only did Microsoft provide no Add/Remove entry for the MS Java VM, but running one of the manual steps to remove it popped up a dialog box with the odd warning that "Internet Explorer will no longer be able to download from the World Wide Web". Now I can understand Java applets not working or pages being unable to display as the site intended, but not being able to do standard downloads? Smells like a smoking gun to me...
Sun Java
By now, most users of Java will be using Sun's Java Runtime Environment (JRE) instead of Microsoft's Java Virtual Machine (VM). We've also become accustomed to the need to fix code defects by updating subsystems such as Java, applying code patches, and so on.
A long-standing bone of contention with Sun has been that when you install a new JRE, the old one remains in place - and we suspected this old and vulnerable code could be used and thus exploited by Java malware. We bitched about this all the way from 1.4.xx through 1.5.xx, and yet Sun just carried on installing new JREs while leaving old ones (at 100-150M apiece) in place.
It seemed that unlike Microsoft, Sun just didn't "get" what patching was all about. They seemed to think we downloaded and installed new JREs because we wanted kewl new features, and kept the old ones around for backward compatibility - whereas what we really want to do is smash this "backward compatibility" so that malware could not exploit flaws in the old versions.
Finally, Sun came clean and admitted what we'd always suspected; that a Java applet could specify which version of JRE it would like to be interpreted by, and the current version would obligingly hand off to the applet's JRE of choice.
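A toy sketch of why this handoff undoes patching (illustrative Python only - this is not Sun's actual dispatch logic, and the version strings are just examples):

```python
# Toy model of JRE version handoff - NOT Sun's real dispatch code.
# The newest installed JRE receives the applet, but if the applet asks
# for an older version that is still on disk, it hands control over.
installed = {
    "1.4.2_03": "old, known-exploitable",
    "1.5.0_08": "current, patched",
}

def select_jre(requested, installed):
    """Return the version that will actually interpret the applet."""
    if requested in installed:
        return requested      # handoff: the old version's flaws are reachable again
    return max(installed)     # otherwise fall back to the newest version present

print(select_jre("1.4.2_03", installed))  # the exploitable JRE wins
```

The point of the toy: patching by installing a newer JRE achieves nothing if the applet itself gets to choose the interpreter, which is why the old versions have to be physically removed.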
Java malware
The first known Java virus was written in 1998, and detected as StrangeBrew. Since then, Java has been attacked and exploited in various ways, and both Microsoft and old Sun Java JREs are considered to be high-risk exploitable surfaces. By now, Java malware abounds, and indeed there was such malware on the system I recently cleaned up. The beat goes on.
Note the dates involved in some of the above links, e.g. Sun JRE 1.4.2.xx was found to be exploitable way back in 2004 (the "Sun" link above) - as well as the versions that are vulnerable, such as 1.4.2_04.
Internet Banking in 2006
After cleaning up the system, I uninstalled MS Java VM, checked that no old Sun JREs were present, and installed Sun JRE 1.5.0_08 as the only Java engine on the system. After a while I had a call to say that Internet Banking wasn't working anymore.
Indeed, it wasn't working, so I called the bank's tech support, explained the system's history and why the MS Java VM had been removed, and they gave me a link to download a fix. The fix turned out to install the MS Java VM again, which I disallowed.
I called back to ask about an update that would work with current Sun Java, and they said yes, the newest version of the software no longer needs MS Java. I was a bit puzzled to hear it took them this long to switch, given that MS Java was pulled from XP in the days of SP1a, and SP1 is now so old that it's about to lose all further testing and patching, with SP2 as the new baseline.
So we rushed off to the city to collect an installation CD for their newest software, as it is not available as a download. This also did not work, and after another tech call, it turns out that this newest software does not support any Sun Java JRE beyond 1.5.0_05, so I was advised to fall back to that from the 1.5.0_08 that I was using.
I noticed that the new banking software installed Sun JRE 1.4.2_03, which is ancient and has been vulnerable to attack since 2004 at least. I uninstalled that old JRE when the banking software had finished installing, and after shutting down and restarting Windows, I tried the new banking software, which again failed to work.
After a bit of technical discussion, it turns out that the new banking software's real JRE threshold is in fact 1.4.2_03, and the only reason it "works" up to 1.5.0_05 is because it relies on these newer JREs to pass control back to 1.4.2_03.
This is really quite nasty, because users will think they are protected against Java exploits because they installed the latest JRE, while in fact the banking software is undermining this safety by slipstreaming in an old exploitable JRE. It makes a mockery of banking's usual assertion that they do their best to maintain security, but are let down by users who fail to keep their PCs safe and clean. There's something odd in being forced to accept an exploitability risk in order to use security-orientated software.
I haven't named the bank in question (it's not ABSA this time), because they are the only bank I've had reason to check out. For all I know, most or all of our local banks may be just as negligent, so it would be unfair to single out this one just because I found out about them first!
25 September 2006
15 September 2006
How To Design a mOS
A maintenance OS (mOS) is one that you can use when you daren't trust your system to boot into the OS that is installed on it. Through the DOS and Win9x years, we were used to diskette-booted DOS in this role - but NTFS, > 137G hard drives, USB etc. make this less useful in XP.
As at September 2006, Microsoft provide no mOS for modern Windows, but you can build one for yourself by using Bart PE Builder (and perhaps you should!). Out of the box, Bart CDR meets the criteria for a safe mOS, but you can botch this when "enhancing" it.
There are all sorts of jobs one can do from a mOS, but mainly, it's:
- Diagnostics
- Data recovery
- Malware management
Re-establishing safe functioning
Running a PC assumes various levels of functionality work perfectly. When a PC "doesn't work", one has to re-establish each of these in turn, before one can stand on each to reach the next. At each stage, one has to not use what cannot yet be trusted.
Is it safe to plug into the mains?
PCs with metallic rattles when shaken may not be - a loose metal object could short out circuitry and burn it out. It's best to check inside the case for loose objects: salty wet dust; metal flakes or rinds; power connectors dangling onto pins on circuit boards. Also check that the power supply is set to the correct mains voltage, and that rain didn't fall into the case and power supply while the PC was being carried in.
Is the hardware logic safe?
This is mainly about RAM, but implicit in a 12-hour RAM test is a test of whether the PC can stay running that long, or will spontaneously reset or hang. The ideal RAM checker would also display processor and motherboard temperatures, and possibly operating voltages, best served with latched lowest and highest detected values.
Is the hard drive safe to use?
This is about the physical condition of the hard drive, and is tested retrospectively by looking at the S.M.A.R.T. details, and also by test-reading every sector on the drive. It's important not to beat the drive to death; ideally, the surface test should avoid getting stuck in retry loops when a failing sector is encountered, and should abort when the first bad sector is found. The testing process should not attempt to "fix" anything!
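As a sketch of that scanning discipline - read-only, one pass, stop at the first failure, no "fixing" - here is a minimal Python version. It reads any path you give it; on a real drive you'd point it at the raw device, whose naming is platform-specific and outside this sketch:

```python
import os

def surface_scan(path, sector_size=512, chunk_sectors=256):
    """Test-read every sector once. Abort on the first read error -
    no retries, no repairs. Returns (sectors_read_ok, first_bad_offset),
    where first_bad_offset is None if the whole surface read cleanly."""
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    try:
        total = os.lseek(fd, 0, os.SEEK_END)
        os.lseek(fd, 0, os.SEEK_SET)
        step = sector_size * chunk_sectors
        offset = 0
        while offset < total:
            want = min(step, total - offset)
            try:
                data = os.read(fd, want)
            except OSError:
                # First failure: report and stop, rather than hammering the
                # drive with retries.
                return offset // sector_size, offset
            if len(data) < want:
                # Short read: treat as a failure and stop.
                return (offset + len(data)) // sector_size, offset + len(data)
            offset += want
        return offset // sector_size, None
    finally:
        os.close(fd)
```

Note what it deliberately does not do: it never opens the drive for writing, and the moment anything fails, it stops and reports rather than looping.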
Is the hard drive safe to write to?
Certain contexts (e.g. requests to recover deleted data) define the hard drive as being unsafe to write to, because material outside the file system's mapped space is not protected from overwrites. Otherwise, the drive may be considered safe for writes if the file system contains no physical errors, plus the hardware and physical hard drive must pass their tests.
Is the hard drive installation safe to run?
In addition to all of the above, this requires the presence of active malware to be excluded - and in practice, this may form the bulk of your mOS use. There are many challenges here, given that even a combination of anti-malware scanners is likely to miss some things that you'd have to look for and manage by hand.
Is it safe to network?
This is about what's on the rest of the network (i.e. are all other computers on the LAN clean, and is WiFi allowing arbitrary computers to join this network?) and whether your system is adequately separated (NAT, firewall, patching of network edge code) from the 'net. The latter question has to be asked twice; for the mOS (if you are networking from it) and for the hard drive installation when this is finally booted again.
Boot safety
Many boot CDs are not safe, because they will automatically chain into booting the hard drive unless a key is pressed within a short time-out period. This is particularly dangerous, given that the chaining process ignores CMOS settings that would otherwise define what hard drives are visible, what device should boot next, or whether the hard drive should boot at all.
Every bootable Windows installation disk from Microsoft fails this test. Standard Bart PE is safe here, but has a plugin setting that can select the same automatic chaining to hard drive behavior. The Bart-based Avast! antivirus scanning CD enables this, and thus fails the test, as may other Bart-based boot disk projects.
Many mOS tasks take a lot of unattended clock time to run, starting with RAM testing, then hard drive surface testing, then virus scanning or searches for data to recover. If anything should cause the system to reset (remember, this is a sick PC being maintained) then it will fall through to boot the hard drive, thus running possibly-infected code in possibly-bad RAM that writes to an at-risk hard drive and file system. Disaster!
Even if you have tested RAM, hard drive etc. and now consider the hardware to be trustworthy, an unexpected reset will usually dispel that trust. The only safe thing for a mOS boot disk to do under such circumstances, is to stop and wait for a keypress (with no time-out fall-through).
It's tempting to have a mOS disk boot straight into a RAM check, as that's generally what one should do after unexpected lockups or resets, but that can make it easy to miss spontaneous resets during an overnight RAM test. You'd wake up, see the test still running and no errors found, but for all you know it may have reset and restarted the test a dozen times.
Testing RAM
At the time one tests RAM and perhaps core motherboard and processor logic, one can assume nothing to be safe. So the mOS and the programs you run from it should not write to the hard drive, or even read it (as a bad-RAM bit-flip can change a "read disk" to a "write disk").
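That read-becomes-write risk isn't hypothetical. In the ATA command set, the READ SECTORS opcode (0x20) and the WRITE SECTORS opcode (0x30) differ by exactly one bit, so a single bit-flip in bad RAM is enough to turn a harmless read into a destructive write:

```python
ATA_READ_SECTORS  = 0x20   # ATA "READ SECTORS" command opcode
ATA_WRITE_SECTORS = 0x30   # ATA "WRITE SECTORS" command opcode

# The two opcodes differ in exactly one bit (bit 4)...
diff = ATA_READ_SECTORS ^ ATA_WRITE_SECTORS
assert bin(diff).count("1") == 1

# ...so one flipped bit in RAM turns a read into a write:
corrupted = ATA_READ_SECTORS ^ 0x10
assert corrupted == ATA_WRITE_SECTORS
print("one bit separates read from write")
```

Which is exactly why, until RAM has passed testing, the mOS shouldn't even read the hard drive.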
I haven't figured out how to integrate RAM testers such as MemTest86, MemTest86+, SIMMTester etc. into the same CDR as Bart, so I use a separate CDR for this. I then remove the CDR after it's booted and swap it for another that will boot but not access the hard drive, such as a different RAM tester or a DOS boot CDR.
I'd love a RAM tester that showed system temperatures, but I haven't seen one that does.
Hardware compatibility
One would prefer a mOS that works on any hardware without needing "special" drivers added to it, and Bart generally passes this test, unless oddball add-on hard drive cards or RAID are in use. Even S-ATA hard drives on the current i945 chipsets will work from Bart.
Bart will detect USB storage devices at boot time, but won't detect changes to these thereafter. So you'd have to insert a USB stick before boot, and not pull it out, swap it, add others, or add the same one back after changing the contents elsewhere. However, Bart treats card reader devices as containing removable "disks", so you can add and swap SD cards etc. quite happily. For this and other reasons, I generally use SD cards instead of USB sticks.
You cannot remove the Bart disk during a Bart session, and that means no burning to CDRs from most PCs.
Memory management
A mOS has to take no risks that are not initiated by the user, and on a sick PC, everything is a risk until testing and management re-establishes it as safe.
So a mOS should not make assumptions about the hard drive contents; automatically access, "grope" material or run code from the hard drive, or commence networking. That also means not using the hard drive for swapping to virtual memory or temp file workspace - and that makes memory management a challenge, especially when some of the available RAM is already used as a RAM drive.
A standard Bart CDR will create a small RAM drive and locate Temp files there, and will prompt before commencing networking. I've modified mine to leave networking inactive, and added on-demand facilities to change RAM drive size, relocate Temp location, create a page file on a selected hard drive volume, and start networking if required.
My usual SOP is then to divert Temp to a newly-created location on the hard drive, once I've tested the physical hard drive and logical file system. If RAM is low, I shrink the RAM disk and create a page file on the hard drive, before starting programs that will need Temp workspace (e.g. anti-malware scanners that extract archives to scan the contents).
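That SOP can be sketched in a few lines of Python: create a fresh Temp folder on the now-trusted drive, and hand tools an environment that points there. (The scanner name and paths below are hypothetical, and a real Bart setup would do this with batch files rather than Python.)

```python
import os

def divert_temp(new_temp):
    """Create a fresh Temp folder on the (already-tested) hard drive and
    return an environment pointing TEMP/TMP there, for launching tools
    that need scratch space - e.g. scanners that extract archives."""
    os.makedirs(new_temp, exist_ok=True)
    env = dict(os.environ)
    env["TEMP"] = env["TMP"] = new_temp
    return env

# Usage (hypothetical scanner name and path):
# import subprocess
# subprocess.run(["scanner.exe", "C:\\"], env=divert_temp(r"C:\mos_temp"))
```

The key design point: the RAM drive's contents are untouched, and nothing writes to the hard drive until you've decided it has earned that trust.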
Testing hard drive
The usual advice is to use hard drive vendors' tools, or ChkDsk /R. Neither are really acceptable, but for different reasons.
Hard drive vendor tools tend to display a summary S.M.A.R.T. report, which can be "OK" even when S.M.A.R.T. detail shows multiple failed sectors have been detected and "fixed". The surface scan may be useful, as long as it doesn't "fix" anything. Then there may be "deeper" tests that are data-destructive, such as "write zeros to disk" or a pseudo-"low level format".
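To illustrate the gap between the summary and the detail, here's a hedged Python sketch that scans smartctl-style attribute lines for the sector-trouble counters (attribute IDs 5, 197 and 198) and flags nonzero raw counts - damage the drive has already logged, even while the normalized values sit comfortably above threshold and the one-line summary says "OK":

```python
# Attribute IDs from the S.M.A.R.T. attribute table; nonzero raw values
# here mean the drive has remapped, or is struggling with, sectors -
# regardless of what the pass/fail summary says.
SUSPECT = {
    5:   "Reallocated_Sector_Ct",
    197: "Current_Pending_Sector",
    198: "Offline_Uncorrectable",
}

def hidden_damage(attribute_lines):
    """Return (name, raw_count) for suspect attributes with nonzero raw values."""
    findings = []
    for line in attribute_lines:
        fields = line.split()
        if len(fields) < 2 or not fields[0].isdigit():
            continue
        attr_id, raw = int(fields[0]), fields[-1]
        if attr_id in SUSPECT and raw.isdigit() and int(raw) > 0:
            findings.append((SUSPECT[attr_id], int(raw)))
    return findings

report = [
    "  5 Reallocated_Sector_Ct   0x0033  100  100  036  Pre-fail  Always  -  27",
    "194 Temperature_Celsius     0x0022  041  053  000  Old_age   Always  -  41",
]
print(hidden_damage(report))   # [('Reallocated_Sector_Ct', 27)] - yet summary says OK
```

Here the drive has quietly remapped 27 sectors, but because the normalized value (100) is still above the threshold (036), the summary verdict stays "OK".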
ChkDsk /R is unacceptable because it's orientated to "fixing" things without prompting you for permission. First it tests the file system logic and "fixes" it, so that when it tests the surface of the disk, it can "fix" bad clusters by re-writing the contents elsewhere in the file system. All of which is unacceptably destructive if you'd rather have recovered data first.
Instead of these, I use HD Tune for Windows, which will run from Bart CDR just fine. It ignores the contents of the hard drive entirely, reports S.M.A.R.T. detail that is updated in real time even during other tests, can test hard drives over USB and memory cards (neither will show S.M.A.R.T.), and displays the hard drive's operating temperature (again, updated in real time) no matter which test is currently in progress.
Testing file system and data recovery
I haven't any good tools for NTFS, alas, so I use ChkDsk without any parameters that would cause it to "fix" anything. If the file system is FATxx and hard drive is < 137G, I prefer to use DOS mode Scandisk, as that allows interactive repair, and DiskEdit for when I'd rather do such repairs manually.
If data is to be recovered, I have a few semi-automatic tools in my Bart that are sometimes effective - but before using them, I prefer to copy off files and do a BING image backup of any NT-family partition that is to remain bootable.
I usually keep core user data on a 2G FAT16 volume, so if that requires data recovery, it's small enough to peel off as raw CDR-sized slabs using DiskEdit. I can then reformat the stricken data volume and get the PC back into the field, while I operate on the volume as pasted onto a different and working hard drive. FAT16's large data clusters mean any file that fits in a single cluster can be recovered intact even if the FATs are trashed.
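A toy model of why that works: on a 2G FAT16 volume the clusters are 32K, and a directory entry alone records a file's name, start cluster and size - the FAT chain is only needed to find clusters after the first. A minimal Python illustration (a fake in-memory "disk", not real FAT parsing):

```python
CLUSTER = 32 * 1024   # cluster size on a 2G FAT16 volume

# Fake disk: a list of cluster-sized slots. A FAT16 directory entry
# records (name, start cluster, size); the FAT chain only matters for
# clusters after the first one.
disk = [b""] * 16
disk[5] = (b"hello " * 200).ljust(CLUSTER, b"\x00")   # a 1200-byte file

def recover(direntry, disk, cluster=CLUSTER):
    """Recover a file from its directory entry alone - possible only if
    it fits in one cluster, so no FAT chain is needed."""
    name, start, size = direntry
    if size <= cluster:
        return disk[start][:size]    # single cluster: FATs not consulted
    raise ValueError("multi-cluster file: needs the (trashed) FAT chain")

print(recover(("NOTES.TXT", 5, 1200), disk) == b"hello " * 200)  # True
```

With 32K clusters, that covers most documents and small data files outright; only larger files depend on the FAT chains surviving.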
Malware management
A mOS will often have to work on infected systems, so it must never run code from them unless the user explicitly initiates this. That requirement goes beyond not booting from the hard drive, to not including the hard drive in the Path, and not handling material on the hard drive in a "rich" enough way to expose exploitable surfaces.
A mOS should not "grope" the hard drive for other reasons, e.g. in case some of the material includes bad sectors that would bog the mOS down in retry loops, or cause it to crash on deranged file system logic. When your file manager of choice lists files, you want no scratching around in file content for icons or metadata.
Standard Bart is safe in this regard. There's no "desktop" in the hard drive file system sense, and the file managers that are included do not grope metadata when they "list" files. However, many Bart projects use XPE or similar to improve the UI by using Explorer.exe as the shell; I prefer not to do this, because doing so may expose exploitable surfaces.
A mOS should perform no automatic disk access - thus no indexing service, no System Restore, no resident antivirus and no thumbnailing.
Many malware scanners and integration checkers require registry access, and that is complicated when you have booted from a different OS installation. If simply used as-is, these tools would report results based on the Bart CDR's registry, not the one on the hard drive.
The solution for Bart is the RunScanner plugin. This redirects registry access to the hard drive installation for the tool that is run through it, but not child processes that this tool may launch. There are parameters to specify which hives to use, and to delay the switch from Bart to hard drive hives so that the tool can initialize itself according to the former before use on the latter.
Any tests that rely on run-time behavior (such as LSPFix, some driver and service managers, and most rootkit scanners) will not return meaningful results during a mOS session (unless you wish to test the behavior of the mOS). In particular, drivers and services may list a mixture of "live" and registry-derived results, thus blending these from the mOS and hard drive. Interpret such results with care.
Any changes you make from mOS will not be monitored by the hard drive installation. This is generally desirable, as it prevents malware intervention, or Windows itself updating registry references so that malware may remain integrated. But it also means no System Restore undoability, and the quarantine material from various scanners may be lost, and/or not work when attempts are made to restore these later.
For this reason, I usually scan to kill when dealing with intrafile code infectors and other hard-core malware, but scan to detect only, when it comes to commercial malware that I expect to pose more problems due to botched removal than malicious persistence. I defer clean-up of those to a later Safe Mode Cmd Only boot, so that undoability is maintained.
When it comes to rootkits, these are exposed to normal scanning just like any other inert file. Tools that aim to detect rootkit behavior will not have any such behavior to detect, unless the mOS has triggered the malware into action. It can also help to save integration checks (such as HiJackThis or Nirsoft utility logs) as redirected by RunScanner and compare these with logs saved from Safe Mode or normal Windows. Unexplained differences may suggest rootkit activity during your "Safe" or normal Windows sessions, unless the mOS tests were done based on the mOS's registry rather than the hard drive's hives.
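The comparison step can be as simple as a set difference over the two saved logs. A hedged sketch (the log lines are whatever HiJackThis or the Nirsoft tool emits; the entries below are invented examples):

```python
def unexplained(offline_log, live_log):
    """Entries the registry-redirected mOS scan saw, but the scan from the
    live (normal or Safe Mode) session did not - candidates for items a
    rootkit is hiding while Windows is running."""
    return sorted(set(offline_log) - set(live_log))

# Invented example entries, HiJackThis-style:
offline = ["O4 - HKLM\\..\\Run: updater.exe",
           "O4 - HKLM\\..\\Run: igfxtray.exe"]
live    = ["O4 - HKLM\\..\\Run: igfxtray.exe"]
print(unexplained(offline, live))   # ['O4 - HKLM\\..\\Run: updater.exe']
```

Anything in the offline log but missing from the live log deserves a hard look - provided you're sure the offline scan really was redirected to the hard drive's hives and not reading the mOS's own registry.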
Beyond the mOS session
A mOS disk can be useful even when not being used as a mOS. For example, it can Autorun to provide tools for use from Windows, be used as storage space for updates and installables, and can operate as a diskette builder for tasks the mOS cannot do from itself.
As an example of the last, my own Bart CDR can spawn bootable diskettes for BING, RAM testers, and various DOS boot disks containing various tools. The DOS boot diskettes can then access the Bart CDR and thus extend the range of available tools via an appropriate Path.
I also set up my Bart so that I can test the menu UI against the output build, even before it is committed to disk, and the installation of some tools can double up to be run from both the host system and from Bart CDRs built from it. This is accomplished mainly by careful use of base-relative paths within the nu2menu (the native shell for standard Bart) and batch file logic.
I've found nu2menu to be useful in its own right, and use a stand-alone menu to manage the entire Bart-building process - updating the scanners, selecting wallpaper and UI button graphics, editing and testing the nu2menus, accessing Bart forums and plugin documentation, and building the CDRs themselves.
As at September 2006, Microsoft provide no mOS for modern Windows, but you can build one for yourself by using Bart PE Builder (and perhaps you should!). Out of the box, Bart CDR meets the criteria for a safe mOS, but you can botch this when "enhancing" it.
There are all sorts of jobs one can do from a mOS, but mainly, it's:
- Diagnostics
- Data recovery
- Malware management
Re-establishing safe functioning
Running a PC assumes various levels of functionality work perfectly. When a PC "doesn't work", one has to re-establish each of these in turn, before one can stand on each to reach the next. At each stage, one has to not use what cannot yet be trusted.
Is it safe to plug into the mains?
PCs with metallic rattles when shaken, may not be - a loose metal object could short out circuitry and burn it out. It's best to check inside the case for loose objects; salty wet dust; metal objects, flakes or rinds; power connecters danging onto pins on circuit boards,
and also that the power supply is set to the correct mains voltage, and that rain didn't fall into the case and power supply while the PC was being carried in.
Is the hardware logic safe?
This mainly goes about RAM, but implicit in a 12-hour RAM test is a test to see whether the PC can stay running that long, or will spontaneously reset or hang. The ideal RAM checker would also display processor and motherboard temperatures, and possibly operating voltages, best served with latched lowest and highest detected values.
Is the hard drive safe to use?
That goes about the physical condition of the hard drive, and is tested retrospectively by looking at the S.M.A.R.T. details, and also by test-reading every sector on the drive. It's important not to beat the drive to death; ideally, the surface test should avoid getting stuck in retry loops when a failing sector is encountered, and should abort when the first bad sector is found. The testing process should not attempt to "fix" anything!
Is the hard drive safe to write to?
Certain contexts (e.g. requests to recover deleted data) define the hard drive as being unsafe to write to, because material outside the file system's mapped space is not protected from overwrites. Otherwise, the drive may be considered safe for writes if the file system contains no physical errors, plus the hardware and physical hard drive must pass their tests.
Is the hard drive installation safe to run?
In addition to all of the above, this requires the presence of active malware to be excluded - and in practice, this may form the bulk of your mOS use. There are many challenges here, given that even a combination of anti-malware scanners is likely to miss some things that you'd have to look for and manage by hand.
Is it safe to network?
This goes about what's on the rest of the network (i.e. are all other computers on the LAN clean, and is WiFi allowing arbitrary computers to join this network?) and whether your system is adequately separated (NAT, firewall, patching of network edge code) from the 'net. The latter question has to be asked twice; for the mOS (if you are networking from it) and for the hard drive installation when this is finally booted again.
Boot safety
Many boot CDs are not safe, because they will automatically chain into booting the hard drive unless a key is pressed within a short time-out period. This is particularly dangerous, given that the chaining process ignores CMOS settings that would otherwise define what hard drives are visible, what device should boot next, or whether the hard drive should boot at all.
Every bootable Windows installation disk from Microsoft fails this test. Standard Bart PE is safe here, but has a plugin setting that can select the same automatic chaining to hard drive behavior. The Bart-based Avast! antivirus scanning CD enables this, and thus fails the test, as may other Bart-based boot disk projects.
Many mOS tasks take a lot of unattended clock time to run, starting with RAM testing, then hard drive surface testing, then virus scanning or searches for data to recover. If anything should cause the system to reset (remember, this is a sick PC being maintained) then it will fall through to boot the hard drive, thus running ?infected code in ?bad RAM that writes to an at-risk hard drive and file system. Disaster!
Even if you have tested RAM, hard drive etc. and now consider the hardware to be trustworthy, an unexpected reset will usually dispell that trust. The only safe thing for a mOS boot disk to do under such circumstances, is to stop and wait for a keypress (with no time-out fall-through).
It's tempting to have a mOS disk boot straight into a RAM check, as that's generally what one should do after unexpected lockups or resets, but that can make it easy to miss spontaneous resets during an overnight RAM test. You'd wake up, see the test still running and no errors found, but for all you know it may have reset and restarted the test a dozen times.
Testing RAM
At the time one tests RAM and perhaps core motherboard and processor logic, one can assume nothing to be safe. So the mOS and the programs you run from it should not write to the hard drive, or even read it (as a bad-RAM bit-flip can change a "read disk" to a "write disk").
I haven't figured out how to integrate RAM testers such as MemTest86, MemTest86+ , SIMMTester etc. into the same CDR as Bart, so I use a separate CDR for this. I then remove the CDR after it's booted and swap it for another that will boot but not access hard drive, such as a different RAM tester or a DOS boot CDR.
I'd love a RAM tester that showed system temperatures, but I haven't seen one that does.
Hardware compatibility
One would prefer a mOS that works on any hardware without having to have "special" drivers added to it, and Bart generally passes this test, unless oddball add-on hard drive cards or RAID are in use. Even S-ATA hard drives on the current i945 chipsets will work from Bart.
Bart will detect USB storage devices at boot time, but won't detect changes to these thereafter. So you'd have to insert a USB stick before boot, and not pull it out, swap it, add others, or add the same one back after changing the contents elsewhere. However, Bart treats card reader devices as containing removable "disks", so you can add and swap SD cards etc. quite happily. For this and other reasons, I generally use SD cards instead of USB sticks.
You cannot remove the Bart disk during a Bart session, and that means no burning to CDRs from most PCs.
Memory management
A mOS has to take no risks that are not initiated by the user, and on a sick PC, everything is a risk until testing and management re-establishes it as safe.
So a mOS should not make assumptions about the hard drive contents; automatically access, "grope" material or run code from the hard drive, or commence networking. That also means not using the hard drive for swapping to virtual memory or temp file workspace - and that makes memory management a challenge, especially when some of the available RAM is already used as a RAM drive.
A standard Bart CDR will create a small RAM drive and locate Temp files there, and will prompt before commencing networking. I've modified mine to leave networking inactive, and added on-demand facilities to change RAM drive size, relocate Temp location, create a page file on a selected hard drive volume, and start networking if required.
My usual SOP is then to divert Temp to a newly-created location on the hard drive, once I've tested the physical hard drive and logical file system. If RAM is low, I shrink the RAM disk and create a page file on the hard drive, before starting programs that will need Temp workspace (e.g. anti-malware scanners that extract archives to scan the contents).
Testing hard drive
The usual advice is to use hard drive vendors' tools, or ChkDsk /R. Neither are really acceptable, but for different reasons.
Hard drive vendor tools tend to display a summary S.M.A.R.T. report, which can be "OK" even when S.M.A.R.T. detail shows multiple failed sectors have been detected and "fixed". The surface scan may be useful, as long as it doesn't "fix" anything. Then there may be "deeper" tests that are data-destructive, such as "write zeros to disk" or a pseudo-"low level format".
ChkDsk /R is unacceptable because it's orientated to "fixing" things without prompting you for permission. First it tests the file system logic and "fixes" it, so that when it tests the surface of the disk, it can "fix" bad clusters by re-writing the contentrs elsewhere in the file system. All of which is unacceptably destructive if you'd rather have recovered data first.
Instead of these, I use HD Tune for Windows, which will run from Bart CDR just fine. It ignores the contents of the hard drive entirely, reports S.M.A.R.T. detail that is updated in real time even during other tests, can test hard drives over USB and memory cards (neither will show S.M.A.R.T.), and displays the hard drive's operating temperature (again, updated in real time) no matter which test is currently in progress.
Testing file system and data recovery
I haven't any good tools for NTFS, alas, so I use ChkDsk without any parameters that would cause it to "fix" anything. If the file system is FATxx and hard drive is < 137G, I prefer to use DOS mode Scandisk, as that allows interactive repair, and DiskEdit for when I'd rather do such repairs manually.
If data is to be recovered, I have a few semi-automatic tools in my Bart that are sometimes effective - but before using them, I prefer to copy off files and do a BING image backup of any NT-family partition that is to remain bootable.
I usually keep core user data on a 2G FAT16 volume, so if that requires data recovery, it's small enough to peel off as raw CDR-sized slabs using DiskEdit. I can then reformat the stricken data volume and get the PC back into the field, while I operate on the volume as pasted onto a different and working hard drive. FAT16's large data clusters mean any files that can fit in a single cluster, can be recovered intact even if the FATs are trashed.
Malware management
A mOS will often have to work on infected systems, so it must never run code from them unless the user explicitly initiates this. That requirement goes beyond not booting from the hard drive, to not including the hard drive in the Path, and not handling material on the hard drive in a "rich" enough way to expose exploitable surfaces.
A mOS should not "grope" the hard drive for other reasons too; some of the material may include bad sectors that would bog the mOS down in retry loops, or deranged file system logic that could crash it. When your file manager of choice lists files, you want no scratching around in file content for icons or metadata.
Standard Bart is safe in this regard. There's no "desktop" in the hard drive file system sense, and the file managers that are included do not grope metadata when they "list" files. However, many Bart projects use XPE or similar to improve the UI by using Explorer.exe as the shell; I prefer not to do this, because doing so may expose exploitable surfaces.
A mOS should perform no automatic disk access - thus no indexing service, no System Restore, no resident antivirus and no thumbnailing.
Many malware scanners and integration checkers require registry access, and that is complicated when you have booted from a different OS installation. If simply used as-is, these tools would report results based on the Bart CDR's registry, not the one on the hard drive.
The solution for Bart is the RunScanner plugin. This redirects registry access to the hard drive installation for the tool that is run through it, but not for child processes that this tool may launch. There are parameters to specify which hives to use, and to delay the switch from Bart to hard drive hives so that the tool can initialize itself according to the former before use on the latter.
Any tests that rely on run-time behavior (such as LSPFix, some driver and service managers, and most rootkit scanners) will not return meaningful results during a mOS session (unless you wish to test the behavior of the mOS). In particular, drivers and services may list as a mixture of "live" and registry-derived results, blending information from the mOS and the hard drive installation. Interpret such results with care.
Any changes you make from mOS will not be monitored by the hard drive installation. This is generally desirable, as it prevents malware intervention, or Windows itself updating registry references so that malware may remain integrated. But it also means no System Restore undoability, and the quarantine material from various scanners may be lost, and/or not work when attempts are made to restore these later.
For this reason, I usually scan to kill when dealing with intrafile code infectors and other hard-core malware, but scan to detect only when it comes to commercial malware that I expect to pose more problems through botched removal than through malicious persistence. I defer clean-up of those to a later Safe Mode Cmd Only boot, so that undoability is maintained.
When it comes to rootkits, these are exposed to normal scanning just like any other inert file. Tools that aim to detect rootkit behavior will not have any such behavior to detect, unless the mOS has triggered the malware into action. It can also help to save integration checks (such as HiJackThis or Nirsoft utility logs) as redirected by RunScanner and compare these with logs saved from Safe Mode or normal Windows. Unexplained differences may suggest rootkit activity during your "Safe" or normal Windows sessions, unless the mOS tests were done based on the mOS's registry rather than the hard drive's hives.
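The log-comparison step can be sketched as a simple set difference. This is a minimal illustration, assuming each log holds one entry per line; the entries shown are made up, not real HiJackThis output.

```python
# Minimal sketch of comparing an offline (mOS, RunScanner-redirected)
# integration log against one saved from normal Windows. An entry that
# exists on disk but never shows up in the live log is the suspicious
# case: something may be hiding it during "live" sessions.

def diff_logs(offline: str, live: str):
    off = set(offline.splitlines())
    liv = set(live.splitlines())
    return {
        "only_live": sorted(liv - off),     # seen live but not offline
        "only_offline": sorted(off - liv),  # on disk, hidden from live scans
    }

# Fabricated example logs; entry names are invented for illustration.
offline_log = "O4 - HKLM\\..\\Run: realthing.exe\nO4 - HKLM\\..\\Run: lurker.exe\n"
live_log    = "O4 - HKLM\\..\\Run: realthing.exe\n"

result = diff_logs(offline_log, live_log)
print(result["only_offline"])   # entries the live scan failed to show
```

As the post notes, a difference only means something if both logs were drawn from the same hives; an offline log accidentally built from the mOS's own registry would produce meaningless noise.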
Beyond the mOS session
A mOS disk can be useful even when not being used as a mOS. For example, it can Autorun to provide tools for use from Windows, be used as storage space for updates and installables, and can operate as a diskette builder for tasks the mOS cannot do from itself.
As an example of the last, my own Bart CDR can spawn bootable diskettes for BING, RAM testers, and various DOS boot disks containing various tools. The DOS boot diskettes can then access the Bart CDR and thus extend the range of available tools via an appropriate Path.
I also set up my Bart so that I can test the menu UI against the output build, even before it is committed to disk, and the installation of some tools can double up to be run from both the host system and from Bart CDRs built from it. This is accomplished mainly by careful use of base-relative paths within the nu2menu (the native shell for standard Bart) and batch file logic.
I've found nu2menu to be useful in its own right, and use a stand-alone menu to manage the entire Bart-building process - updating the scanners, selecting wallpaper and UI button graphics, editing and testing the nu2menus, accessing Bart forums and plugin documentation, and building the CDRs themselves.
10 September 2006
"...but you're Not a Programmer"
Never trust a programmer who says something can't be done (so don't worry about it)...
When programmers say something can't be done, they mean they can't see a way to do it - and after all, they made the code, so surely they would know, right?
When an interested non-programmer asks themselves if something can be done, they work from a higher level of abstraction, disregarding the details of how it might be done.
The programmer's views are informed by the intended behaviour of what they made, and may be blind to the full range of possible behaviors.
Look at the track record of exploitability that results from design safety failure; the MS Office macro malware generation, the email script generation, malware like Melissa that scripts Outlook to send itself out, and so on.
The stupidity/perfidity question (see previous blog entry, it's not Googleable yet) arises at this point, but either way, the result is the same; trust in these programmers may be misplaced. Either they weren't aware of the implications of what they created, and are thus likely to fail the lower levels of the Trust Stack, or they have a hidden agenda that fails the upper levels of that stack.
Either way, I wouldn't stop worrying because they tell me to.
Mistake or Malice?
I was going to call this "Perfidity or Stupidity", until I saw the lean number of Google hits for "perfidity", and found that Chambers Dictionary doesn't know the word. In any case, it may be better to avoid the pejorative aspects of "stupidity" :-)
We perform the Turing Test every day (and often lose) whenever we have to consider whether material is from a human (e.g. email from a user) or a bot (e.g. email from a user's infected computer). This is a generalized identity/category test, similar to "is this my bank's site, or is it a phishing site?"
When we find something that sucks (or is downright dangerous) we also ask ourselves; were they stupid and did this by accident, or are they perfidious and did this to further a hidden and possibly malicious agenda?
This question runs as a vertical slash through the Trust Stack. Things that would be errors in the lower levels of the stack if there by mistake, would in fact be a failure in the top levels of the stack if they were there intentionally. This applies particularly at the safety and design layer of the stack, which may be where most exploitability occurs.
07 September 2006
DRM Revocation List
DRM is inherently user-hostile, acting against user interests under a cover of stealth and mystery. As such, I'd classify it as commercial malware. It's politically significant as by design, it facilitates control over users' digital resources to be exercised by global agencies. So there are problems at the top of the "trust stack" - but this post isn't about that.
One of the features of DRM is the revocation list concept. This is a list of applications approved to work with DRM-protected material, and the EUL"A" will typically allow this list to be updated in real time as the list's originators see fit.
The idea is that if media playing device X was cracked to subvert DRM protection of content Y, then the ability to use device X would be revoked.
For now, I'll leave aside the obvious questions, such as:
- Who controls the list?
- Who else controls the list, i.e. as associates or legally-mandated?
- Who else controls the list, by hacking into the list updates?
- How well-bounded is the list mechanism to what it is supposed to do?
- Who is accepted as a DRM content provider, and on what basis?
For example, if a media provider was caught dropping open-use rootkits from "audio CDs" (hey, that would never happen, right?) one possible remedy would be to revoke all of that provider's rights over all of their material. In essence, such a provider would be found unfit to exert any sort of control over any users, and be swept off the DRM playing field.
Obviously, this would materially reduce the value of that media provider to the artists who are contracted to it - in effect, the penalty undermines the contract with the artist, because the provider can no longer protect the artist's content. So for a certain period (say, a year) the artist has the right to drop their obligations to the provider and seek a new contract elsewhere. The reverse right does not apply, i.e. the provider cannot drop the artist if the artist chooses to stay.
Further, it has to be accepted that all existing protected material from that provider is now unprotected - so for a similar period, artists can sue the provider for damages, either as a class group or outside of any class action.
If this would seem to tip the scales to the extent that the media provider's business would be smashed, then fine. After all, were the provider an individual caught "hacking", they'd likely lose their livelihood and do jail time - why should larger-scale criminals get off more leniently? Do we really want to leave known exploiters in the provider pool?
I'll bet there are no plans in place to use DRM revocation lists to defend users' rights in this manner, even though it's technically feasible. That speaks volumes on why one should IMO reject this level of real-time DRM intrusion. On the other hand, once you open up DRM revocation for broader use, why not use it to apply global government censorship, etc.? After all, there's nothing to limit it within the borders of any particular jurisdiction.
27 August 2006
Safety First
Personal computers have gone from geek hobby, to useful private tools, to ubiquitous globally-connected life repositories. Today, we're as likely to conduct finance and store memories on the PC as we are to cruise around the web - on the same PC.
That means "make it easy to use" should change to "make it easy to use safely". Yet the level of knowledge needed to use the PC is way lower than the skills needed to use it safely, and IMO it borders on criminal negligence to deepen that trend. That's like making handguns lighter, with less trigger pull required, so that toddlers could use them "more easily".
To use a PC...
...you need to know how to press a button, how to click one of two mouse buttons, and enough of the alphabet to type. It's useful to know about "folders", but Vista seeks to remove even that semi-requirement. If you can click what you see, you can use the PC.
To use a PC safely...
...you need to know about file types and the levels of risk they represent, and that information is hidden from you by default. In fact, the UI that makes things "so easy" does nothing to help you assess risk, nor is it constrained to act within the risk indicators it displays.
You also need an unhealthy amount of de-spin and paranoia. Almost everything you see has to be reversed through the mirror of suspicion; "value" isn't, "free" can gouge you, "click to unsubscribe" means "don't click else you'll get more spam", and so on. The endless cynicism and lies can be damaging to the psyche, and I often wonder if usability studies into UI stress ever take this factor into account.
What we need to know
You wouldn't dream of wiring a house so that it wasn't possible to know which sockets and wires were "live", nor would you make firearms such that it was impossible to tell if they were loaded or not, had the safety catch on or not, or which way they were pointing.
So why do we accept computers that use the meaningless term "open" that hides what a file can do when used? Why do we use an interface that makes no distinction between what is on our PC and what is from some arbitrary system out on the 'net?
The basic things we need to know at all times are:
- Whether a file is "code" or "data"
- Whether something is on our PC or from outside the PC
- Where we are in the file system
As owners of our own PCs, we have the right to do whatever we like with any file on our systems. We may not expect that right when we use our employer's PC at the workplace, but at home, there is no-one who should override our control.
History
In the old days of DOS, you had to know more to use a PC, but beyond that, all you needed to know was not to run files with names ending in .exe, .com or .bat or boot off strange disks. Hidden files weren't that hidden, and it was quite easy to manage risky files because they wouldn't be run unless triggered from one of two editable files. Only when viruses injected themselves into existing code files or boot code, did one need antivirus tools to clean up.
The first safety failure was loss of the data/code distinction, when Windows 95 hid file name extensions by default, and when MS Office applications started auto-running macros within "data" files. Windows 95 also hid hidden files, as well as where you were in the file system.
The second safety failure was when Internet Explorer 4 shell integration blurred the distinction between what was on your PC and what was not. Local material was often presented in a web-like way, while the local file browser could seamlessly display off-PC content. The new web standards also allowed web sites to spawn dialog boxes that looked as if they were part of the local system, as well as drop and run code on visitors' computers.
The third safety failure includes all mechanisms whereby code can run without user consent; from CDs that autorun when inserted, to code that gropes inside file content when all we wanted was to see a list of files, to "network services" that allow any entity on the Internet to silently initiate a machine dialog, as exploited by the Slammer, Lovesan and Sasser generations.
The fourth safety failure will be a loss of awareness as to where we are within the file system. As long as different files in different parts of the file system can present themselves as being "the same", we need to know the full and unambiguous path to a file to know which it is.
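The ambiguity can be stated concretely: two files with the same display name in different folders are distinct objects, and only the full path separates them. A tiny sketch, with folder and file names invented for the example:

```python
# Two files named "statement.pdf" in different folders are not "the same"
# file; only the full, unambiguous path tells them apart.
import os
import tempfile

root = tempfile.mkdtemp()
for sub in ("My Documents", "Incoming"):
    os.makedirs(os.path.join(root, sub), exist_ok=True)
    with open(os.path.join(root, sub, "statement.pdf"), "w") as f:
        f.write(sub)  # different content, same display name

paths = [os.path.join(root, sub, "statement.pdf")
         for sub in ("My Documents", "Incoming")]

# Same basename, different identity:
print(os.path.basename(paths[0]) == os.path.basename(paths[1]))  # True
print(paths[0] == paths[1])                                      # False
```

A UI that shows only the basename has thrown away exactly the information that distinguishes the trusted copy from the hostile one.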
Vista
Vista tries to make computing easier by dumbing down "where things are", but makes "safe hex" as difficult as ever. File name extensions and paths are still hidden, as are hidden files and ADS. You still need to know an arcane list of file name extensions, you still need to bang the UI to actually show you these, and if anything the OS is more likely to ignore the extension when "opening" the file, acting on embedded information hidden within the file.
Just as the web enraptured Microsoft in the days of Internet Explorer 4, so "search" is enrapturing them now. Today's users may rarely type in a URL to reach a site; they are more likely to search for things via Google, and Vista brings the same "convenience" to your own PC. You're encouraged to ignore the namespace tree of locations and names, and simply type what you want so that the OS can guess what you want and "open" it for you.
The other growing risk in Vista, is that of automatic metadata processing. The converse of "any non-trivial code has bugs" is "if you want bugless code, keep it trivial". The traditional DOS directory entry is indeed trivial enough to be pretty safe, but I suspect the richer metadata embraced by NTFS is non-trivial enough to offer exploit opportunities - and that's before you factor in 3rd-party extensibility and malicious "metadata handlers".
Vista continues the trend of XP in that metadata and actual file content may be groped when you display a list of files, or when you do nothing at all (think thumbnailers, indexers etc.). If something manages to exploit these automatically-exposed surfaces, it allows loose malware files to run without any explicit integration you might detect and manage using tools such as HiJackThis or MSConfig. Removing such files may be impossible, if all possible OSs that can read the file system are also exploitable by the malicious content.
Exploitability
By now, we know that any code can be found to be exploitable, so that the actual outcome of contact with material may bear no resemblance to what the code was supposed to do with it. Some have suggested this means we should abandon any pretence at a data/code distinction, and treat all material as if it posed the high risk of code.
IMO, that's a fatuous approach. Use of the Internet involves interaction with strangers, where identity is not only unprovable, but meaningless. That requires us to safely deal with content from arbitrary sources; only when we initiate a trust relationship (e.g. by logging in to a specific site) does identity start to mean something.
Instead, the message I take home from this is that any subsystem may need to be amputated at any time - including particular file types, irrespective of how safe they are supposed to be. For example, if .RTF files are found to be exploitable, I'd want to elevate expected risk of .RTF to that of code files until I know the risk is patched.
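That "amputation" policy can be sketched as a risk table with an escalation override - a hypothetical illustration with invented type mappings, not any real OS mechanism:

```python
# Sketch of "amputating" a file type: a base risk table plus an override
# set, so a normally-"data" type can be escalated to code-level risk
# until the relevant patch arrives. Types and levels are illustrative.

BASE_RISK = {".txt": "data", ".rtf": "data", ".exe": "code", ".bat": "code"}
escalated = set()

def risk(ext: str) -> str:
    if ext in escalated:
        return "code"                    # treat as hostile until patched
    return BASE_RISK.get(ext, "code")    # unknown types default to worst case

escalated.add(".rtf")                    # an .RTF exploit is in the wild
print(risk(".rtf"), risk(".txt"))        # code data
```

The important design choice is that the override is additive and temporary; nothing ever moves in the safe direction automatically.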
A pervasive awareness of exploitability dictates the following:
- No system-initiated handling of arbitrary material
- Strict file type discipline, i.e. abort rather than "open" any mis-labeled content
Making safety easier
Vista tries hard in the wrong places (user rights), though that approach is becoming more appropriately tuned - but that's another subject! What we need is:
Run vs. View or Edit
Let's see the death of "open"; it means nothing, in a context where we need meaning.
First, we need to re-create a simple data vs. code distinction, and force the OS to respect this so that we as users can trust what is shown to us.
Every time material is shown to us in a context that allows us to interact with it, we should be shown whether it is code or data. It's no use hiding this in a pop-up toolbar, an extra column in detail view, some peripheral text in a status bar, or behind a right-click and Properties.
Then we need to use terms such as Run or Launch to imply code behavior, as opposed to View or Edit to imply data behavior. You could View or Edit code material too, but doing so would not run it!
It would also help to show the file type as well, so that if a type that should be "data" becomes "code" due to code exploitability, we could avoid that risk. It's important that the system derives this type information in a trivial way (i.e. no deep metadata digging) and respects it (i.e. material is always handled as the type shown).
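Such a trivially-derived verb might look like the sketch below - the extension list and the Run/View split are illustrative assumptions, not a real shell API:

```python
# Sketch of an honest verb derived trivially from the file name alone
# (no metadata digging): the extension picks Run vs View, and the
# handler is expected to respect the displayed type. Mappings illustrative.

CODE_EXTS = {".exe", ".com", ".bat", ".scr", ".pif"}

def verb_for(filename: str) -> str:
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    return "Run" if ext in CODE_EXTS else "View"

print(verb_for("holiday.jpg"))      # View
print(verb_for("holiday.jpg.scr"))  # Run - the double extension doesn't fool it
```

Because only the final extension is consulted, the classic "photo.jpg.scr" trick is shown for what it is, and anything that cannot be typed trivially defaults to the data verb rather than being guessed at from embedded content.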
Safe handling and context awareness
Microsoft has juggled with various "My..." concepts for a while now, but there's no safety aspect to this as yet. Indeed, Microsoft encourages you to mix arbitrary downloads and incoming attachments with your own data files, as well as recommending the storage of infectable code files within "My Documents" as a way of hiding them from System Restore.
What we need is a new clue; that incoming material and infectable files are not safe to treat as if they were data files, nor should they be mixed with your data files that would be restored in the case of some system melt-down. I've applied this clue for many years now, and it does make system management a lot easier.
Once you herd all incoming and risky material into one subtree, you can add safer behaviors for that subtree - such as always showing file name extensions and paths, and never digging into metadata even to display embedded icons.
These safer behaviours can be wrapped up as a "Safe View" mode, which can then be automatically applied to other hi-risk contexts, such as when new drives are discovered, or the system is operated in Safe Mode, or when one is running the maintenance OS from DVD boot.
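The subtree rule is easy to express: once risky material lives under one known location, a single path test decides whether Safe View behaviors apply. A hedged sketch, with the folder name invented for the example:

```python
# Sketch of the "herd risky material into one subtree" rule: paths under
# the incoming subtree get Safe View behaviors (show extensions and
# paths, no metadata digging). The folder name is an assumption.
import os

INCOMING = os.path.normpath("C:/Incoming")

def safe_view(path: str) -> bool:
    """True if Safe View's cautious handling should apply to this path."""
    p = os.path.normpath(path)
    return p == INCOMING or p.startswith(INCOMING + os.sep)

print(safe_view("C:/Incoming/mail/attach.doc"))   # True: full caution
print(safe_view("C:/My Documents/letter.doc"))    # False: trusted data area
```

The same predicate could be forced to True in the other high-risk contexts mentioned above - newly discovered drives, Safe Mode, or a mOS session - without any per-file configuration.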
Change the mindset
Currently, we encourage newbies to jump in and use everything. Then we suggest to interested survivors that they learn and apply some safety tips.
Newbies may see a suggestion to turn on the firewall, install and update an antivirus scanner, and swallow patches automatically - but we don't talk about file type risks, and we encourage them to send attachments without suggesting they should avoid doing so.
IMO, the first mention of sending email to more than one recipient should explain and recommend the use of BCC:, and users who know nothing about file types or the need for meaningful descriptive message text should not be shown how to send attachments.
In other words, safety should be learned at the same time as how to do things, rather than offered as an afterthought, and it should be as easy to operate a PC safely as it is to operate it at all.
That means "make it easy to use" should change to "make it easy to use safety". Yet the level of knowledge needed to use the PC is way lower than skills needed to use it safely, and IMO it borders on criminal negligence to deepen that trend. That's like making handguns lighter with less trigger pull required so that toddlers could use them "more easily".
To use a PC...
...you need to know how to press a button, click one of two mouse buttons, and familiarity with the alphabet so you can type. It's useful to know about "folders", but Vista seeks to remove even that semi-requirement. If you can click what you see, you can use the PC.
To use a PC safely...
...you need to know about file types and the levels of risk they represent, and that information is hidden from you by default. In fact, the UI that makes things "so easy" does nothing to help you assess risk, nor is it constrained to act within the risk indicators it displays.
You also need an unhealthy amount of de-spin and paranoia. Almost everything you see has to be reversed through the mirror of suspicion; "value" isn't, "free" can gouge you, "click to unsubscribe" means "don't click else you'll get more spam", and so on. The endless cynicism and lies can be damaging to the psyche, and I often wonder if usability studies into UI stress ever take this factor into account.
What we need to know
You wouldn't dream of wiring house so that it wasn't possible to know what sockets and wires were "live" or not, nor would you make firearms such that it was impossible to tell if they were loaded or not, had the safety catch on or not, or which way they were pointing.
So why do we accept computers that use the meaningless term "open" that hides what a file can do when used? Why do we use an interface that makes no distinction between what is on our PC and what is from some arbitrary system out on the 'net?
The basic things we need to know at all times are:
- Whether a file is "code" or "data"
- Whether something is on our PC or from outside the PC
- Where we are in the file system
As owners of our own PCs, we have the right to whatever we like with any file on our systems. We may not expect that right when we use our employer's PC at the workplace, but at home, there is no-one who should override our control.
History
In the old days of DOS, you had to know more to use a PC, but beyond that, all you needed to know was not to run files with names ending in .exe, .com or .bat or boot off strange disks. Hidden files weren't that hidden, and it was quite easy to manage risky files because they wouldn't be run unless triggered from one of two editable files. Only when viruses injected themselves into existing code files or boot code, did one need antivirus tools to clean up.
The first safety failure was loss of the data/code distinction, when Windows 95 hid file name extensions by default, and when MS Office applications started auto-running macros within "data" files. Windows 95 also hid hidden files, as well as where you were in the file system.
The second safety failure was when Internet Explorer 4 shell integration blurred the distinction between what was on your PC and what was not. Local material was often presented in a web-like way, while the local file browser could seamlessly display off-PC content. The new web standards also allowed web sites to spawn dialog boxes that looked as if they were part of the local system, as well as drop and run code on visitors' computers.
The third safety failure includes all mechanisms whereby code can run without user consent; from CDs that autorun when inserted, to code that gropes inside file content when all we wanted was to see a list of files, to "network services" that allow any entity on the Internet to silently initiate a machine dialog, as exploited by the Slammer, Lovesan and Sasser generations.
The fourth safety failure will be a loss of awareness as to where we are within the file system. As long as different files in different parts of the file system can present themselves as being "the same", we need to know the full and unambiguous path to a file to know which it is.
Vista
Vista tries to make computing easier by dumbing down "where things are", but makes "safe hex" as difficult as ever. File name extensions and paths are still hidden, as are hidden files and ADS. You still need to know an arcane list of file name extensions, you still need to bang the UI to actually show you these, and if anything the OS is more likely to ignore the extension when "opening" the file, acting on embedded information hidden within the file.
Just as the web enraptured Microsoft in the days of Internet Explorer 4, so "search" is enrapturing them now. Today's users may rarely type in a URL to reach a site; they are more likely to search for things via Google, and Vista brings the same "convenience" to your own PC. You're encouraged to ignore the namespace tree of locations and names, and simply type what you want so that the OS can guess what you want and "open" it for you.
The other growing risk in Vista, is that of automatic metadata processing. The converse of "any non-trivial code has bugs" is "if you want bugless code, keep it trivial". The traditional DOS directory entry is indeed trivial enough to be pretty safe, but I suspect the richer metadata embraced by NTFS is non-trivial enough to offer exploit opportunities - and that's before you factor in 3rd-party extensibility and malicious "metadata handlers".
Vista continues the trend of XP in that metadata and actual file content may be groped when you display a list of files, or when you do nothing at all (think thumbnailers, indexers etc.). If something manages to exploit these automatically-exposed surfaces, it allows loose malware files to run without any explicit integration you might detect and manage using tools such as HiJackThis or MSConfig. Removing such files may be impossible, if all possible OSs that can read the file system are also exploitable by the malicious content.
Exploitability
By now, we know that any code can be found to be exploitable, so that actual outcome of contact with material may bear no resemblence to what the code was supposed to do with it. Some have suggested this means we should abandon any pretence at a data/code distinction, and treat all material as if it posed the high risk of code.
IMO, that's a fatuous approach. Use of the Internet involves interaction with strangers, where identity is not only unprovable, but meaningless. That requires us to safely deal with content from arbitrary sources; only when we initiate a trust relationship (e.g. by logging in to a specific site) does identity start to mean something.
Instead, the message I take home from this is that any subsystem may need to be amputated at any time - including particular file types, irrespective of how safe they are supposed to be. For example, if .RTF files are found to be exploitable, I'd want to elevate expected risk of .RTF to that of code files until I know the risk is patched.
A pervasive awareness of exploitability dictates the following:
- No system-initiated handling of arbitrary material
- Strict file type discipline, i.e. abort rather than "open" any mis-labeled content
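Strict file type discipline can be sketched in a few lines. This is an illustrative sketch, not a real shell component: it compares a file's claimed extension against well-known content signatures ("magic numbers" - the short list here is an example, not exhaustive) and refuses mis-labeled or unknown content rather than "opening" it.

```python
import os

# A few well-known file signatures; illustrative only, not exhaustive.
MAGIC = {
    ".jpg": [b"\xff\xd8\xff"],
    ".png": [b"\x89PNG\r\n\x1a\n"],
    ".gif": [b"GIF87a", b"GIF89a"],
    ".zip": [b"PK\x03\x04"],
}

def check_type_discipline(path: str) -> bool:
    """Return True only if the file's content matches what its
    extension claims; mis-labeled content is rejected rather than
    'opened' with whatever handler the extension would suggest."""
    ext = os.path.splitext(path)[1].lower()
    signatures = MAGIC.get(ext)
    if signatures is None:
        return False  # unknown type: abort rather than guess
    with open(path, "rb") as f:
        head = f.read(16)
    return any(head.startswith(sig) for sig in signatures)
```

A real system would know far more types, but the principle stands: when the label and the content disagree, stop.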
Making safety easier
Vista tries hard in the wrong places (user rights), though that approach is becoming more appropriately tuned - but that's another subject! What we need is:
Run vs. View or Edit
Let's see the death of "open"; it means nothing, in a context where we need meaning.
First, we need to re-create a simple data vs. code distinction, and force the OS to respect this so that we as users can trust what is shown to us.
Every time material is shown to us in a context that allows us to interact with it, we should be shown whether it is code or data. It's no use hiding this as a pop-up toolbar, an extra column in detail view, some peripheral text in a status bar, or behind a right-click and Properties.
Then we need to use terms such as Run or Launch to imply code behavior, as opposed to View or Edit to imply data behavior. You could View or Edit code material too, but doing so would not run it!
It would also help to show the file type as well, so that if a type that should be "data" becomes "code" due to code exploitability, we could avoid that risk. It's important that the system derives this type information in a trivial way (i.e. no deep metadata digging) and respects it (i.e. material is always handled as the type shown).
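Such a display could be driven by something as trivial as an extension table. A sketch (the type lists are hypothetical examples): risk is derived from the file name alone, with no metadata digging, plus a watch list that elevates a "data" type to code risk while a handler exploit is unpatched - the .RTF scenario above.

```python
CODE_TYPES = {".exe", ".com", ".scr", ".pif", ".bat", ".cmd", ".vbs", ".js"}
DATA_TYPES = {".txt", ".jpg", ".png", ".rtf", ".doc"}

# Types temporarily treated as code because a handler exploit is
# known but not yet patched.
EXPLOIT_WATCH = set()

def risk_class(filename):
    """Derive the displayed risk trivially from the file name alone -
    no deep metadata digging - so the file can always be handled as
    the type shown."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in CODE_TYPES or ext in EXPLOIT_WATCH:
        return "Run (code)"
    if ext in DATA_TYPES:
        return "View/Edit (data)"
    return "Unknown (treat as code)"
```

When an .RTF exploit surfaces, `EXPLOIT_WATCH.add(".rtf")` is the whole "amputation"; when it's patched, the type drops back to data risk.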
Safe handling and context awareness
Microsoft has juggled with various "My..." concepts for a while now, but there's no safety aspect to this as yet. Indeed, Microsoft encourages you to mix arbitrary downloads and incoming attachments with your own data files, as well as recommending the storage of infectable code files within "My Documents" as a way of hiding them from System Restore.
What we need is a new clue; that incoming material and infectable files are not safe to treat as if they were data files, nor should they be mixed with your data files that would be restored in the case of some system melt-down. I've applied this clue for many years now, and it does make system management a lot easier.
Once you herd all incoming and risky material into one subtree, you can add safer behaviours for that subtree - such as always showing file name extensions and paths, and never digging into metadata even to display embedded icons.
These safer behaviours can be wrapped up as a "Safe View" mode, which can then be automatically applied to other hi-risk contexts, such as when new drives are discovered, or the system is operated in Safe Mode, or when one is running the maintenance OS from DVD boot.
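A "Safe View" listing can be sketched in a few lines. This assumes a hypothetical incoming-files subtree; the point is that it touches nothing but directory entries - the files themselves are never opened, so no embedded icons, thumbnails or metadata handlers get a chance to run.

```python
import os

def safe_view(root):
    """List everything under an 'incoming' subtree, showing each
    file's full path and extension. Only directory entries are read;
    file content is never opened or interpreted."""
    listing = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            listing.append(os.path.join(dirpath, name))
    return sorted(listing)
```

The same no-content-groping listing could then be applied wholesale to other high-risk contexts - newly discovered drives, Safe Mode, a maintenance OS.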
Change the mindset
Currently, we encourage newbies to jump in and use everything. Then we suggest to interested survivors that they learn and apply some safety tips.
Newbies may see a suggestion to turn on the firewall, install and update an antivirus scanner, and swallow patches automatically - but we don't talk about file type risks, and we encourage them to send attachments without suggesting they should avoid doing so.
IMO, the first mention of sending email to more than one recipient should explain and recommend the use of BCC:, and users who know nothing about file types or the need for meaningful descriptive message text should not be shown how to send attachments.
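In code terms, BCC: means recipient addresses travel only in the delivery envelope, never in the headers that every recipient receives. A sketch using Python's standard email and smtplib modules (addresses and server are placeholders):

```python
import smtplib
from email.message import EmailMessage

def build_bcc_message(sender, subject, body):
    """Build a message whose headers reveal no recipient addresses;
    the real recipient list is supplied only at send time."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = "Undisclosed recipients:;"  # classic empty group
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_bcc(msg, recipients, host="localhost"):
    """Each recipient is named only in the SMTP envelope, so none
    of them sees the others' addresses."""
    with smtplib.SMTP(host) as server:
        server.send_message(msg, to_addrs=recipients)
```

Compare that with dropping a hundred addresses into To:, which hands every recipient (and every address-harvesting worm on their PCs) the whole list.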
In other words, safety should be learned at the same time as how to do things, rather than offered as an afterthought, and it should be as easy to operate a PC safely as it is to operate it at all.
19 August 2006
The Trust Stack
Some readers will be familiar with the OSI network stack model, which helps clarify issues when troubleshooting network or connectivity problems. I propose a similar 7-layer stack for evaluating trustworthiness within Information Technology contexts.
Each layer rests on the layer below it, and cannot be effective if a lower layer fails. Conversely, if the top layer is rotten, then all the layers below are no longer relevant.
Goals
Are the goals of your vendor compatible with your own, or are they contrary to these? The adage "he who pays the piper, calls the tune" applies here; if it is not you who provide the vendor's income stream, then it's not likely to be your needs that are uppermost in the vendor's mind. Even if the vendor derives income from you, the vendor can still afford to ignore your needs if you are perceived to have no choice other than to buy their product.
Intention
What is the intention of the specific thing you are evaluating? If it is intended to do something that is contrary to your interests, then at best it can be trusted only to work against your interests in the way intended.
Policy
The vendor may commit itself to policies (e.g. a privacy policy) or may be compelled to act within policies laid down by law. For example, a privacy policy may define what the vendor would do with your data, were they to be the sole agent with access to it.
Security
This is about limiting who has what abilities within the system. For example, a privacy policy is meaningless if entities other than the vendor also have access to data held by their system.
Safety
This is about the level of risk (or range of possible consequences) within the system, and whether this is constrained to users' expectations. It's no use securing access so that only trusted employees can operate the system, if the system takes greater risks than those trusted employees expect when they operate it.
Sanity
This is about whether the system acts as it was designed to do, or whether defects create opportunities for it to act completely differently. For example, a defective JPG handler can escalate the risk of handling "graphic data" to running raw code; something that bears no resemblance to what the handler was created to do.
Granularity
The above six layers are top-down, though some contexts may make more sense if Policy is considered to run above Intention. The seventh layer is different; it rides next to everything else, and each instance encompasses all of the other six layers.
That's because the vendor can open the system out to additional players at every level. There may be co-owners (or successive owners, e.g. after a buy-out) with different goals; different coding teams may have different intentions, different departments or legislation may stipulate different policies, and the actual code may re-use modules developed by different vendors.
In addition to problems within each of these players, problems can arise at the interface between them. In an earlier blog post, I mentioned the rule that "users know less than you think", i.e. no matter how little you expect users to understand about your product, they will understand (or care) even less. This applies not only between end-user and product, but between each coding level and the objects re-used by those coders.
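The six top-down layers can be expressed as a short evaluation loop (Granularity rides alongside, so it's left out of this sketch). The key property is that evaluation stops at the first rotten layer: nothing below it can redeem the system.

```python
# The six stacked layers, top-down; Granularity is orthogonal.
LAYERS = ["Goals", "Intention", "Policy", "Security", "Safety", "Sanity"]

def evaluate(verdicts):
    """Walk the stack from the top. The first failing layer ends
    the evaluation - e.g. a fine privacy Policy is moot if the
    vendor's Goals are hostile, and solid Security is moot if the
    code's Sanity fails (the exploitable JPG handler)."""
    for layer in LAYERS:
        if not verdicts.get(layer, False):
            return "Untrusted: fails at " + layer
    return "Trusted at all six layers"
```

Note that the missing-entry default is failure: a layer you cannot evaluate is a layer you cannot trust.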
Trusted Computing
Now apply these tests to the concept of "Trusted Computing". The vendor who coined the phrase derives income from us who pay for Windows, but is the monopoly provider of the OS required to run applications written for it. The stated goals and intentions of the OS are to leverage the interests of certain business partners over ours via DRM; in fact, "Trusted Computing" initially meant that media corporations could "trust" users' systems to be constrained from violating corporate interests.
So already, we have problems at the top of the trust stack, especially when we look at the track record of previous behavior. We also have a top-level granularity problem; the OS vendor empowers a class of "media providers" to leverage their rights over ours.
How is this class of "media providers" bounded? Free speech requires anyone to be accepted as a provider of content, which means we're expected to trust anyone to have rights that trump our own on our own systems. Or you could constrain these powers to a small cartel of well-resourced corporations, trading freedom of expression for putative trustworthiness of computing. Given that one of the largest media corporations has already been caught dropping rootkits onto PCs from "audio" CDs, I don't have much faith there.
If you look at the problem from the bottom up, it doesn't get better - the raw materials out of which "trusted computing" is to be built, are already so failure-prone as to require regular repairs that are limited to monthly patching for convenience.
Bottom line
We already trust computing, even though evidence proves it's unworthy of trust.
When we allow software to download and apply patches without explicitly reviewing or testing these, we break the best practice of allowing no unauthorised changes to be made to the system. Why would we give blanket authorization to any patches the vendor chooses to push?
It isn't because we trust the vendor's goals and intentions, given the vendor has already been caught pushing through user-hostile code (Genuine Windows Notification) as a "critical update".
And it isn't because we trust the quality of the code, given the patches are to fix defects in the same vendor's existing production code.
It's because the code fails so often that it has become impossible to keep up with all the repairs required to fix it. In other words, we trust the system because it is so untrustworthy that we can no longer evaluate its trustworthiness, and have to trust the untrustworthy vendor instead.
Why I Avoid Norton AV
What's the most important thing about an antivirus scanner?
That it detects 97% rather than "only" 95% of malware in a test?
Nope, not even close.
That you can keep it updated?
Closer, but that isn't it either.
No; the most important thing is that it works.
Norton Antivirus, on the other hand, is deliberately designed to not work - if it "thinks" it's being used in breach of its license conditions.
A while back, I added a new step in the process of disinfecting systems; right at the end of the Bart CDR boot phase, after doing the scans and checking integration points, I rename away all Temp and "Temporary Internet Files" locations so that any missed malware running from there will be unreachable when I boot Windows for the first time.
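That rename-away step can be sketched as follows. The paths are hypothetical examples (real systems have per-user variants), and in practice this is done from the Bart CD boot, not from within the infected Windows:

```python
import os

# Example locations only; a real pass would enumerate per-user profiles.
TEMP_DIRS = [
    r"C:\Windows\Temp",
    r"C:\Documents and Settings\User\Local Settings\Temp",
    r"C:\Documents and Settings\User\Local Settings\Temporary Internet Files",
]

def rename_away(paths):
    """Rename each location so that any startup reference pointing
    into it dangles; missed malware launched from there simply won't
    be found when Windows boots for the first time."""
    renamed = []
    for path in paths:
        if os.path.isdir(path):
            target = path + ".QUARANTINE"
            os.rename(path, target)
            renamed.append(target)
    return renamed
```

The renamed directories also preserve their contents for later inspection, unlike a blind "delete all temp files" pass.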
Over the last few weeks, I noticed several PCs would start Windows with an "Activate Norton Antivirus" nag, usually as "Your trial period has expired". Norton AV would not only not run, but would also not provide access to its quarantine or logs of previous scans.
Generally, I just shrug, uninstall it as the useless PoS it has proven to be, and replace it with a decent free scanner that works. I'm not going to phone clients to query license status, ask for product keys, etc. and as I neither sell nor recommend Norton, I wouldn't bother to troubleshoot it further unless paid clock time to do so.
However, I did do a Google( Norton Activation ) and that was verrry interesting...
http://www.extremetech.com/article2/0,1697,1395940,00.asp
http://www.extremetech.com/article2/0,1697,1396474,00.asp
http://www.eweek.com/article2/0,1895,1779931,00.asp
...as well as plenty of forum shrieks:
http://techrepublic.com.com/5208-6239-0.html?forumID=52&threadID=175000
http://www.computing.net/security/wwwboard/forum/15607.html
http://www.mcse.ms/archive182-2005-11-1890449.html
http://forums.pcworld.co.nz/archive/index.php/t-53985.html
As usual, it doesn't fully meet the vendor's needs even as it screws the users:
http://www.theregister.co.uk/2003/09/22/norton_antivirus_product_activation_cracked/
Symantec offers the following hoops to jump through...
http://service1.symantec.com/SUPPORT/nav.nsf/docid/2003093015493306?Open&src=w
http://service1.symantec.com/SUPPORT/custserv.nsf/docid/2004122212374346?Open&src=w&docid=2003093015493306&nsf=nav.nsf&view=docid
http://service1.symantec.com/SUPPORT/custserv.nsf/docid/2005092709273146?Open&src=w&docid=2003093015493306&nsf=nav.nsf&view=docid
http://service1.symantec.com/SUPPORT/custserv.nsf/docid/2005092311012446?Open&src=&docid=20040324164239925&nsf=SUPPORT%5Cemeacustserv.nsf&view=eedocid&dtype=&prod=&ver=&osv=&osv_lvl=
...but why should you accept this mission? Why pay scumbags who embed commercial malware within a product ostensibly designed to help you counter malware? Tackling malware is tough enough without having to worry about whether each hidden file or hook is part of Norton's self-serving un-documented user-hostile code, or some other malware.
21 July 2006
Keylogger vs. Keylogger Blocker?
I followed up a bit on this Simon Scatt entity here:
http://sunbeltblog.blogspot.com/2006/07/simon-scatt-plague-on-security-blogs.html
Check out the "comments" to that blog post; it seems that the company he's/it's punting makes both keylogging and keylogger-blocking software. I wonder which wins?
The "Windows Constitution"
I'm very happy to see this:
http://www.microsoft.com/presspass/newsroom/winxp/WindowsPrinciples.mspx
Like a constitution, it provides a yardstick by which behavior can be judged. If problems arise in the future, one could link complaints to this statement as a way of highlighting any divergence from Microsoft's stated intentions.
Users Know Less Than You Think
I like to find big meta-truths that span platforms, and here's one:
No matter how little you expect users to understand about your product, they will understand even less.
I could replace "understand" with "know" or "care", for that matter.
This has been fairly obvious when it comes to end users and software authors; cue horror stories of floppies stapled to letters or copied onto A4 paper, and old jokes about cup-holders and power outages.
It's less obvious, but I suspect equally true, whenever one programmer's code is used by another - either as peers co-coding a project, or one software vendor using code objects (or APIs) created by another software vendor. Those cases also involve a "user" (the coder using the API or object) and "producer" (the author of the API or object).
For example, after spending months developing an ActiveX control for use by other programmers, you may think it reasonable to expect them to read your ReadMe.txt that contains caveats such as "parameter values must be in range". But someone who is using hundreds of such re-usable code objects in a project may assume how they work without reading any of those ReadMe.txt files.
A good test of acceptable expectations is: "What if everyone did what I'm about to do?"
This is also a good bulwark against badly-behaved software. What if all installed applications:
- Required admin rights to run?
- Kept pestering the user to "register"?
- Added themselves to the top of the Start Menu?
- Added themselves to the startup axis to "fast start"?
- Added their own ad-hoc systems to pull down updates?
- Added their own underfootware content indexing system?
- Patched into the shell to process file content whenever files are listed?
- Smashed file associations to just one "open" action for their own application?
When "Search" Finds Trouble
Once a bit of grey chit-chat is done, this post will lightly consider some "Social Engineering" risks of HTML and search.
This blog gets updated slightly more often than my web site, which says more about the web site than this blog! Readers used to less than one post a month may wonder about my relative bloggorrhea of late; I guess it's catch-up time, and there's more to talk about. I often find I don't have enough time to go through the newsgroups, but enough time to post a blog or start on a web page, and that is what I'll do now.
Often long blog silences are because I've been (far) away from keyboard, as I'm blessed with reasons to travel combined with an ongoing enjoyment of doing so. I'd love to tell you about some excellent news in Vista, but I still need to pin down what/how I can tell you and what is still NDA.
One thing I can tell you, is that there are 200+ fake anti-spyware programs out there, and one of these is likely to be what my recent dogged commenter is pushing:
The thing is, "Simon Scatt" posts exactly the same comment to every post I make, no matter what that post is about - which smells like a bot. A combination of tech skills required to bot past the OCR challenge, plus the ethical dubiousness to actually do so, bodes poorly for the safety of whatever they are trying to push at you. Just Say No, and don't click that link!
Speaking of links clicked, I got a fright the last time I fired up this blog at http://quirke.blogspot.com to edit it. I thought "uh-oh, it's finally happened..." until I realised the link I'd entered should have been http://cquirke.blogspot.com
HTML being what it is, I could quite easily show you http://cquirke.blogspot.com as a link, which is reason enough to consider HTML unfit for use as a generic "rich text" medium between arbitrary (untrusted) entities. Retro-fitting anti-phishing logic to web browsers is an appropriate way to run after the horse after it's bolted from the stables, because web browsers have to live and breathe HTML. But a horse has no place in the living-room, and using HTML throughout the system as generic "rich text" (e.g. for email message "text" and elsewhere) has exactly that effect.
A bigger risk is that folks rarely type explicit URLs anymore; they either re-use links like the ones above, or they increasingly search rather than link. I wanted to link my text "200+ fake anti-spyware programs" to the CastleCops article that raised this issue, but as I didn't keep the link, I tried to search for it instead. I found something else I used that is a bit more topical, but the same search results could just as easily lead me to click something that bites.
Microsoft's been in love with search since MS Office started pushing Find Fast. A search for "Find Fast" is revealing; first comes an unrelated bit of foistware, then comes a flood of "how do I get rid if this thing?" links, starting with one from Microsoft themselves. Yet with each new version of MS Office, Find Fast has been more difficult to get rid of, and XP has the same thing built into the OS. Now that "Google envy" is kicking in, search is likely to pervade Vista's UI.
I do see some logic in this, in that the newest computers may better carry the overhead of search indexing, and Microsoft has leveraged deep new OS features (i.e. beyond the efficiencies of NTFS) in Vista to minimize this impact. We may well find that once we use it, the expected adverse impact isn't as bad as we'd expect and we may choose to live with it.
But performance impact is only one objection to dumbing down computer use from folder navigation to guessing at names or content. More worrying are the safety implications - an opportunity is created for incoming files to do what that top link in the "Find Fast" search does; thrust something inappropriate (and probably dangerous) into your face instead of what you wanted or expected.
This blog gets updated slightly more often than my web site, which says more about the web site than this blog! Readers used to less than one post a month may wonder about my relative bloggorrhea of late; I guess it's catch-up time, and there's more to talk about. I often find I have not enough time to go through the newsgroups, but enough time to post a blog or start on a web page, and now that is what I'll do.
Often long blog silences are because I've been (far) away from keyboard, as I'm blessed with reasons to travel combined with an ongoing enjoyment of doing so. I'd love to tell you about some excellent news in Vista, but I still need to pin down what/how I can tell you and what is still NDA.
One thing I can tell you, is that there are 200+ fake anti-spyware programs out there, and one of these is likely to be what my recent dogged commenter is pushing:
Simon Scatt said...
Many programms include spyware modules. Use anti-spyware for protect your privacy. As for me, I like professional anti-spy software like PrivacyKeyboard by Raytown Corporation LLC. You can download it here (URL snipped)
The thing is, "Simon Scatt" posts exactly the same comment to every post I make, no matter what that post is about - which smells like a bot. A combination of tech skills required to bot past the OCR challenge, plus the ethical dubiousness to actually do so, bodes poorly for the safety of whatever they are trying to push at you. Just Say No, and don't click that link!
Speaking of links clicked, I got a fright the last time I fired up this blog at http://quirke.blogspot.com to edit it. I thought "uh-oh, it's finally happened..." until I realised the link I'd entered should have been http://cquirke.blogspot.com
HTML being what it is, I could quite easily show you http://cquirke.blogspot.com as a link, which is reason enough to consider HTML unfit for use as a generic "rich text" medium between arbitrary (untrusted) entities. Retro-fitting anti-phishing logic to web browsers is an appropriate way to run after the horse after it's bolted from the stables, because web browsers have to live and breathe HTML. But a horse has no place in the living-room, and using HTML throughout the system as generic "rich text" (e.g. for email message "text" and elsewhere) has exactly that effect.
A bigger risk is that folks rarely type explicit URLs anymore; they either re-use links like the ones above, or they increasingly search rather than link. I wanted to link my text "200+ fake anti-spyware programs" to the CastleCops article that raised this issue, but as I didn't keep the link, I tried to search for it instead. I found something else I used that is a bit more topical, but the same search results could just as easily lead me to click something that bites.
Microsoft's been in love with search since MS Office started pushing Find Fast. A search for "Find Fast" is revealing; first comes an unrelated bit of foistware, then comes a flood of "how do I get rid if this thing?" links, starting with one from Microsoft themselves. Yet with each new version of MS Office, Find Fast has been more difficult to get rid of, and XP has the same thing built into the OS. Now that "Google envy" is kicking in, search is likely to pervade Vista's UI.
I do see some logic in this, in that the newest computers may better carry the overhead of search indexing, and Microsoft has leveraged deep new OS features (i.e. beyond the efficiencies of NTFS) in Vista to minimize this impact. We may well find that once we use it, the adverse impact isn't as bad as we'd feared, and we may choose to live with it.
But performance impact is only one objection to dumbing down computer use from folder navigation to guessing at names or content. More worrying are the safety implications - an opportunity is created for incoming files to do what that top link in the "Find Fast" search does; thrust something inappropriate (and probably dangerous) into your face instead of what you wanted or expected.
20 July 2006
Security End-Users Can Trust
Here, I'm referring purely to the mechanics of how a user can believe what is on the screen, have faith that passwords can't be cracked, and so on. They say that "justice must not only be done, but must be seen to be done"; by the same token, security must be seen to be done, or we are asking users to place blind faith in the good will and competence of those whom the user is obliged to trust.
The core problem is that humans are weaker than computers when it comes to the amount of pure and arbitrary data they can perceive and remember.
Display
No matter how tightly-coded the security validation logic, and how strong the key strength, what the user will eventually see (and typically, pay cursory attention to) will be a bunch of pixels on the screen. Anything that can fake those pixels, will get trusted.
We know that to prevent an attacker brute-forcing or forging something, there has to be a minimum amount of information present. We like passwords to be randomized across a minimum number of bits, and we provide hard-to-copy cues in forgery-resistant material. So we have foil strips and watermarks in bank notes, hard-to-manufacture copper-on-aluminium software installation CD-ROMs, and so on.
When it comes to displaying something in a forgery-resistant manner, we are restricted to pixels that have 16 million possible color values, of which users may distinguish 100 or so at best. The entire screen area may be as low as 640 x 480 pixels. Anything can set any pixel to any color, so there's no way to prevent forgery.
Even if pixels could be protected, humans cannot perceive and appreciate arbitrary pixel patterns. The brain will derive patterns from the raw data and the mind will evaluate these patterns. The raw data itself will not be fully "seen"; only a limited number of derived patterns.
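In information terms, the gap is stark. A quick Python sketch, using the rough "100 distinguishable colors" figure from above:

```python
import math

# Bits a machine can encode per pixel vs. bits a human can
# actually perceive per pixel (rough figure from the text).
pixel_bits = math.log2(16_777_216)  # 24-bit color
human_bits = math.log2(100)         # ~100 distinguishable colors

print(round(pixel_bits))     # 24
print(round(human_bits, 1))  # ~6.6
```

Nearly four times the information per pixel goes to the machine than the eye can use - and that's before the brain throws most of what it sees away as derived "pattern".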
Input
A large number of bits is the best-case strength for a key, applicable only if possible values are randomized over the key space, and if "cribs" (encrypted information for which the plain-text can be guessed) are not available. WEP failed both of these criteria; key strength was devalued by OEMs who left several bits of the key at known default values, and WEP traffic included a lot of stereotypical packets that provide "cribs".
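Each bit left at a known default halves the attacker's search space. A minimal Python sketch of the arithmetic (the bit counts here are illustrative, not the actual WEP figures):

```python
def effective_keyspace(total_bits: int, defaulted_bits: int) -> int:
    """Keys an attacker must try when `defaulted_bits` of a
    `total_bits` key are already known in advance."""
    return 2 ** (total_bits - defaulted_bits)

# A nominal 104-bit key with 24 bits left at vendor defaults
# shrinks to an 80-bit search - about 16 million times easier.
print(effective_keyspace(104, 24) == 2 ** 80)  # True
```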
Humans usually don't remember raw data; instead, they remember algorithms that can create this data. This skews values within the key space from a truly random spread, to preferred values that match the way humans think, and thus weakens the key strength.
So if it's easy to remember, it's easy to guess. If it's not easy to remember, then the user will write it down (or worse, enter it into a file on the system) and your strong password system becomes a weak and unmanaged token system. If you're going to use a token system anyway, then it's better to do this properly (e.g. biometrics, USB fobs, etc.).
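To see how much strength is lost, compare a truly random password against the human-friendly kind. The word-list size and habits below are my own illustrative assumptions:

```python
import math

# Truly random: 8 characters over ~94 printable ASCII symbols.
random_space = 94 ** 8
# Human-friendly: a common word plus a trailing digit (assumed habit).
memorable_space = 50_000 * 10

print(round(math.log2(random_space), 1))     # ~52.4 bits
print(round(math.log2(memorable_space), 1))  # ~18.9 bits
```

Every bit lost halves the attacker's work; those ~33 missing bits are roughly a ten-billion-fold drop in guessing effort.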
User-managed passwords may be acceptable if you just want the semblance of due diligence. You can point to your password policy, shrug about bad workers who break the policy, and seek a scapegoat whenever things go wrong. But once something that is essential to make things work is also disallowed, you lose management control, and examples of that abound.
Multiple Targets
Most assessments of key strength against brute-force or weighted-guess attacks assume that only one particular system is being targeted. The odds change considerably if you don't care what target you penetrate, and have millions of targets to choose from.
Instead of having to back off after 10 attempts due to some sort of password failure lock-out, you can simply make 9 attempts on a few thousand systems every hour or so. Eventually, you'll break into something, somewhere, and all stolen money is equally good.
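The arithmetic behind that is worth sketching. The per-guess odds and target count here are made-up figures, purely to show the shape of the curve:

```python
def p_break_somewhere(p_guess: float, guesses: int, systems: int) -> float:
    """Chance of at least one success across all systems,
    assuming each guess is independent (a simplification)."""
    p_one_survives = (1.0 - p_guess) ** guesses
    return 1.0 - p_one_survives ** systems

# 9 guesses at one-in-a-million odds against a single target: hopeless.
print(p_break_somewhere(1e-6, 9, 1))        # ~0.000009
# The same 9 guesses against 100,000 targets: better-than-even odds
# that *something*, *somewhere* falls over.
print(p_break_somewhere(1e-6, 9, 100_000))  # ~0.59
```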
Obviously, consumer ecommerce on the Internet presents this opportunity for a one-to-many relationship between attacker and victim. Slightly less obviously, it also facilitates a many-to-many relationship (the bane of database design) when the attacker can use multiple arbitrary malware-infected PCs as zombies from which to launch the attacks.
12 July 2006
Repairing XP's Firewall
This is another example of what happens when you break the "Safe should be boilerplate" rule (see the previous two posts).
Windows XP has a built-in firewall that is quite effective at keeping intruders out, but does little to prevent malware already in the system from calling home. This is in keeping with a current weakness in Microsoft's approach to Windows and malware - an almost total disregard for the need to reclaim PCs from the clutches of malware infection.
Once malware is active, it can take action against your defenses and tools, including XP's built-in firewall. This is as easy as attacking Safe Mode, and for the same reason; the firewall depends on registry settings that are easy to attack once you have admin-level access to the registry.
There's a good article on this situation here:
http://windowsxp.mvps.org/sharedaccess.htm
The previous post in this blog describes how to fix damaged Safeboot registry information; you can use similar tactics to fix the SharedAccess information that defines the firewall state, or you can use the sharedaccess.reg as linked from Ramesh's article mentioned above.
Repairing Safe Mode (Safeboot)
Here's an example of what happens when you break the "Safe must be boilerplate" rule.
Many folks rely on Safe Mode to tackle active malware, on the basis that malware is less likely to be running if much of the startup axis is avoided when Windows starts up. But Safe Mode is defined in the registry, so anything that gets to run in XP can kill it off - a risk that's always been there, and one that I've highlighted in private forums often enough.
Now that malware is doing what I'd predicted, there's a need to repair the damage when encountering it in the field. This blogger's article...
http://didierstevens.wordpress.com/2006/06/26/restoring-safeboot/
...describes three ways to do this, but these methods involve running Windows to do so. You may not want to do that, if the plan was to first do malware scans and cleanup in Safe Mode Command Only before allowing a possibly-infected Windows to run.
If you are using Bart PE CDR boot as your initial-contact malware cleanup platform, then you can repair Safe Mode, the XP firewall, and any other registry settings damage in the following generic way; by harvesting settings from previous registry states and merging these into the current registry, before you try to boot the damaged hard drive XP installation.
Understanding Bart registry access
Bart PE is a free utility that builds a bootable subset of XP, from which one can launch many tools written for Windows. A problem with running such tools from a Bart PE CDR boot is that the registry they see will be that of the Bart boot, and not the hard drive installation you are trying to maintain.
Bart integrates tools as "plugins", using .INF-based wrappers that serve to "install" the tool at the time the bootable CDR is compiled. One such plugin is RunScanner, which facilitates transparent access to the inactive registry hives on the hard drive as if they were in effect.
RunScanner patches into the process it's running and redirects all registry calls to treat the designated hives on the hard drive as if they were the active registry. Command line parameters for RunScanner control whether there is to be a delay before this kicks in (so that the program can initialize through the Bart registry first), which hives are to be used, and so on.
Child processes are generally not affected, and that can complicate the use of tools that spawn processes which access the registry, e.g. Nirsoft's RegScanner.
Once you combine RunScanner with Regedit (as a standard Bart build may do automatically), you are in a position to fix registry problems as if you were running the stricken installation - without the risk of actually running that installation!
Binding arbitrary registry hives via Regedit
XP's Regedit allows you to bind arbitrary hive files to HKEY_LOCAL_MACHINE as if they were part of the registry. The hives won't generally be used by the system, but it makes it a lot easier to browse them and export things you'd be interested in.
In Regedit, you'd highlight HKEY_LOCAL_MACHINE and then go File menu, Load Hive - an option that is greyed out as unavailable if anything other than HKLM is selected - and then browse for a hive to bind. You will be prompted for a name to use, and the hive will appear as an extra subtree under HKLM using that name.
You can then browse the added material and export parts as .REG files in the usual way (tips; select "Win9x/NT4 Registration Files" in the type drop-down if you want to save as 8-bit ANSI for easier editing outside XP, and force a .TXT extension to reduce the risk of inadvertent import).
To prepare this material for import into the active registry, you'd have to search-and-replace the name you used when binding the hive to the correct name for the active form. It helps if you use a unique name when binding the hive, to reduce the risk of replacing the wrong stuff!
Finding backup copies of registry hives
The XP registry hive files fall into two types; system and per-user. You won't see them unless your shell is set to show all files and contents of system locations; also, you should not hide file name extensions, otherwise you can't tell which are hives and which are .LOGs, etc.
The active system hive files are located in your System32\Config and have file names that have no extension at all, and the names are SYSTEM, SECURITY, SAM, SOFTWARE and DEFAULT.
The active per-user hive files are stored in the base of each user account subtree in "C:\Documents and Settings", plus there is the system user hive in your System32\Config\systemprofile directory. The file name is NTUSER.DAT.
As with Win9x, a backup of the initial system registry created when the OS is installed is kept as a last-resort fallback. These backups are held in the Repair directory within your Windows base directory with the same names as the active forms, though for some reason they may appear to have a .BAK extension when seen via Regedit's "load hive" dialog. Similar baseline backup hives are kept in System32\Config as .SAV files, but these may contain less content.
Unlike Win9x, XP does not automatically maintain a set of fresh registry backups. The "last known good" backup merely consists of part of one hive, stored within the same hive file; anything that corrupts the file will thus likely kill the internal backup.
I recommend setting up ERUNT as an overnight weekday Task to create such backups, keeping one for each day of the week by using the relevant ERUNT command line parameters in each Task. If you have ERUNT running in this way, you will have those backups to use, in addition to the ones I'll describe in a little while.
If running, the System Restore process creates fresh registry backups as part of each restore point. This is one reason why it's best not to purge System Restore, even though infected restore points will re-infect a clean system if they are restored. The file names are modified but obvious, and can be found in "C:\System Volume Information\_restore{**}\RP???\snapshot", where ** is an identifier unique to the particular XP installation, and ??? is the number of the restore point. It is by using an installation identifier that XP's SR avoids one installation's SR data overwriting another, as happens with Windows ME's \_RESTORE subtrees.
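One practical gotcha when hunting the freshest snapshot: the RP??? names must be compared numerically, since a lexical sort wrongly ranks RP9 above RP12. A small Python sketch of the idea:

```python
import re

def newest_restore_point(rp_names):
    """Pick the highest-numbered restore point from names like 'RP123'
    - usually the one holding the freshest registry backups."""
    def rp_number(name):
        m = re.fullmatch(r"RP(\d+)", name)
        return int(m.group(1)) if m else -1
    return max(rp_names, key=rp_number)

print(newest_restore_point(["RP9", "RP12", "RP101"]))  # RP101
```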
Note that the System Restore data is the only place where you will find backup copies of the per-user registry hives - an even more astonishing XP fragility than the lack of an independent automatic file-level hive backup facility!
Fixing the Safeboot registry subtree
Safe Mode is defined in HKLM\SYSTEM\CurrentControlSet\Control\Safeboot, where "CurrentControlSet" is a pointer to one of the ControlSet001, ControlSet002 etc. subtrees. When seen via Bart boot, RunScanner and Regedit, no control set is active, so you won't see any CurrentControlSet. In any case, you should operate on each explicit control set rather than just the current one!
The best previous registry hive from which to harvest Safeboot will usually be the SYSTEM (may be seen as System.bak via Regedit) in the Repair directory. Backups of SYSTEM in the RP???\snapshot directories may be more up to date, but if these date from after malware went active, they may be equally damaged or as malicious as what you are trying to fix.
Bind the hive file into Regedit as described previously; I'd use a nice unique name such as "!!ABCXYZ!!" when prompted. Then browse into the ControlSet, highlight Safeboot and go File menu, Export. I'd save as a "Win9x/NT4 Registration File" with the file name in quotes and using the .TXT extension; then I'd edit the file in Notepad or similar and replace all \!!ABCXYZ!!\ with \SYSTEM\ and save that. By importing that file via Regedit, I'd merge the Safeboot back into the corresponding ControlSet - to do all ControlSets, I'd edit the file accordingly before importing it.
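The search-and-replace step is mechanical enough to script. Here's a minimal Python sketch of it, using the same "!!ABCXYZ!!" placeholder as above (an ANSI export is assumed, so plain text handling suffices):

```python
def rebind_reg_export(reg_text: str, bound_name: str, real_name: str) -> str:
    """Rewrite an exported .REG file so keys under the temporary
    bound-hive name point at the real hive path instead."""
    return reg_text.replace("\\" + bound_name + "\\",
                            "\\" + real_name + "\\")

exported = r"[HKEY_LOCAL_MACHINE\!!ABCXYZ!!\ControlSet001\Control\SafeBoot\Minimal]"
print(rebind_reg_export(exported, "!!ABCXYZ!!", "SYSTEM"))
# [HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\SafeBoot\Minimal]
```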
Disclaimer: This stuff involves direct editing of the registry via Regedit using redirection tools from a free 3rd-party maintenance platform. Be careful, keep backups of everything you change, and eyeball to ensure you are in fact seeing the correct registry!
10 July 2006
"Safe" Should Be Boilerplate
You can think of "safe should be boilerplate" as a rule to avoid a basic conceptual error that leads to bugs and exploits.
Now that the shoe has dropped (ITW malware is killing Safe Mode by deleting the registry content that defines it), I can be a bit more public about this concept - that if something is to be "safe", it can't be defined by editable baseline data. Examples:
- Web browser "blank" page
- Safe Mode startup axis
But XP's Safe Mode is flawed in several ways that create opportunities for malware:
- Entries can be added to (or persisted into) its startup axis
- It uses a different user account, therefore different per-account settings
- It runs a screensaver, which can be re-defined
- File associations now allow per-user overlay
- The "Cmd Prompt Only" shell can be re-defined
- The whole thing depends on a re-definable registry subtree
http://didierstevens.wordpress.com/2006/06/26/restoring-safeboot/
I have a case like this at the moment, and will be trying a "case 4" approach as I described as a comment to that blog entry. If it works, and I can remember the exact method I use, I may write that up as a new blog entry here :-)
A less-obvious example of the "Safe should be boilerplate" rule is the option not to use a password. Normally that's done as a "blank password", rather than a true boilerplate absence of a password - and that becomes absurd when coupled with the usual "to set a new password, first enter the current password".
The trouble with the "Safe should be boilerplate" rule is that it precludes any fix-it-later patching. You have to make your boilerplate perfect, even if that means simplifying your code towards triviality in order to approach that perfection!
06 May 2006
Marketoid Barf-Fest, Part 2
I've really come to loathe hooray-for-us on-hold "infomercials", especially when they extol the virtues of the same product that I'm holding on the phone to discuss DoA warranty management for.
All too often they use cheery voice actors who haven't a clue what they are reading; a trade-only distributor gains zero credibility when they have "Silicon Pines" droids reading out vacuous content-free nonsense. Imagine; you build and fix PCs for a living, and you have to listen to something like "The new ZootBox One-Two-Five has Sixty-Four Em Bee of RAM and a fast Ay-Em-Dee Micro Processor to meet All Your Future Computing Needs".... be still, my heaving lunch!
25 March 2006
Radiation-Resistant Bugs
It's been said that cockroaches are resistant to radiation and might be the largest surviving animal following a nuclear onslaught. I've been wondering whether there are Darwinian factors that might have selected this, noting that cockroaches are both "old", and fairly simple creatures.
Being old, the cockroach may date from a long-past period during which naturally-occurring radiation bursts may have been a selection factor. For example, if water washes through a reef of fissionable material, it could slowly concentrate this material to a critical mass that would not explode like a nuclear bomb, but could cause a plume of radioactivity similar to a nuclear reactor mishap. Animals that could sense radiation and move away, and/or have better-shielded or more structurally-robust genetic material, might be selected in for survival in such environments.
More robust genetic material also means less mutability, which disfavors complexity and would tend to cause the organism to remain genetically unchanged over the ages, as may be the case with the cockroach.
Even if such circumstances did not occur on this planet, you could postulate a planet on which they did. A planet rich in fissionable materials would probably also be rich in heavy elements suitable for shielding, so a radiation-aware organism might not only flee the radiation it senses, but could also seek, create or incorporate appropriate shelter. You could also postulate organisms that might derive their energy needs from controlled exposure to the fission process.
Animals with radiation awareness could be useful things to have around :-)
19 February 2006
Marketoid Barf-Fest, Part 1
Can I really sell stuff by asking dumb-ass questions for which the answer is always "Yes"?
I suppose the idea is that any "Yes" makes you feel good, etc. So let's try a few...
Can I really be infected within minutes if I just re-install pre-SP2 Windows XP by booting the CD and following the prompts?
Can just re-installing Windows really fail to fix my problems, while making my data appear to disappear?
Well, readers; is it working for you? Did you feel the earth move, and are you rushing out to buy stuff? Me neither.
14 January 2006
Bad File System or Incompetent OS?
"Use NTFS instead of FAT32, it's a better file system", goes the knee-jerk. NTFS is a better file system, but not in the sense that every aspect of FAT32 has been improved upon; depending on how you use your PC and what infrastructure you have, FATxx may still be a better choice. All that is discussed here.
The assertion is often made that NTFS is "more robust" than FAT32, and that FAT32 "always has errors and gets corrupted" in XP. There are two apparent aspects to this; NTFS's transaction rollback capability, and inherent file system robustness. But there's a third, hidden factor as well.
Transaction Rollback
A blind spot is that the only thing expected to go wrong with file systems, is the interruption of sane write operations. All of the strategies and defaults in Scandisk and ChkDsk/AutoChk (and automated handling of "dirty" file system states) are based on this.
When sane file system writes are interrupted in FATxx, you are either left with a length mismatch between FAT chaining and directory entry (in which case the file data will be truncated) or a FAT chain that has no directory entry (in which case the file data may be recovered as a "lost cluster chain"). It's very rare that the FAT will be mismatched (the benign "mismatched FAT", and the only case where blind one-FAT-over-the-other is safe). After repair, you are left with a sane file system, and the data you were writing is flagged and logged as damaged (therefore repaired) and you know you should treat that data with suspicion.
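To make the two interrupted-write outcomes concrete, here is a toy model of the check a repair tool performs (simplified and purely illustrative; real FAT entries also carry end-of-chain and bad-cluster markers, and none of these names are real APIs):

```python
# Toy model of a FAT volume: fat[c] holds the next cluster in c's chain
# (0 = free, -1 = end-of-chain); a directory entry records a start
# cluster and a byte length. Illustrative only - not a real repair tool.

CLUSTER_SIZE = 4096

def chain_length(fat, start):
    """Walk a FAT chain from its start cluster and count its clusters."""
    count, cluster = 0, start
    while cluster != -1:
        count += 1
        cluster = fat[cluster]
    return count

def check_entry(fat, entry):
    """Compare the directory entry's length against the FAT chain length."""
    clusters = chain_length(fat, entry["start"])
    expected = -(-entry["size"] // CLUSTER_SIZE)  # ceiling division
    if clusters == expected:
        return "ok"
    return "length mismatch: file will be truncated on repair"

def lost_chains(fat, entries):
    """Find allocated clusters referenced by no directory entry at all."""
    referenced = set()
    for e in entries:
        c = e["start"]
        while c != -1:
            referenced.add(c)
            c = fat[c]
    # any allocated cluster no entry's chain touches is "lost"
    return [c for c, nxt in enumerate(fat) if nxt != 0 and c not in referenced]
```

The first mismatch is the truncation case; the clusters returned by the last function are what a repair tool would gather up as a recoverable "lost cluster chain".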
When sane file system writes are interrupted in NTFS, transaction rollback "undoes" the operation. This assures file system sanity without having to "repair" it (in essence, the repair is automated and hidden from you). It also means that all data that was being written is smoothly and seamlessly lost. The small print in the articles on Transaction Rollback makes it clear that only the metadata is preserved; "user data" (i.e. the actual content of the file) is not preserved.
Inherent Robustness
What happens when other things cause file system corruption, such as insane writes to disk structures, arbitrary sectors written to the wrong addresses, physically unrecoverable bad sectors, or malicious malware payloads a la Witty? That is the true test of file system robustness, and survivability pivots on four things; redundant information, documentation, OS accessibility, and data recovery tools.
FATxx redundancy includes the comparison of file data length as defined in directory entry vs. FAT cluster chaining, and the dual FATs to protect chaining information that cannot be deduced should this information be lost. Redundancy is required not only to guide repair, but to detect errors in the first place - each cluster address should appear only once within the FAT and collected directory entries, i.e. each cluster should be part of the chain of one file or the start of the data of one file, so it is easy to detect anomalies such as cross-links and lost cluster chains.
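That uniqueness invariant - each cluster claimed by at most one chain - is what makes cross-links mechanically detectable. A minimal sketch over a toy in-memory FAT (illustrative, not a real tool):

```python
# Toy FAT: fat[c] gives the next cluster in c's chain (-1 = end-of-chain,
# 0 = free). A cross-link exists when two files' chains converge on the
# same cluster - an anomaly a repair tool can detect from the FAT alone.

def find_cross_links(fat, start_clusters):
    """Return clusters claimed by more than one file's chain."""
    owner = {}            # cluster -> index of the first file claiming it
    cross_linked = set()
    for file_idx, start in enumerate(start_clusters):
        c = start
        while c != -1:
            if c in owner and owner[c] != file_idx:
                cross_linked.add(c)   # a second claimant: cross-link
                break                 # stop rather than walk a shared tail
            owner[c] = file_idx
            c = fat[c]
    return sorted(cross_linked)
```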
NTFS redundancy isn't quite as clear-cut, extending as it does to duplication of the first 5 records in the Master File Table (MFT). It's not clear what redundancy there is for anything else, nor are there tools that can harness this in a user-controlled way.
FATxx is a well-documented standard, and there are plenty of repair tools available for it. It can be read from a large number of OSs, many of which are safe for at-risk volumes, i.e. they will not initiate writes to the at-risk volume of their own accord. Many OSs will tolerate an utterly deranged FATxx volume simply because unless you initiate an action on that volume, the OS will simply ignore it. Such OSs can be used to safely platform your recovery tools, which include interactively-controllable file system repair tools such as Scandisk.
NTFS is undocumented at the raw bytes level because it is proprietary and subject to change. This is an unavoidable side-effect of deploying OS features and security down into the file system (essential if such security is to be effective), but it does make it hard for tools vendors. There is no interactive NTFS repair tool such as Scandisk, and what data recovery tools there are, are mainly of the "trust me, I'll do it for you" kind. There's no equivalent of Norton DiskEdit, i.e. a raw sector editor with an understanding of NTFS structure.
More to the point, accessibility is fragile with NTFS. Almost all OSs depend on NTFS.SYS to access NTFS, whether these be XP (including Safe Command Only), the bootable XP CD (including Recovery Console), Bart PE CDR, MS WinPE, Linux that uses the "capture" approach to shelling NTFS.SYS, or Sysinternals' "Pro" (writable) feeware NTFS drivers for DOS mode and Win9x GUI.
This came to light when a particular NTFS volume started crashing NTFS.SYS with STOP 0x24 errors in every context tested (I didn't test Linux or feeware DOS/Win9x drivers). For starters, that makes ChkDsk impossible to run, washing out MS's advice to "run ChkDsk /F" to fix the issue, possible causes of which are sanguinely described as including "too many files" and "too much file system fragmentation".
The only access I could acquire was BING (www.bootitng.com) to test the file system as a side-effect of imaging it off and resizing it (it passes with no errors), and two DOS mode tactics; the LFN-unaware ReadNTFS utility that allows files and subtrees to be copied off, one at a time, and full LFN access by loading first an LFN TSR, then the freeware (read-only) NTFS TSR. Unfortunately, XCopy doesn't see LFNs via the LFN TSR, and Odi's LFN Tools don't work through drivers such as the NTFS TSR, so files had to be copied one directory level at a time.
These tools are described and linked to from here.
FATxx concentrates all "raw" file system structure at the front of the disk, making it possible to backup and drop in variations of this structure while leaving file contents undisturbed. For example, if the FATs are botched, you can drop in alternate FATs (i.e. using different repair strategies) and copy off the data under each. It also means the state of the file system can be snapshotted in quite a small footprint.
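As a sketch of why that structural snapshot is small: the offsets of the reserved area and the FATs fall straight out of the published FAT32 BPB fields, so the fixed structures can be located and imaged with a few reads. Illustrative code, assuming a standard FAT32 boot sector (note that on FAT32 directories live in the data area, so a full structural snapshot would also need to walk those):

```python
import struct

def fat32_layout(boot_sector: bytes):
    """Parse the FAT32 BPB to locate the reserved area and the FATs.
    Field offsets follow the published FAT32 layout; the argument is a
    raw 512-byte boot sector read from the start of the volume."""
    bytes_per_sector, = struct.unpack_from("<H", boot_sector, 11)
    reserved_sectors, = struct.unpack_from("<H", boot_sector, 14)
    num_fats = boot_sector[16]
    sectors_per_fat, = struct.unpack_from("<I", boot_sector, 36)
    fat_bytes = sectors_per_fat * bytes_per_sector
    fat0_offset = reserved_sectors * bytes_per_sector
    return {
        "fat_size_bytes": fat_bytes,
        "fat_offsets": [fat0_offset + i * fat_bytes for i in range(num_fats)],
        # boot sector + reserved area + every FAT copy, in one contiguous
        # run at the front of the volume - the snapshot footprint
        "snapshot_bytes": fat0_offset + num_fats * fat_bytes,
    }
```

Dropping in an alternate FAT is then just writing a candidate FAT image back at one of those offsets and re-reading the data under it.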
In contrast, NTFS sprawls its file system structure all over the place, mixed in with the data space. This may remove the performance impact of "back to base" head travel, but it means the whole volume has to be raw-imaged off to preserve the file system state. This is one of several compelling arguments in favor of small volumes, if planning for survivability.
OS Competence
From reading the above, one wonders if NTFS really is more survivable or robust than FATxx. One also wonders why NTFS advocates are having such bad mileage with FATxx, given there's little inherent in the file system structural design to account for this. The answer may lie here.
We know XP is incompetent in managing FAT32 volumes over 32G in size, in that it is unable to format them. If you do trick XP into formatting a volume larger than 32G as FAT32, it fails in the dirtiest, most destructive way possible; it begins the format (thus irreversibly clobbering whatever was there before), grinds away for ages, and then dies with an error when it gets to 32G. This standard of coding is so bad as to look like a deliberate attempt to create the impression that FATxx is inherently "bad".
But try this on a FATxx volume; run ChkDsk on it from an XP command prompt and see how long it takes, then right-click the volume and go Properties, Tools and "check the file system for errors" and note how long that takes. Yep, the second process is magically quick; so quick, it may not even have time to recalculate free space (count all FAT entries of zero) and compare that to the free space value cached in the FAT32 boot record.
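The free-space check in question is cheap to express: recount the zero entries in the FAT and compare against the cached value. A toy sketch over an in-memory FAT (illustrative only; real FAT32 entries are 32-bit values with the top four bits reserved):

```python
def recount_free_clusters(fat_entries):
    """Count free clusters directly, as a thorough scan should."""
    # In FAT32 only the low 28 bits of each entry are significant;
    # a value of 0 there marks the cluster as free.
    return sum(1 for e in fat_entries if e & 0x0FFFFFFF == 0)

def free_space_consistent(fat_entries, cached_free_count):
    """Compare the recount against the cached free-space value
    (held in the FSInfo sector on a real FAT32 volume)."""
    return recount_free_clusters(fat_entries) == cached_free_count
```

A check that finishes faster than this single pass over the FAT can take plausibly hasn't done even this much.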
Now test what this implies; deliberately hand-craft errors in a FATxx file system, do the right-click "check for errors", note that it finds none, then get out to DOS mode and do a Scandisk and see what that finds. Riiight... perhaps the reason FATxx "always has errors" in XP is because XP's tools are too brain-dead to fix them?
My strategy has always been to build on FATxx rather than NTFS, and retain a Win9x DOS mode as an alternate boot via Boot.ini - so when I want to check and fix file system errors, I use DOS mode Scandisk, rather than XP's AutoChk/ChkDsk (I suppress AutoChk). Maybe that's why I'm not seeing the "FATxx always has errors" problem? Unfortunately, DOS mode and Scandisk can't be trusted > 137G, so there's one more reason to prefer small volumes.
02 January 2006
WMF Exposes Bad Design
Crisis of the day; an unpatched vulnerability that allows malformed .WMF files to run as raw code, i.e. the classic "insane code" scenario that can explode anywhere, any time.
See elsewhere for evolving details of the defect, workarounds, vulnerability detection tools and so on. DEP is mooted as a protection, but I am not certain that all exploits will trip DEP; in any case, DEP's only fully effective on XP SP2 systems with DEP-capable processors, and where other software issues haven't required it to be disabled.
Code defects can arise anywhere, any time, regardless of what the code is supposed to be doing by design. The hallmark of the pure code defect is that the results bear no relation to design intentions, and can thus be considered insane.
So it follows that any part of the OS may need to be amputated (or bulkheaded off) at any time.
When the problem is an inessential associated file type, this should be as easy as redirecting that file type away from the defective engine that processes it - and this is where bad OS design comes to light.
File associations are not simply there to "make things work"; they are also the point at which the user exerts control. The problem is, the OS often blurs file association linkages based on information hidden from the user, such as header information embedded in the file's data. If anything, the trend is getting worse, with the OS sniffing file content and changing its management according to what it finds hidden there, even if this differs from the information on which the user judged the risk of "opening" the file.
This is unsafe design. Surely by 2005, it should be obvious to mistrust content that misrepresents its nature? Even when the risk significance is less obvious than a .PIF containing raw .EXE code, the fact that any file type can suddenly become high-risk due to an exploitable code defect should imply that all file types should be "opened" only by the code appropriate for the file type claimed.
As it is, simply changing (or killing) the association for .WMF files may be ineffective, because if the OS is presented with a file of different file name extension and it recognises the content as WMF, it will pass it to the (defective) WMF handler.
The lesson here goes beyond fixing this particular defect, resolving to code better in future (again), and always swallowing patches as soon as the vendor releases them (a sore point in this case, as exploitation precedes patch). We should also ensure that file content is processed only as expected by the file name extension; any variance between this and the content should be considered hostile, and blocked as such.
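That last rule - treat any variance between claimed and actual type as hostile - is simple to enforce at the point where a file is admitted. A sketch using the commonly documented WMF signatures (the helper names are illustrative, not from any real gateway product):

```python
import os

# Commonly documented signatures: placeable WMF files begin with the
# bytes 9A C6 CD D7; standard disk WMF files begin 01 00 09 00.
WMF_SIGNATURES = (b"\x9a\xc6\xcd\xd7", b"\x01\x00\x09\x00")

def looks_like_wmf(header: bytes) -> bool:
    """Sniff the first bytes the way the OS content-sniffer would."""
    return any(header.startswith(sig) for sig in WMF_SIGNATURES)

def verdict(filename: str, header: bytes) -> str:
    """Block files whose content claims WMF under a non-WMF extension."""
    ext = os.path.splitext(filename)[1].lower()
    if looks_like_wmf(header) and ext != ".wmf":
        return "block: content is WMF but extension says otherwise"
    return "allow"
```

The point is that the sniffing happens here only to reject a mismatch, never to silently route the file to a "better" handler than its name claims.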
14 November 2005
So Long, Sony
Sony destroyed the last advantages legitimate content distribution had left.
Legitimate content usually costs more, and is often more restrictive about how it can be used; DRM is, after all, an artificial attempt to destroy the natural advantages of the digital age.
Plus it's more difficult to get. The media companies could have beaten peer-to-peer networks at their own game, by saying: "Here's the one place in the world where you can get direct access to every work by our artists that has ever been released, directly from a trusted, efficient server". But no, it's still "we own the rights to everything, but most of what you want we will not supply because we have decided to delete that from our catalog as not being financially worth our while".
So the only positive things left to say are: "Get your material from us, so you know you won't get attacked by viruses", or "Buy it from us, because it's the right thing to do".
Yet no virus writer has been able to escalate risk as severely as Sony, where something that is not even supposed to be a "computer disk" (thus representing the lowest expected risk possible) actually plants an open-for-exploit rootkit on the PC (which is about as high a risk as possible) - and then have the sheer arrogance to deliberately build that into mass-manufactured goods that folks pay for in good faith.
We aren't talking about some bad guy forging product, or intercepting and tampering with it, e.g. by injecting poison into over-the-counter headache pills and putting them back on the shelf. This is actually built into the product at the factory. We've never seen that level of evil before, when it comes to maliciously exploiting users via Social Engineering.
And Sony has no remorse; the response has been "What's the big deal? Most consumers wouldn't know what a 'rootkit' is, anyway". Well, folks who bought poisoned headache pills may not understand the biochemistry involved, but they do feel the pain.
So - if you want to listen to music without risking exposure to malware, buying a legitimate CD may now be the last thing you want to do. And when it comes to doing the "right thing", does it do to reward the most evil exploitation of trust consumerland has seen yet?
"Sony Is Not An IT Company"
I've watched some IT folks attempt to defend what Sony has done in various ways, and one of these is what I call the "forgive them, for they know not what they do" argument. But there's no good result down that road - either Sony is indeed ignorant of IT principles (in which case, should they be considered fit to stealth code into "data" products?), or they know exactly what they are doing, and have thus clearly demonstrated they are unfit to be trusted.
Sony releases all manner of things under their brand, including IT products. If they are happy to leverage the brand association, then they can't disassociate themselves from the fallout.
Would you buy or resell Sony DVD writers and other optical drives? Would you trust the software bundled with Sony digital cameras, media playing devices, etc.?
Trusted Computing can fail at any one of several layers, and it's no good worrying about the deepest of these (program code, hardware, etc.) if the topmost layer is blown away. Trusted Computing is going to be designed and built by entities who have proven they cannot be trusted, out of materials (code, even hardware) that are notoriously prone to insane behavior.
When audio CDs drop rootkits on PCs, "documents" auto-run macros, and JPEG image files run as raw code through some deep code bug, you don't have to look far to understand why we are scratching at the door to escape whenever we hear the term "Trusted Computing".
Moral: Never code anything bigger than your own head.
Calling All Activists
This Sony rootkit issue is a big storm in our little teacup of Information Technology, but every client I've spoken to, hasn't even heard about it - and this includes several folks who are normally quite socio-politically aware.
All too often, the activists leave the geeky stuff to us - and as techno-geeks, we leave the politics to the activists, politicians and lawyers. Then when they do try to legislate or regulate our technological world, we smugly point out how poorly they understand this world, and thus imply they are unfit to do so. Result: The engine's running full speed, but there's no hand on the rudder - is it any wonder the ship gets hijacked?
All over the world, societies and nations have balanced the need for creditors to recover debts, with the rights of the indebted. We generally do not allow creditors to send their goons to smash into your house and search it for what they accuse the debtor of having appropriated.
IT corporations are used to writing their own laws (EULAs and warranties basically exist to trump common law principles), and what Sony has done is a natural extension of this - though the sheer scale and arrogance beggars belief. They haven't released a virus that infects existing content, but built it into the product, and this malware runs roughshod over whatever laws might apply wherever that content goes.
And no-one has questioned their right to do this, instead muttering only about the methods involved. But step back and look at the big picture; Sony is "defending" a US$20 transaction, at the cost of your computer installation that's worth... what? The value's potentially unbounded; the PC may be worth US$500, but what you do on it could be worth far more; Sony doesn't care about the details, and would expect to be absolved of responsibility for any consequent damage.
The rights you save, may be your own.
Legitimate content usually costs more, and is often more restrictive about how it can be used; DRM is, after all, an artificial attempt to destroy the natural advantages of the digital age.
Plus it's more difficult to get. The media companies could have beaten peer-to-peer networks at their own game, by saying: "Here's the one place in the world where you can get direct access to every work by our artists that has ever been released, directly from a trusted, efficient server". But no, it's still "we own the rights to everything, but most of what you want we will not supply because we have decided to delete that from out catalog as not being financially worth our while".
So the only positive things left to say are: "Get your material from us, so you know you won't get attacked by viruses", or "Buy it from us, because it's the right thing to do".
Yet no virus writer has been able to escalate risk as severely as Sony, where something that is not even supposed to be a "computer disk" (thus representing the lowest expected risk possible) actually plants an open-for-exploit rootkit on the PC (which is about as high a risk as possible) - and then have the sheer arrogance to deliberately build that into mass-manufactured goods that folks pay for in good faith.
We aren't talking about some bad guy forging product, or intercepting and tampering with it, e.g. by injecting poison into over-the-counter headache pills and putting them back on the shelf. This is actually built into the product at the factory. We've never seen that level of evil before, when it comes to maliciously exploiting users via Social Engineering.
And Sony has no remorse; the response has been "What's the big deal? Most consumers wouldn't know what a 'rootkit' is, anyway". Well, folks who bought poisoned headache pills may not understand the biochemistry involved, but they do feel the pain.
So - if you want to listen to music without risking exposure to malware, buying a legitimate CD may now be the last thing you want to do. And when it comes to doing the "right thing", does it do to reward the most evil exploitation of trust consumerland has seen yet?
"Sony Is Not An IT Company"
I've watched some IT folks attempt to defend what Sony has done in various ways, and one of these is what I call the "forgive them, for they know not what they do" argument. But there are no good results down that road - either Sony is indeed ignorant of IT principles (in which case, should they be considered fit to stealth code into "data" products?), or they know exactly what they are doing, and have thus clearly demonstrated they are unfit to be trusted.
Sony releases all manner of things under their brand, including IT products. If they are happy to leverage the brand association, then they can't disassociate themselves from the fallout.
Would you buy or resell Sony DVD writers and other optical drives? Would you trust the software bundled with Sony digital cameras, media playing devices, etc.?
Trusted Computing can fail at any one of several layers, and it's no good worrying about the deepest of these (program code, hardware, etc.) if the topmost layer is blown away. Trusted Computing is going to be designed and built by entities who have proven they cannot be trusted, out of materials (code, even hardware) that are notoriously prone to insane behavior.
When audio CDs drop rootkits on PCs, "documents" auto-run macros, and JPEG image files run as raw code through some deep code bug, you don't have to look far to understand why we are scratching at the door to escape whenever we hear the term "Trusted Computing".
Moral: Never code anything bigger than your own head.
Calling All Activists
This Sony rootkit issue is a big storm in our little teacup of Information Technology, but not one client I've spoken to has even heard about it - and this includes several folks who are normally quite socio-politically aware.
All too often, the activists leave the geeky stuff to us - and as techno-geeks, we leave the politics to the activists, politicians and lawyers. Then when they do try to legislate or regulate our technological world, we smugly point out how poorly they understand this world, and thus imply they are unfit to do so. Result: The engine's running full speed, but there's no hand on the rudder - is it any wonder the ship gets hijacked?
All over the world, societies and nations have balanced the need for creditors to recover debts with the rights of the indebted. We generally do not allow creditors to send goons to smash into a debtor's house and search it for whatever they accuse the debtor of having appropriated.
IT corporations are used to writing their own laws (EULAs and warranties basically exist to trump common law principles), and what Sony has done is a natural extension of this - though the sheer scale and arrogance beggars belief. They haven't released a virus that infects existing content, but built it into the product, and this malware runs roughshod over whatever laws might apply wherever that content goes.