28 August 2007

Design vs. Code Errors


When Microsoft finds a code error, it generally fixes it fairly promptly.

In contrast, design errors generally remain unfixed for several generations of products; sometimes years, sometimes decades.  Typically even when addressed, the original design will be defended as "not an error" or "works as designed".

Old ideas that don't fit

As an example of bad design that has persisted from the original Windows 95 through to Vista, consider the inappropriateness of Format on the top layer of the Drive context menu.

The logic is old, and still true; hard drives are disks, and formatting is something you do to disks, therefore etc. 

But around this unchanged truism, other things have changed. 

We now have more things we can do to disks, many of which should be done more often than they are; backup, check for errors, defrag.  Because these are "new" (as at Windows 95), they are tucked several clicks deeper in the UI, e.g. Properties, Tools.

Also, the word "Format" has come to mean different things to users.  In 1985, users would routinely buy blank diskettes that had to be formatted before use, and so the immediate meaning of the word "format" was "to make a disk empty by destroying all existing contents".  In 2007, users store things on USB sticks or optical disks, none of which have to be formatted (unless you use packet writing on RW disks) and the immediate meaning of the word "format" is "to make pretty", as in "auto-format this Word document" and "richly-formatted text".

The goal of software is to abstract the system towards the user's understanding of what they want to do.  In keeping with this, "hard drives" have taken on a different conceptual meaning, away from the system reality of disks, towards an abstracted notion of "where things go".  In particular, modern Windows tends to gloss over paths, directories etc. with conceptual locations such as "the desktop", "documents" etc. and the use of Search to find things vs. formal file system navigation across disks and directories.

New things that break old truths

When a risk doesn't arise due to hard scopes, one doesn't have to consider it.  For example, if you build a house with a mountain as your back wall, you don't have to think about burglar-proofing the back wall.  Similarly, if your LAN is cable-only in a physically-secured building, you have fewer worries about intrusion than if you'd added WiFi to the mix.

When a risk doesn't arise because a previous team anticipated and definitively fixed it, future teams may be oblivious to it as a risk.  As Windows is decades old, and few programmers stay at the rock face for decades without being promoted to management or leaving, there's a real risk that today's teams will act as "new brooms", sweeping the platform into old risks.

In many of these cases, the risks were immediately obvious to me:

  • \Autorun.inf processing of hard drive volumes
  • Auto-running macros in "documents"
  • Active content in web pages

In some cases, I missed the risk until the first exploit:

  • Unfamiliar .ext and scripting languages

But it generally takes none to one exploit example for me to get the message, and take steps to wall out that risk.  Alas, Microsoft keeps digging for generations:

Auto-binding File and Print Sharing to DUN in Win9x, the way WiFi has been rolled out, dropping "network client" NT into consumerland as XP, hidden admin shares, exposing LSASS and RPC without firewall protection, encouraging path-agnostic file selection via Search... all of these are examples of changes that increase exposure to old risks, and/or new brooms that undermine definitive solutions as delivered by previous teams. 

For example, the folks who designed DOS were careful to ensure that the type of file would always be immediately visible via the file name extension, limiting code types to .COM, .EXE and .BAT, and they were careful to ensure every file had a unique filespec, so that you'd not "open" the wrong one.

These measures basically solved most malware file-spoofing problems, but subsequent teams hide file name extensions, apply poor file type discipline, dumb "run" vs. "view"/"edit" down to the meaningless "open", act on hidden file type info without checking this matches what the user saw, and encourage searching for files that may pull up the wrong filespec.
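
To make the spoofing point concrete, here's a minimal sketch (Python; the extension lists are illustrative examples only, not a complete policy) of the kind of visible-name check that gets lost once extensions are hidden and "open" replaces "run":

    # Illustrative only: flag file names that would look harmless with
    # "Hide extensions for known file types" switched on, but would actually
    # run as code when "opened".  The extension lists are example values.
    import os

    EXECUTABLE_EXTS = {".exe", ".com", ".bat", ".scr", ".pif", ".cmd", ".vbs", ".js"}
    DECOY_EXTS = {".txt", ".jpg", ".doc", ".pdf", ".mp3"}

    def looks_spoofed(filename):
        """True if the visible name (real extension hidden) suggests a
        document, but the file would actually execute when 'opened'."""
        base, real_ext = os.path.splitext(filename)
        if real_ext.lower() not in EXECUTABLE_EXTS:
            return False
        # With extensions hidden, the user sees only 'base'; check whether
        # that ends in a decoy "document" extension.
        _, visible_ext = os.path.splitext(base)
        return visible_ext.lower() in DECOY_EXTS

    if __name__ == "__main__":
        for name in ["readme.txt", "holiday.jpg.exe", "setup.exe", "invoice.pdf.scr"]:
            print(name, "->", "suspicious" if looks_spoofed(name) else "ok")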

Avoiding bad design

How would I prevent bad designs reaching the market, and thus creating an installed vendor/user base that creates problems when the design is changed?

  • Keep core safety axioms in mind
  • Maintain old/new team continuity
  • Reassess logic of existing practices
  • Don't force pro-IT mindset on consumers
  • Assume bad intent for any external material
  • Make no assumptions of vendor trustworthiness

The classic safe hex rules...

  • Nothing runs on this system unless I choose to run it
  • I will assess and decide on all content before running it

...seem old and restrictive, but breaking these underlies most malware exploits.

27 August 2007

The Word Is Not The World

We can't use language to describe "the all".

Stated that baldly, this looks rather Zen, doesn't it?

The point being that language goes about defining particulars, i.e. "is this, is not that", and thus chips its way away from "the all".

In number theory terms, it's the difference between infinity and very large; of a limit, and test values that tend towards that limit.

The Waking Hour

17 August 2007

Norton Life Sentence


This post is about Packard Bell, Norton Antivirus, Norton Internet Security and OEM bundling.  For many readers, those four are all "yuk" items already...

Formal maintenance

Every "WTF" (i.e. ill-defined complaints, or just in an unknown state) PC that comes in, gets the formal treatment; 24 hours of MemTest86 with substituted boot CDR to detect spontaneous reboots, Bart boot HD Tune, and Bart booted formal malware scans.

This laptop cannot perform the RAM test because it keeps switching itself off, presumably because it is "idle" (no keyboard, HD, CD, LAN, mouse etc. interrupts).  CMOS Setup shows no facility to manage such behavior, which is disabled in Windows already.  Strike 1, Packard Bell.

The hard drive and file system are fine, and absolutely no malware at all was found on multiple formal av and anti-"spyware" scans, nor in four anti-"spyware" scans done in Safe Cmd.  Spybot did note that the three Windows Security Center alerts were overridden, and this was later confirmed to be a Norton effect.

Specs and software

This is a fairly high-spec laptop; Mobile Celeron at 1.5GHz, 1G RAM (!), XP Pro SP2, but puny 45G hard drive with 4G stolen for the OEM's "special backup" material.  The date stamp on the Windows base directory is 17 March 2006, which matches that of the SVI and "Program Files" directories too.

It has Norton Internet Security 7.0.6.17 OEM(90) and Norton Antivirus 2004 10.0.1.13 OEM(90).  That's from the Help in these products; the same Help describes using Add/Remove to uninstall them. 

Attempted uninstall

Both programs are definitely present and running; in fact, one gets nags every few minutes about antivirus being out of date, and firewall being disabled.  A check confirms both to be true; neither Norton nor XP firewall is enabled, and Norton's subscription has expired.

However, Add/Remove shows no Norton entries other than Live Update.  In fact, the expected slew of OEM bundleware is not there.  A lethally-ancient Sun Java JRE 1.4.xx was found and uninstalled.

Start Menu shows an "Internet and security" flyout with icons for Norton Antivirus and Internet Security.  No icons to uninstall these products from there.

What I did find in a "Packard Bell Support" Start Menu flyout, was a Smart Restore center, from which bundleware could be highlighted and installed or uninstalled.  There was an alert to disable Norton's protection before doing this (more on that later), but either way, clicking Uninstall did nothing (no visible UI effect) and clicking OK after that, appeared to install Norton 2004 again.

To be continued...

15 August 2007

Duplicate User Accounts


On Sat, 11 Aug 2007 20:58:01 -0700, SteveS

>My laptop is from Fujitsu and it came with OmniPass software (the
>fingerprint scanner software to log in).  I saw other postings elsewhere
>about it duplicating users on the login screen.  I uninstalled the software,
>rebooted - problem fixed (no more duplicate users).  I reinstalled the
>software, rebooted, the duplicate users did not show up.  I think it stems
>from the upgrade I did from Home Premium to Ultimate and had that software
>installed. 

Yes; any "repair install" of XP will prompt you to create new user accounts even though you already have user accounts, and registry settings that clearly indicate these accounts are in use.

If you then enter the same name(s) as existing accounts, then new accounts are created with the same name.

Vista may avoid this conundrum, but falls into others.


Behind the scenes, the real names are not the same, because the real names are something quite different to what Windows shows you.  Messy, but key to preserving continuity while allowing you to change the account name after it's created.

Specifically, you encounter not one, nor two, but three name sets:
  - the "real" unique identifier, of the form S-n-n-nn-nnn...
  - the name of the account base folder in Users or D&S
  - the name as seen at logon and when managing users

In the case of account duplication in XP, you will have:
  - unique and unrelated S-n-n-nn-nnnn... identifiers
  - old Name and new Name.PCName account folders
  - the same name at logon and account management

The risks of deleting the wrong material should be obvious.
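
If you need to confirm which S-n-n-nn... identifier maps to which account folder before cleaning anything up, here's a read-only sketch (modern Python on Windows, reading the standard ProfileList key; it changes nothing):

    # List the "real" S-n-n-nn... identifiers alongside the profile folders
    # they map to, via the standard ProfileList registry key.  Read-only.
    import winreg

    PROFILE_LIST = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PROFILE_LIST) as key:
        i = 0
        while True:
            try:
                sid = winreg.EnumKey(key, i)      # e.g. S-1-5-21-...
            except OSError:
                break
            with winreg.OpenKey(key, sid) as sub:
                try:
                    path, _ = winreg.QueryValueEx(sub, "ProfileImagePath")
                except OSError:
                    path = "(no profile path)"
            print(sid, "->", path)
            i += 1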

Public Conversations

Malware: Avoid, Clean, or Rebuild?


On Sun, 12 Aug 2007 09:58:03 -0700, MrSlartybartfast

>Yes, creating an image of a hard drive which has malware would include the
>malware in the image.  When copying this image back to the hard drive, the
>malware would also be copied back resulting in net gain of zero.

This is why "just backup!" (as glibly stated) is as useless as "just don't get viruses!" or "if you get infected, clean the virus!" etc.

All of these approaches work, but have complexity within them that makes for YMMV results.  The complexity is similar across all three contexts; how one scopes out the bad guys.  The mechanics of meeting that inescapable challenge vary between the three "solutions".

>When I reinstall Windows, I reinstall off the original DVD which has
>no malware, unless you call Windows itself malware :)

This is using time as the great X-axis, i.e. the OS code base is as old as possible, therefore excludes the malware.  And so, the PC is known to be clean.

But it also lacks every code patch needed to keep it that way, in the face of direct exploits a la Lovesan or Sasser etc. and to patch those, you'd have to expose this unpatched PC to the Internet.

It's also bereft of any applications and data.  Presumably one can do the same with applications and drivers as with the OS; install known-good baseline code from CDs and then patch these online, or re-download apps and drivers from the 'net.

There's also no data, and another crunch comes here, because you probably don't want a data set that's certain to be too old to be infected; you want your most recent backup, which is the one most likely to be malware-tainted.  How to scope data from malware?

Even though MS pushes "just" wipe and rebuild as the malware panacea, they undermine it at these points of failure:
  - they generally don't ship replacement code on CDs or DVDs
  - they don't attempt to separate data, code and incoming material

The first has improved, what with XP SP2 being released as a CD, and with XP SP2 defaulting to firewall on.  

There's little or no progress on the second, though; still no clearly visible distinction between data and code, still no type discipline so malware can sprawl across file types and spoof the user and OS into trusting these, incoming material is still hidden in mail stores and mixed with "documents" etc. 

In Vista, just what is backed up and what is not is even more opaque, as there's little or no scoping by location at all.

>If the malware is on drive D:\ then it possibly could be reactivated on to
>drive C:\.  You normally need to access the files on D:\ to reactivate the
>malware.

For values of "you" that include the OS as a player.  Even with a wipe-and-rebuild that ensures no registry pointers to code on D:, there can still be code autorun from D: via Desktop.ini, \Autorun.inf, or the exploitation of any internal surfaces.

Such surfaces may present themselves to the material:
  - when you do nothing at all, e.g. indexers, thumbnailers etc.
  - when you "list" files in "folders"
  - when a file name is displayed
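
As a rough illustration of the \Autorun.inf exposure just mentioned, here's a read-only sketch (Python) that reports what each drive root's Autorun.inf would ask the shell to run; it probes A: to Z: and changes nothing:

    # Look for \Autorun.inf in each drive root and report its open/shellexecute
    # entry, if any.  Read-only sketch; adjust as needed.
    import configparser
    import os
    import string

    def autorun_command(drive_root):
        inf = os.path.join(drive_root, "Autorun.inf")
        if not os.path.isfile(inf):
            return None
        cp = configparser.RawConfigParser()
        try:
            cp.read(inf)
        except (configparser.Error, UnicodeDecodeError):
            return "(Autorun.inf present but unparseable)"
        section = next((s for s in cp.sections() if s.lower() == "autorun"), None)
        if section is None:
            return "(Autorun.inf present, no [autorun] section)"
        for key in ("open", "shellexecute"):
            if cp.has_option(section, key):
                return "%s=%s" % (key, cp.get(section, key))
        return "(Autorun.inf present, no open/shellexecute entry)"

    for letter in string.ascii_uppercase:
        root = letter + ":\\"
        if os.path.exists(root):
            result = autorun_command(root)
            if result:
                print(root, result)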

>No antivirus is perfect either, antivirus programs can often miss finding
>some malware.  I tend to find antivirus programs clunky and annoying and
>prefer not to use them.

I use them, as I think most users do.  If you "don't need" an av, then clearly you have solved the "don't get viruses" problem, and the contexts of "clean the virus" and "rebuild and restore data" don't arise.  If they do arise, you were wrong in thinking "don't get viruses" was solved, and maybe you should rethink "I don't need an av" (while I do agree that av will miss things).

Your nice freshly-built PC has no av, or an av installed from CD that has an update status far worse than whatever was in effect when you were infected.  To update the av, you have to take this clean, unpatched, un-protected-by-av system online...

>On my D:\ I compress my files individually which makes it hard for malware
>to emerge. 

That helps.  It also helps if av can traverse this compression for the on-demand scans you'd want to do between rebuilding C: and installing and updating av, and doing anything on D: or restoring "data".

>It is a painful process and takes a few hours so I do not do this very often.

I should hope not; it's "last resort".  If you have no confidence in the ability to detect or avoid malware, do you do this just when convenient, or whenever you "think you might be infected", or do you do it every X days so attackers have "only" X days in which they can harvest whatever they can grab off your PC?

>I  do find this much easier than trying to live with an antivirus
>program installed.  My choice is not for everyone

It might have been a best-fit in the DOS era, when "don't get viruses" was as easy as "boot C: before A: and don't run .EXE, .COM and .BAT files".  By now, a single resident av poses little or no system impact, whereas the wipe-and-rebuild process is a PITA.

Frankly, doing a wipe-and-rebuild every now and then on a PC that's probably clean anyway, will increase the risks of infection.

Do the maths; you either get infected so often that the risks of falling back to unpatched code hardly makes things worse, in which case whatever you (blindly) do is equally useless, or your approach works so well that falling back to unpatched code is your single biggest risk of infection, and to improve things, you should stop doing that.  If you have no ability to tell whether you are or have ever been infected, you can't distinguish between these states.

>as I said before I have no valuable information stored on
>my PC, I do not own a credit card and do not use internet
>banking.  If I have malware then I can live with it.

Most of us want better results than that, and generally attain them.

Why are we reading this advice again?

>The AUMHA forum you linked to as a recommendation for Nanoscan and Totalscan
>does nothing for me, it is hardly a review.  Panda Software is well known, so
>this is not one of the fake virus scans which is on the web.  Out of
>curiosity I started to run it anyway, I did not continue since I do not yet
>fully understand the software and am not prepared to install the files on my
>PC.  You may use this if you wish but it is not for me.

I agree with you there, especially if you suspect the PC is infected.  How do you know the site you reached, is not a malware look-alike that resident malware has spoofed you to?  Is it really a good idea to...
  - disable resident av
  - run Internet Explorer in admin mode so as to drop protection
  - say "yes" to all ActiveX etc. prompts
  - allow the site to drop and run code
  - stay online while this code "scans" all your files
...as the advice at such sites generally suggests?

>The bots which harvest email addresses off the internet are just that, bots.
> They scour the entire internet, not just microsoft newsgroups.  To be safe,
>never use your real name, never give your address, phone number or contact
>details, create temporary email accounts to use to sign up to forums and
>newsgroups,

Bots are unbounded, because:
  - they can update themselves
  - they facilitate unbounded interaction from external entities

Those external entities may be other bots or humans.  In essence, an active bot dissolves confidence in the distinction between "this system" and "the Internet" (or more accurately, "the infosphere", as local attacks via WiFi may also be facilitated).

Public Conversations

14 August 2007

New User Account Duhfaults

From...

http://www.spywarepoint.com/forums/t26963-p7-microsoft-zero-day-security-holes-being-exploited.html

On Thu, 28 Sep 2006 21:24:32 -0600, Dan wrote:
>cquirke (MVP Windows shell/user) wrote:


>> Defense in depth means planning for how you get your system back; you
>> don't just faint in shock and horror that you're owned, and destroy
>> the whole system as the only way to kill the invader.


>> It's absolutely pathetic to have to tell posters "well, maybe you have
>> 'difficult' (i.e., compitently-written) malware; there's nothing you
>> can do, 'just' wipe and re-install" because our toolkit is bare.


>The school computers (XP Pro. ones -- the school also has 98SE
>computers) where I work were all configured by someone who did
>not know what they were doing. They are have the remote assistance
>boxes checked and that is like saying to everyone "come on in to this
>machine and welcome to the party" This setting is just asking for
>trouble and yet the person or people who originally set up these
>machines configured them in this manner.


All your setup dudes did wrong was to install the OS while leaving MS duhfaults in place. By duhfault, XP will:
- full-share everything on all HDs to networks (Pro, non-null pwds)
- perform no "strength tests" on account passwords (see above)
- disallow Recovery Console from accessing HDs other than C:
- disallow Recovery Console from copying files off C:
- wave numerous services e.g. RPC, LSASS at the Internet
- do so with no firewall protection (fixed in SP2)
- allow software to disable firewall
- automatically restart on all system errors, even during boot
- automatically restart on RPC service failures
- hide files, file name extensions and full directory paths
- always apply the above lethal defaults in Safe Mode
- facilitate multiple integration points into Safe Mode
- allow dangerous file types (.EXE, etc.) to set their own icons
- allow hidden content to override visible file type cues
- dump incoming messenger attachments in your data set
- dump IE downloads in your data set
- autorun code on CDs, DVDs, USB storage and HD volumes
- allow Remote Desktop and Remote Assistance through firewall
- allow unsecured WiFi
- automatically join previously-accepted WiFi networks
- waste huge space on per-user basis for IE cache
- duplicate most of the above on a per-account basis
- provide no way to override defaults in new account prototype

Every time one "just" reinstalls Windows (especially, but not always only, if one formats and starts over), many or all of the above settings will fall back to default again. Couple that with a loss of patches, and you can see why folks who "just" format and re-install, end up repeating this process on a regular basis.

Also, every time a new user account is created, all per-account settings start off with MS defaults and you have to re-apply your settings all over again. If you limit the account rights, as we are urged to do, then often these settings slip back to MS defaults and remain there - so I avoid multiple and limited user accounts altogether, and prefer to impose my own safety settings.
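
For what it's worth, a few of the defaults listed above can be checked without changing anything; a read-only sketch (Python, using stock XP registry paths; a missing value simply means the OS default applies):

    # Read a few of the settings behind the defaults listed above.  Read-only.
    import winreg

    CHECKS = [
        (winreg.HKEY_CURRENT_USER,
         r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced",
         "HideFileExt", "1 = file name extensions hidden"),
        (winreg.HKEY_LOCAL_MACHINE,
         r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer",
         "NoDriveTypeAutoRun", "bitmask of drive types with autorun disabled"),
        (winreg.HKEY_LOCAL_MACHINE,
         r"SYSTEM\CurrentControlSet\Services\lanmanserver\parameters",
         "AutoShareWks", "0 = hidden admin shares (C$ etc.) disabled"),
    ]

    for hive, path, name, meaning in CHECKS:
        try:
            with winreg.OpenKey(hive, path) as key:
                value, _ = winreg.QueryValueEx(key, name)
        except OSError:
            value = "(not set - OS default applies)"
        print("%s\\%s = %s   # %s" % (path, name, value, meaning))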

>-- Risk Management is the clue that asks:
>"Why do I keep open buckets of petrol next to all the
>ashtrays in the lounge, when I don't even have a car?"
>----------------------- ------ ---- --- -- - - - -

Public Conversations

Free Users Need Control!


From...

http://www.spywarepoint.com/forums/t26963-p7-microsoft-zero-day-security-holes-being-exploited.html

On Tue, 26 Sep 2006 07:46:22 -0400, "karl levinson, mvp"

>All operating systems do that. They are designed to launch code at boot
>time by reading registry values, text files, etc. Because those registry
>values are protected from unauthorized access by permissions, someone would
>have to already own your system to modify those values, wouldn't they?


Sure, but the wrong entities come to own systems all the time. Defense in depth means planning for how you get your system back; you don't just faint in shock and horror that you're owned, and destroy the whole system as the only way to kill the invader.

It's tougher for pro-IT, because they've long been tempted into breaking the rule about never letting anything trump the user at the keyboard. By now, they need remote access and admin, as well as automation that can be slid past the user who is not supposed to have the power to block it, in terms of the business structure.

But the rest of us don't have to be crippled by pro-IT's addiction to central and remote administration, any more than a peacetime urban motorist needs an 88mm cannon in a roof-top turret. We need to be empowered to physically get into our systems, and identify and rip out every automated or remotely-intruded PoS that's got into the system.

It's absolutely pathetic to have to tell posters "well, maybe you have 'difficult' (i.e., competently-written) malware; there's nothing you can do, 'just' wipe and re-install" because our toolkit is bare.

Public Conversations

On User Rights, Safe Mode etc.

Edited for spelling; from...

http://www.spywarepoint.com/forums/t26963-p8-microsoft-zero-day-security-holes-being-exploited.html

On Fri, 29 Sep 2006 23:17:02 -0400, "Karl Levinson, mvp"
>"cquirke (MVP Windows shell/user)" wrote in


>>>All operating systems do that. They are designed to launch code at boot
>>>time by reading registry values, text files, etc. Because those registry
>>>values are protected from unauthorized access by permissions, someone
>>>would have to already own your system to modify those values, wouldn't they?


The weakness here is that anything that runs during the user's session is deemed to have been run with the user's intent, and gets the same rights as the user. This is an inappropriate assumption when there are so many by-design opportunities for code to run automatically, whether the user intended to do so or not.

>> Sure, but the wrong entities come to own systems all the time.


>My point is that this one example here doesn't seem to be a vulnerability if
>it requires another vulnerability in order to use it.


Many vulnerabilities fall into that category, often because the extra requirement was originally seen as sufficient mitigation.  Vulnerabilities don't have to facilitate primary entry to be significant; they may escalate access after entry, or allow the active malware state to persist across Windows sessions, etc.

>This isn't a case of combining two vulnerabilities to compromise a
>system; it's a case of one unnamed vulnerability being used to
>compromise a system, and then the attacker performs some other
>action, specifically changing registry values.


>If this is a vulnerability, then the ability of Administrators to create new
>user accounts, change passwords etc. would also be a vulnerability.


OK, now I'm with you, and I agree with you up to a point. I dunno where the earlier poster got the notion that Winlogon was there to act as his "ace in the hole" for controlling malware, as was implied.

>> Defense in depth means planning for how you get your system back; you
>> don't just faint in shock and horror that you're owned, and destroy
>> the whole system as the only way to kill the invader.


>That's a different issue than the one we were discussing. The statement
>was, winlogon using registry values to execute code at boot time is a
>vulnerability. I'm arguing that it is not.


I agree with you that it is not - the problem is the difficulty that the user faces when trying to regain control over malware that is using Winlogon and similar integration points.

The safety defect is that:
- these integration points are also effective in Safe Mode
- there is no maintenance OS from which they can be managed

We're told we don't need a HD-independent mOS because we have Safe Mode, ignoring the possibility that Safe Mode's core code may itself be infected. Playing along with that assertion, we'd expect Safe Mode to disable any 3rd-party integration, and to provide a UI through which these integration points can be managed.

But this is not the case - the safety defect is that once software is permitted to run on the system, the user lacks the tools to regain control from that software. Couple that with the Windows propensity to auto-run material either by design or via defects, and you have one of the most common PC management crises around.

>Besides, it's a relatively accepted truism that once an attacker has root,
>system or administrator privileges on any OS, it is fairly futile to try to
>restrict what actions s/he can perform. Anything a good administrator can
>do, a bad administrator can undo.


That's a safety flaw right there.

You're prolly thinking from the pro-IT perspective, where users are literally wage-slaves - the PC is owned by someone else, the time the user spends on the PC is owned by someone else, and that someone else expects to override user control over the system.

So we have the notion of "administrators" vs. "users". Then you'd need a single administrator to be able to manage multiple PCs without having to actually waddle over to all those keyboards - so you design in backdoors to facilitate administration via the network.

Which is fine - in the un-free world of mass business computing.

But the home user owns their PCs, and there is no-one else who should have the right to usurp that control. (Even) creditors and police do not have the right to break in, search, or seize within the user's home.

So what happens when an OS designed for wage-slavery is dropped into free homes as-is? Who is the notional "administrator"? Why is the Internet treated as if it were a closed and professionally-secured network? There's no "good administrators" and "bad administrators" here; just the person at the keyboard who should have full control over the system, and other nebulous entities on the Internet who should have zero control over the system.

Whatever some automated process or network visitation has done to a system, the home user at the keyboard should be able to undo.

Windows XP Home is simply not designed for free users to assert their rights of ownership, and that's a problem deeper than bits and bytes.

Public Conversations

On Win9x, SR, mOS II, etc.


Lifted from ...

http://www.spywarepoint.com/forums/t26963-p9-microsoft-zero-day-security-holes-being-exploited.html

On Sun, 01 Oct 2006 20:45:23 -0600, "Dan W." <spamyou@user.nec> wrote:
>karl levinson, mvp wrote:
>> "Dan W." <spamyou@user.nec> wrote in message


>> Fewer vulnerabilities are being reported for Windows 98 because Windows 98
>> is old and less commonly used, and vulns found for it get you less fame


More to the point is that vulnerable surfaces are less-often exposed to clickless attack - that's really what makes Win9x safer.

You can use an email app that displays only message text, without any inline content such as graphics etc. so that JPG and WMF exploit surfaces are less exposed. Couple that with an OS that doesn't wave RPC, LSASS etc. at the 'net and doesn't grope material underfoot (indexing) or when folders are viewed ("View As Web Page" and other metadata handlers) and you're getting somewhere.

For those who cannot subscribe to the "keep getting those patches, folks!" model, the above makes a lot of sense.

>> Didn't XP expand on and improve the system restore feature to a level not
>> currently in 98 or ME?


There's no SR in Win98, tho that was prolly when the first 3rd-party SR-like utilities started to appear. I remember two of these that seemed to inform WinME-era SR design.

No-one seemed that interested in adding these utilities, yet when the same functionality was built into WinME, it was touted as reason to switch to 'ME, and when this functionality fell over, users were often advised to "just" re-install to regain it. I doubt if we'd have advised users to "just" re-install the OS so that some 3rd-party add-on could work again.

XP's SR certainly is massively improved over WinME - and there's so little in common between them that it's rare one can offer SR management or tshooting advice that applies to both OSs equally.


I use SR in XP, and kill it at birth in WinME - that's the size of the difference, though a one-lunger (one big doomed C: installation) may find the downsides of WinME's SR to be less of an issue.

>>> about Microsoft and its early days to present time. The early Microsoft
>>> software engineers nicknamed it the Not There code since it did not have
>>> the type of maintenance operating system that Chris Quirke, MVP fondly
>>> talks about in regards to 98 Second Edition.


>> If the MOS being discussed for Win 98 is the system boot disk floppy, that
>> was a very basic MOS and it still works on Windows XP just as well as it
>> ever did on Windows 98. [Sure, you either have to format your disk as FAT,
>> or use a third party DOS NTFS driver.]


That was true, until we crossed the 137G limit (where DOS mode is no longer safe). It's a major reason why I still avoid NTFS... Bart works so well as a mOS for malware management that I seldom use DOS mode for that in XP systems, but data recovery and manual file system maintenance remain seriously limited for NTFS.

>> I think Chris really wants not that kind of MOS but a much bigger and
>> better one that has never existed.


Well, ever onward and all that ;-)

Bart is a bigger and better mOS, though it depends on how you build it (and yes, the effort of building it is larger than for DOS mode solutions). You can build a mOS from Bart that breaks various mOS safety rules (e.g. falls through to boot HD on unattended reset, automatically writes to HD, uses Explorer as shell and thus opens the risk of malware exploiting its surfaces, etc.).

I'm hoping MS WinPE 2.0, or the subset of this that is built into the Vista installation DVD, will match what Bart offers. Initial testing suggests it has the potential, though some mOS safety rules have been broken (e.g. fall-through to HD boot, requires visible Vista installation to work, etc.).

The RAM testing component is nice but breaks so many mOS safety rules so badly that I consider it unfit for use:
- spontaneous reset will reboot the HD
- HD is examined for Vista installation before you reach the test
- a large amount of UI code required to reach the test
- test drops the RAM tester on HD for next boot (!!)
- test logs results to the HD (!!)
- you have to boot full Vista off HD to see the results (!!!)

What this screams to me, is that MS still doesn't "get" what a mOS is, or how it should be designed. I can understand this, as MS WinPE was originally intended purely for setting up brand-new, presumed-good hardware with a fresh (destructive) OS installation.

By default, the RAM test does only one or a few passes; it takes under an hour or so - and thus is only going to detect pretty grossly-bad RAM. Grossly bad RAM is unlikely to run an entire GUI reliably, and can bit-flip any address to the wrong one, or any "read HD" call to a "write HD" call. The more code you run, the higher the risk of data corruption, and NO writes to HD should ever be done while the RAM is suspected to be bad (which is after all why we are testing it).

A mOS boot should never automatically chain to HD boot after a time out, because the reason you'd be using a mOS in the first place is because you daren't boot the HD. So when the mOS disk boots, the only safe thing to do is quickly reach a menu via a minimum of code, and stop there, with no-time-out fall-through.

It's tempting to fall-through to the RAM test as the only safe option, but that can undermine unattended RAM testing - if the system spontaneously resets during such testing, you need to know that, and it's not obvious if the reboot restarts the RAM test again.

Until RAM, physical HD and logical file system are known to be safe, and it's known that deleted material is not needed to be recovered, it is not safe to write to any HD. That means no page file, no swap, and no "drop and reboot" methods of restarting particular tests.

Until the HD's contents are known to be malware-free, it is unsafe to run any code off the HD. This goes beyond not booting the HD, or looking for drivers on the HD; it also means not automatically groping material there (e.g. when listing files in a folder) as doing so opens up internal surfaces of the mOS to exploitation risks.


Karl's right, tho... I'm already thinking beyond regaining what we lost when hardware (> 137G, USB, etc.) and NTFS broke the ability to use DOS mode as a mOS, to what a purpose-built mOS could offer.

For example, it could contain a generic file and redirected-registry scanning engine into which av vendor's scanning modules could be plugged. It could offer a single UI to manage these (i.e. "scan all files", "don't automatically clean" etc.) and could collate the results into a single log. It could improve efficiency by applying each engine in turn to material that is read once, rather than the norm of having each av scanner pull up the material to scan.
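
A minimal sketch of that read-once, multi-engine idea follows (Python; the engine interface and signatures here are entirely hypothetical, just to show the shape of it - the mOS reads each file once, hands the same bytes to every plugged-in engine, and collates the hits into one log):

    # Read-once, scan-many sketch with a hypothetical engine interface.
    import os

    class SignatureEngine:
        """Stand-in for a vendor's detection module (hypothetical)."""
        def __init__(self, name, signatures):
            self.name = name
            self.signatures = signatures      # byte patterns, for illustration

        def scan(self, data):
            return [sig for sig in self.signatures if sig in data]

    def scan_tree(root, engines, log):
        for dirpath, _dirs, files in os.walk(root):
            for fname in files:
                path = os.path.join(dirpath, fname)
                try:
                    with open(path, "rb") as f:
                        data = f.read()        # read once...
                except OSError:
                    continue
                for engine in engines:         # ...scan many times
                    for hit in engine.scan(data):
                        log.append((engine.name, path, hit))

    engines = [SignatureEngine("EngineA", [b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"]),
               SignatureEngine("EngineB", [b"X5O!P%@AP"])]
    results = []
    scan_tree(".", engines, results)
    for name, path, hit in results:
        print(name, "flagged", path, "on", hit)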

MS could be accused of foreclosing opportunities to av vendors (blocking kernel access, competing One Care and Defender products), but this sort of mOS design could open up new opportunities.

Normally, the av market is "dead man's shoes"; a system can have only one resident scanner, so the race is on to be that scanner (e.g. OEM bundling deals that reduce per-license revenue). Once users have an av, it becomes very difficult to get them to switch - they can't try out an alternate av without uninstalling what they have, and no-one wants to do that. It's only when feeware av "dies" at the end of a subscription period, that the user will consider a switch.

But a multi-av mOS allows av vendors to have their engines compared, at a fairly low development cost. They don't have to create any UI at all, because the mOS does that; all they have to do is provide a pure detection and cleaning engine, which is their core competency anyway.

Chances are, some av vendors would prefer to avoid that challenge :-)

>> XP also comes with a number of restore features such as Recovery
>> Console and the Install CD Repair features.


They are good few-trick ponies, but they do not constitute a mOS. They can't run arbitrary apps, so they aren't an OS, and if they aren't an OS, then by definition they aren't a mOS either.

As it is, RC is crippled as a "recovery" environment, because it can't access anything other than C: and can't write to anywhere else. Even before you realise you'd have to copy files off one at a time (no wildcards, no subtree copy), this kills any data recovery prospects.

At best, RC and OS installation options can be considered "vendor support obligation" tools, i.e. they assist MS in getting MS's products working again. Your data is completely irrelevant.

It gets worse; MS accepts crippled OEM OS licensing as being "Genuine" (i.e. MS got paid) even if they provide NONE of that functionality.

The driver's not even in the car, let alone asleep at the wheel :-(

>> I never use those or find them very useful for security, but they're
>> way more functional and closer to an MOS than the Win98 recovery
>> floppy or anything Win98 ever had. 98 never had a registry
>> editor or a way to modify services like the XP Recovery Console.


They do different things.

RC and installation options can regain bootability and OS functionality, and if you have enabled Set commands before the crisis you are trying to manage, you can copy off files one at a time. They are limited to that, as no additional programs can be run.

In contrast, a Win98EBD is an OS, and can run other programs from diskette, RAM disk or CDR.  Such programs include Regedit (non-interactive, i.e. import/export .REG only), Scandisk (interactive file system repair, which NTFS still lacks), Odi's LFN tools (copy off files in bulk, preserving LFNs), Disk Edit (manually repair or re-create file system structure) and a number of av scanners.

So while XP's tools are bound to getting XP running again, Win98EBD functionality encompasses data recovery, malware cleanup, and hardware diagnostics. It's a no-brainer as to which I'd want (both!)

>>> that at the bare bones level the source code of 9x is more secure


>> It depends on what you consider security.


That's the point I keep trying to make - what Dan refers to is what I'd call "safety", whereas what Karl's referring to is what I'd call "security". Security rests on safety, because the benefit of restricting access to the right users is undermined if what happens is not limited to what these users intended to happen.

>> Win98 was always crashing and unstable,


Er... no, not really. That hasn't been my mileage with any Win9x, compared to Win3.yuk - and as usual, YMMV based on what your hardware standards are, and how you set up the system. I do find XP more stable, as I'd expect, given NT's greater protection for hardware.

>> because there was no protection of memory space from bad apps or
>> bad attackers.


Mmmh... AFAIK, that sort of protection has been there since Win3.1 at least (specifically, the "386 Enhanced" mode of Win3.x). Even DOS used different memory segments for code and data, though it didn't use 386 design to police this separation.

IOW, the promise that "an app can crash, and all that happens is that app is terminated, the rest of the OS keeps running!" has been made for every version of Windows since Win3.x - it's just that the reality always falls short of the promise. It still does, though it gets a little closer every time.

If anything, there seems to be a back-track on the concept of data vs. code separation, and this may be a consequence of the Object-Orientated model. Before, you'd load some monolithic program into its code segment, which would then load data into a separate data segment. Now you have multiple objects, each of which can contain their own variables (properties) and code (methods).

We're running after the horse by band-aiding CPU-based No-Execute trapping, so that when (not if) our current software design allows "data" to spew over into code space, we can catch it.

>> Microsoft's security problems have largely been because of backwards
>> compatibility with Windows 9x, DOS and Windows NT 4.0. They feel, and I
>> agree, that Microsoft security would be a lot better if they could abandon
>> that backwards compatibility with very old niche software, as they have been
>> doing gradually.


The real millstone was Win3.yuk (think heaps, co-operative multitasking). Ironically, DOS apps multitask better than Win16 ones, as each DOS app lives in its own VM and is pre-emptively multi-tasked.

64-bit is the opportunity to make new rules, as Vista is doing (e.g. no intrusions into kernel allowed). I'm hoping that this will be as beneficial as hardware virtualization was for NT.

Win9x apps don't cast as much of a shadow, as after all, Win9x's native application code was to be the same as NT's. What is a challenge is getting vendors to conform to reduced user rights, as up until XP, they could simply ignore this.

There's also the burden of legacy integration points, from Autoexec.bat through Win.ini through the various fads and fashions of Win9x and NT and beyond. There's something seriously wrong if MS is unable to enumerate every single integration point, and provide a super-MSConfig to manage them all from a single UI.
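
As a taste of what such an enumeration involves, here's a sketch (Python) that lists just two of the many integration points - the classic Run/RunOnce keys for the machine and the current user; a real super-MSConfig would have to cover vastly more than this:

    # Enumerate the classic Run/RunOnce startup keys.  Read-only sketch.
    import winreg

    RUN_KEYS = [
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
        (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
    ]

    for hive, path in RUN_KEYS:
        hive_name = "HKLM" if hive == winreg.HKEY_LOCAL_MACHINE else "HKCU"
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        with key:
            i = 0
            while True:
                try:
                    name, command, _type = winreg.EnumValue(key, i)
                except OSError:
                    break
                print("%s\\%s : %s = %s" % (hive_name, path, name, command))
                i += 1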

>Classic Edition could be completely compatible with the older software
>such as Windows 3.1 programs and DOS programs. Heck, Microsoft
>could do this in a heartbeat without too much trouble.


Think about that. Who sits in exactly the same job for 12 years?

All the coders who actually made Win95, aren't front-line coders at MS anymore. They've either left, or they've climbed the ladder into other types of job, such as division managers, software architects etc. To the folks who are currently front-line coders, making Vista etc., Win9x is as alien as (say) Linux or OS/2.

To build a new Win9x, MS would have to re-train a number of new coders, which would take ages, and then they'd have to keep this skills pool alive as long as the new Win9x were in use. I don't see them wanting to do that, especially as they had such a battle to sunset Win9x and move everyone over to NT (XP) in the first place.

Also, think about what you want from Win9x - you may find that what you really want is a set of attributes that are not inherently unique to Win9x at all, and which may be present in (say) embedded XP.


If you really do need the ability to run DOS and Win3.yuk apps, then you'd be better served by an emulator for these OSs.

This not only protects the rest of the system from the oddball activities of these platforms, but can also virtualize incompatible hardware and mimic the expected slower clock speeds more smoothly than direct execution could offer. This is important, as unexpected speed and disparity between instruction times is as much a reason for old software to fail on new systems as changes within Windows itself.

>I will do what it takes to see this come to reality.


Stick around on this, even if there's no further Win9x as such. As we can see from MS's first mOS since Win98 and WinME EBDs, there's more to doing this than the ability to write working code - there has to be an understanding of what the code should do in the "real world".
 

13 August 2007

CDRW/DVDRW Primer

It can be a bit confusing figuring out R vs. RW and formal authoring vs. packet writing, but I'll try.  This skips a lot of detail, and attempts to zoom on what you'd need to know if starting on writing CDs or DVDs in 2007...

Here's the executive summary:

                 R        RW
Authored         Fine     Fine
Packet-written   Can't    Sucks

R vs. RW disks

R(ecordable) disks are like writing in ink - once you've written, you cannot erase, edit or overwrite.

R(e)W(ritable) disks are like writing in pencil - you can rub out what you want to change, but what you write in there, has to fit between whatever else you have not rubbed out.

Authoring vs. packet writing

The "authoring" process is like setting up a printing press; you first lay out the CD or DVD exactly as you want it, then you splat that onto the disk.  You can fill the whole disk at once, like printing a book (single session), or you can fill the first part and leave the rest blank to add more stuff later, like a printed book that has blank pages where new stuff can be added (start a multisession).

The "packet writing" process is what lets you pretend an RW disk is like a "big diskette".  Material is written to disk in packets, and individual packets can be rubbed out and replaced with new packets, which pretty much mirrors the way magnetic disks are used.  This method is obviously not applicable to R disks.

RW disks can also be authored, but the rules stay the same; you either add extra sessions to a multi-session disk, or you erase the whole disk and author it all over again.

Overwriting

When you overwrite a file in a packet-writing system, you do so by freeing up the packets containing the old file and writing the new file into the same and/or other packets.  The free space left over is increased by the size of the old file and reduced by the size of the new, rounded up to a whole number of packets.

When you "overwrite" a file in a multisession (authored) disk, it is like crossing out the old material and writing new material underneath, as one is obliged to do when writing in ink.  The free space drops faster, because the space of the old file cannot be reclaimed and re-used, and because each session has some file system overhead, no matter how small the content.

Standards and tools

There are a number of different standard disk formats, all of which must be formally authored; audio CDs, movie DVDs, CD-ROMs and DVD-ROMs of various flavors.  In contrast, packet-written disk formats may be proprietary, and supported only by the software that created them.

Nero and Easy CD Creator are examples of formal authoring tools, and several media players can also author various media and data formats.

InCD and DirectCD are examples of packet-writing tools, which generally maintain a low profile in the SysTray, popping up only to format newly-discovered blank RW disks.  The rest of the time, they work their magic behind the scenes, so that Windows Explorer can appear to be able to use RW disks as "big diskettes".

Windows has built-in writer support, but the way it works can embody the worst of both authoring and packet-writing models.  I generally disable this support and use Nero instead.

Flakiness

RW disks and flash drives share a bad characteristic; limited write life.  In order to reduce write traffic to RW disks, packet writing software will hold back and accumulate writes, so these can be written back in one go just before the disk is ejected.

What this means is that packet written disks often get barfed by bad exits, lockups, crashes, and forced disk ejects.  Typically the disk will have no files on it, and no free space.  When this happens, you can either erase the disk and author it, or format the disk for another go at packet writing.  Erasing is faster, while formatting applies only to packet writing (it defines the packets).

I have found that packet writing software has been a common cause of system instability (that often ironically corrupts packet-written disks).  The unreliability, slow formatting, and poor portability across arbitrary systems have all led me to abandon packet writing in favor of formally authoring RW disks. 

Back to Basics

09 August 2007

Evolution vs. Intelligent Design

Evolution vs. Intelligent Design = non-issue.

Evolution does not define why things happen. 

It is a mechanism whereby some things that happen, may come to persist (and others, not). 

Graded belief

Human thinking appears to have at least two weaknesses; an automatic assumption of dualities (e.g. "Microsoft and Google are both large; Microsoft is bad, therefore Google must be good"), and an unwillingness to accept unknowns. 

You can re-state the second as a tighter version of the first, i.e. the singleton assumption, rather than duality.

We don't even have words (in English, at least) to differentiate between degrees of belief, i.e. weak ("all things equal, I think it is more likely that A of A, B, C is the truth") and strong ("iron is a metal") belief. 

And I fairly strongly believe we strongly believe too often, when a weaker degree of certainty would be not only more appropriate, but is a needed component in our quest for World Peace TM.

For example, religious folks have a fairly high certainty of what will happen after they die - but can we all accept that, as they have not yet died, something slightly less than "you're wrong, so I'll kill you" certainty should apply?

What is evolution?

My understanding of evolution, or Darwinian systems, revolves around the following components:

  • limited life span
  • selection pressure
  • imperfect reproduction

That's what I consider to be a classic evolutionary environment, but you may get variations; e.g. if entities change during the course of their lifetime, do not reproduce, but can die, then you could consider this as a Darwinian system that is ultimately set to run down like a wind-up clock as the number of survivors declines towards zero.

In fact, implicit in that classic model is the notion of reproduction based on a self-definition that does not change during the course of an entity's lifetime (in fact, it defines that entity) but can change when spawning next generation entities.

Evolution is blind

Evolution per se, is devoid of intent.  I don't know whether Darwin stressed this in his original writings, but I weakly believe that he did; yet I often see descriptions of creatures "evolving to survive". 

As I understand it, game theory is a reformulation of evolution that centers on the notion of survival intent.

Evolution is something that happens to things, and doesn't "care" whether those things survive or not.  The "selfish gene" concept is an attempt to frame this inevitable sense of "intent" within Darwinian mechanics; there is no more need to ascribe a survival intent to genes than there is to the phenotypes they define.

However, evolution doesn't have to be the only player on the stage, and this is what I meant about "evolution vs. intelligent design is a non-issue".

I don't think there's much uncertainty that evolution is at work in the world.  That doesn't weigh for or against other (intelligent design) players in the world, and that's why I consider the question a non-issue.

Intelligent players can apply intent from outside the system (e.g. where an external entity defines entities within a Darwinian environment, or the environment itself, or its selection pressures) or from within the system (e.g. where entities apply intent to designing their own progeny).

Example systems

I consider the following to be Darwinian systems:

  • the biosphere
  • the infosphere
  • human culture, i.e. memetics

One could theorize that evolution is an inevitable consequence of complexity, when subjected to entropy.  Just as a moving car is not fast enough to exhibit significant relativistic effects (so that Newton's laws appear to explain everything), so trivial systems may be insufficiently complex to demonstrate Darwinian behavior.

This is why I'm interested in computers and the infosphere; because they are becoming complex enough to defy determinism. 

Normally, we seek to understand the "real world" by peering down from the top, with insufficient clarity to see the bottom. 

With the infosphere, we have an environment that we understand (and created) from the bottom up; what we cannot "see" is the top level that will arise as complexity evolves.

This creates an opportunity to model the one system within the other.  What was an inscrutable "mind / brain" question, becomes "the mind is the software, the brain is the hardware", perhaps over-extended to "the self is the runtime, the mind is the software, the brain is the firmware, the body is the hardware". 

We can also look at computer viruses as a model for biosphere viruses.  A major "aha!" moment for me was when I searched the Internet for information on the CAP virus, and found a lot of articles that almost made sense, but not quite - until I realized these described biological viruses, and were found because of the common bio-virus term "CAPsule".

Code

Common to my understanding of what constitutes a classic Darwinian system, is the notion of information that defines the entity. 

In the biosphere, this is usually DNA or RNA, a language of 4 unique items grouped into threes to map to the active proteins they define.

In the infosphere, this is binary code of various languages, based on bits that are typically grouped into eights as bytes.

In the meme space, languages are carried via symbol sets that are in turn split into unique characters, which are then clumped into words or sentences.  Some languages contain less information within the character set (e.g. Western alphabets), others more (e.g. the Chinese alphabet, ancient Egyptian hieroglyphics, modern icons and branding marks).

When we create computer code, we are laboriously translating the memetic language of ideas into code that will spawn infosphere entities.  This is not unlike the way a set of chromosomes becomes a chicken, other than that we view the infosphere and meme space as separate Darwinian systems.

The alluring challenge is to translate infosphere code into biosphere code, i.e. to "print DNA", as it were.  One hopes the quality of intent will be sound, by the time this milestone is reached, as in effect, we would be positioned to become our own intelligent creators.

Intelligent design

We know that entities in the infosphere are created by intent from outside the system; as at 2007, we do not believe that new entities arise spontaneously within the system.

We don't know (but may have beliefs about) whether there is intent applied to the biosphere, or whether the biosphere was originally created or shaped by acts of intent.

Conspiracy theorists may point to hidden uber-intenders within the meme space, the creation of which is inherently guided by self-intent.

That which was

Just as folks mistakenly ascribe intent to the mechanics of evolution, so there is a fallacy that all that exists, is all that existed.

But evolution can tell you nothing about what entities once existed, as spawned by mutation or entropic shaping of code.

There is a very dangerous assumption that because you cannot see a surviving entity in the current set, such entities cannot arise. 

Think of a bio-virus with a fast-death payload a la rabies, plus rapid spread a la the common cold.  The assumption of survival intent leads folks to say stupid things like "but that would kill the host, so the virus wouldn't want that".  Sure, there'd be no survivors in today's entity set, but on the other hand, we know we have some historical bulk extinctions to explain.

We're beginning to see the same complacency on the risks of nuclear war.  We think of humanity as a single chain of upwards development, and therefore are optimistic that "common sense will prevail".  Even nay-sayers that point to our unprecedented ability to destroy ourselves, miss the point in that word "unprecedented".  As Graham Hancock postulated in his Fingerprints of the Gods, we may have been this way before.

This blindness applies to malware within the infosphere as well, and the saying "there's none so blind as those who can see" applies.  If we want authoritative voices on malware, we generally turn to professionals who have been staring at all malware for years, such as the antivirus industry.  These folks may be blinded by what they've seen of all that has been, that they fail to consider all that could be.

The Waking Hour

07 August 2007

Low Heap Space in XP and Vista


Have you ever been motoring along in XP or Vista, opening up new tabs in IE7, running apps in the background, etc. and noticed new tabs don't show the pages, or that when you right-click a link, you don't get a context menu?  Or have you ever opened a couple of dozen photos in Irfan View and found the last few come up with no menu, and don't respond to hotkeys?

Did this bring back memories of "low resource heaps" in Win9x, or the need to restart DOS/Win3.yuk PCs several times a day to keep MS Office apps running properly?

You aren't going crazy, and yes, it's the same problem.  It's like deja vu all over again.

Background

Windows 3.x used two or three "heaps", i.e. areas of RAM set aside for certain items that are spawned by running programs.  There was a GDI heap for graphic elements, a user heap for UI and other elements, and a "system" heap that may have been a logical view of the first two (it's been a while, and I hoped I'd never have to remember the details again).

When folks started multitasking in earnest, these heaps would fill up and cause crashes, half-drawn UI elements, or spurious "out of memory" errors.  Adding RAM was as useful as adding a trailer to a removal van with a loading crew so dumb they insist on putting all metal objects in the glove compartment and then tell you "we're full" after moving your hi-fi.

The Windows 95 design intended to move these heaps to 32-bit replacements, but it was found that doing so would break several applications (including, it was rumored, Excel 4.0) that wrote directly to heap objects in memory rather than using the proper API calls.

So Windows 95 went into a long public beta where these things were thrashed out.  Win9x left some items in "legacy" 16-bit heaps, and proactively cleared heap allocations when closing down VMs.  32-bit and DOS programs were run in their own VM, so this curbed "heap leakage" for these, but as 16-bit Windows all ran in a single shared VM, any resident 16-bit Windows program (e.g. Bitware 3.x) could hold this VM open forever.

Pundits would smugly point out that NT was a "true" 32-bit OS which did not make such compromises.  The "huge" 32-bit address range should mean 32-bit heaps would never run out again.

The broken rule is...

"Do not use finite global storage for the per-instance data of an unbounded number of instances". 

There are several instances of this rule being broken, such as the way Windows Explorer "remembers" settings for different folders. 
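To make the rule concrete, here's a toy sketch (my own, not Windows code; the names and sizes are purely illustrative) of a finite global pool holding per-instance data, next to the scalable alternative:

# Toy sketch of the broken rule: a finite global pool serving
# per-instance data runs dry no matter how much RAM is fitted.
FIXED_POOL = [None] * 4096            # size decided at design time

def register_instance(data):
    for i, slot in enumerate(FIXED_POOL):
        if slot is None:
            FIXED_POOL[i] = data      # per-instance data in a global store
            return i
    raise MemoryError("pool exhausted")   # fails with gigabytes still free

# The scalable alternative grows with the number of instances,
# bounded only by address space:
GROWABLE_POOL = []

def register_instance_scalable(data):
    GROWABLE_POOL.append(data)
    return len(GROWABLE_POOL) - 1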

The converse rule...

"Do not scale a per-instance resource based on evolving global store capacity"

...is broken by IE's grabbing of X% of total HD volume space for its web cache, as well as by System Restore's capacity allocation.  The first example was only fixed in IE7; the second continues.
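The converse failure looks something like this sketch (again my own; the 3% figure and the 250 MB cap are invented, not IE's actual numbers):

# Sizing a cache as a share of the drive grows with the hardware,
# not with need; a capped, need-based figure stays sane as drives grow.
import shutil

total, used, free = shutil.disk_usage("C:\\")

cache_by_percent = total * 0.03                     # scales with the drive
cache_capped = min(total * 0.03, 250 * 2**20)       # 250 MB cap (invented figure)

print(f"3% of a {total // 2**30} GB drive: {cache_by_percent / 2**20:.0f} MB")
print(f"Capped allocation:           {cache_capped / 2**20:.0f} MB")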

NT broke the first of these two scalability rules.  The NT developers did not fix the heap problem via dynamically-sized heaps that could expand up to the limits of 32-bit addressability.  Instead, they set various arbitrary limits at various times, in fact reducing these between versions 3.1 and 3.5 of the original NT. 

The fix

Apparently this has been a known issue at Microsoft, though most of us (myself included) didn't see the old "resource heap blues" for the first few years on XP.  I found some good background coverage in blogs and elsewhere, but the fix (adjusting certain registry settings) comes with some caveats:

Please do not modify these values on a whim. Changing the second or third value too high can put you in a no-boot situation due to the kernel not being able to allocate memory properly to even get Session 0 set up
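For what it's worth, the settings usually cited are the SharedSection= sizes within the "Windows" value under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems.  Here's a read-only sketch (mine, assuming that documented location; verify against your own system) that just reports the current values rather than changing anything:

# Read-only: report the current SharedSection= desktop heap sizes
# without modifying the registry.
import re
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    windows_value, _type = winreg.QueryValueEx(key, "Windows")

m = re.search(r"SharedSection=(\d+),(\d+)(?:,(\d+))?", windows_value)
if m:
    sys_wide, interactive, non_interactive = m.groups()
    print("System-wide heap (KB):        ", sys_wide)
    print("Interactive desktop heap (KB):", interactive)
    print("Non-interactive desktops (KB):", non_interactive or "(not set)")
else:
    print("No SharedSection= entry found; leave this key alone.")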

I haven't applied the fix yet, but probably will, on my XP SP2 system with IE7 etc.  If it holds up here, and if I get complaints from Vista users, then I may apply the fix to my new Vista PC builds as well.

Coverage

As you can guess, some blogs have been quite hostile to Microsoft, and this one is no exception (to me, it seems as if these folks missed the "Scalability 101" lecture).

However, I can see some reasons why they may have chosen not to create a dynamically-resizing heap system. 

One reason is to prevent DoS memory usage through deliberate leaks; another might be concern over exploitable race conditions that might arise if one process is creating heap objects while another deliberately releases heap space to provoke a downwards resize. 

There may be unacceptable overhead in managing more scalable data structures, such as linked lists, if heap objects are accessed as often as I suspect they might be.

Whatever such reasons might be, I'd suggest Microsoft get their story out really soon, before the trickle turns to a flood and users twig to why they can't do "too many things at the same time" on their PCs - especially given the usual Vista mantra of "it's slow and needs more resources, but that's so it can scale up to future needs" that I've been known to wave around  :-)

02 August 2007

Seek, And Ye Shall Find... What?

Technorati tags:

Re: Can a saved search be indistinguishable from real folders?

On Sat, 28 Jul 2007 05:34:00 -0700, Baffin

>To my mind, it would be elegant and correct for the operating system to
>present a 'saved search' folder to all appolications exactly as a real folder
>is presented.  Thus no changes should be required to any applications -- they
>should all just work as usual -- instant access to saved-search collections
>-- great!

Think through the safety and security implications of that.

>But as mentioned originally, I'm having problems getting many applications
>to work with saved-search folders -- am I doing something wrong?  Or, could
>Microsoft have implemented 'virtual folders' (saved searches)
>non-transparently?  If so, why?  What's the advantage worth all the
>disruption that would cause to applications?

I think there was a change in design intention on this. 

Originally, when Vista was to embed SQL within the WinFS file system, these "virtual folders" were to function more transparently as folders, as you expect.

When WinFS was dropped, the functionality of "virtual folders" was scaled back - I'd thought they had been dropped altogether.

Just as web mania drove MS to embed IE4 in Win98, with "View As Web Page" on your local file system, so search mania has driven MS to embed search into Vista. 

Just as there were safety downsides to blurring the edge between Internet and local PC (as well as HTML-everywhere also dropping scriptability everywhere), so may there be safety downsides to searching for rather than specifying the files etc. you "open".

>Why can't it be 'invisible' to all existing applications whether or not a
>folder is real or 'virtual' (ie., a saved search)?

The original intention of file names was to ensure that every file was uniquely named.  When this hit scalability issues, the concept of directories and paths was added.

The need to uniquely identify files is as strong as ever, in an age of pervasive malware, phishing, etc. but is also necessary to avoid "version soup" problems, reversion to pre-patch code, and confusion between old and new versions of data files that may be scattered across "live", backup, and off-PC storage locations.

When you throw away that specificity and just "search" for things, you need to be very sure about what you are looking at.  Not easy, through a shell that hides file name extensions, allows dangerous file types to define their own icons, etc.
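As a trivial example of the sort of check the shell won't do for you, here's a sketch (my own; the risky-extension list is illustrative and far from complete) that reports the real, full name of each "found" item and flags double extensions and executable types:

# Report what a "found" item really is, rather than what a friendly
# shell chooses to show.
import os

RISKY = {".exe", ".scr", ".pif", ".com", ".bat", ".cmd", ".vbs", ".js"}

def describe(path):
    name = os.path.basename(path)
    stem, ext = os.path.splitext(name)
    flags = []
    if ext.lower() in RISKY:
        flags.append("executable type")
    if os.path.splitext(stem)[1]:          # e.g. "holiday.jpg.exe"
        flags.append("double extension")
    return f"{name:25} {', '.join(flags) or 'looks plain'}"

for item in [r"C:\Found\holiday.jpg", r"C:\Found\holiday.jpg.exe"]:
    print(describe(item))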

So yes; I *definitely* want it to be very obvious as to whether I am looking at a directory, or some virtual collection of found items.

Public Conversations

When ChkDsk Doesn't

Technorati tags: , ,

Subject: Re: Unable to run CHKDSK with "Fix" option

On Wed, 25 Jul 2007 17:46:03 -0700, Paulie

I think it is beyond time that we had a proper interactive file system maintenance tool for NTFS.  ChkDsk is a relic from the MS-DOS 5 days; I wish NT would at least catch up to, say, MS-DOS 6 Scandisk.

Now folks will flame me for saying that.  "File systems are too complex for users to understand, just trust us to fix everything for you".  Fine; let's read on and see how well that works...

>My new Notebook is unable to run a CHKDSK with the Fix option selected.
>I can run a normal CHKDSK within VISTA and it works without a problem.
>If I choose the Fix option it schedules a scan on the next boot. Upon
>rebooting,
>CHKDSK will begin but it will freeze after 8% of the scan. The
>Notebook does not respond and I have to power it down and restart.

Great, so now we combine a possibly corrupted file system in need of repair, with recurrent bad exits.  What's wrong with this picture?

>Any idea what this may be?

Given that ChkDsk and AutoChk are closed boxes with little or no documentation of what they are doing (and little or no feedback to you while they are doing it), one can only guess.

My guesses would be one of:

1)  Physically failing HD

When a sector can't be read, the HD will retry the operation a number of times before giving up.  Whatever driver code calls the operation will probably also retry a few times before giving up, and so may the higher-level code that called that, etc.

The result can be an apparent "hang" lasting seconds to minutes while the system beats the dying disk to death.

That's before you factor in futile attempts to paper over the problem and pretend it isn't there, both by the HD itself and by the NTFS code.  Each will attempt to read the sick allocation unit's data and write it to a "good" replacement, then switch usage so that the dead sector is avoided in future.  And so on, for the next dead sector, etc.
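Some back-of-envelope arithmetic (all figures mine, purely illustrative) shows how those stacked retries turn one weak sector into minutes of apparent hang:

# Each layer retries the layer below it, so the delays multiply.
seconds_per_raw_attempt = 2      # drive-level retries on one weak sector
drive_retries = 8
driver_retries = 4               # driver re-issues the failed request
filesystem_retries = 3           # higher-level code tries yet again

worst_case = seconds_per_raw_attempt * drive_retries * driver_retries * filesystem_retries
print(f"One unreadable sector can stall for ~{worst_case} seconds")   # ~192 s
# ...and a check may hit hundreds of such sectors, one after another.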

2)  Lengthy repair process

Scandisk and ChkDsk have no "big picture" awareness.  If they were you, walking from A to B, they would take a step, calculate if they were at B, then take another, and repeat.  If they were walking in the wrong direction, away from B, they'd just keep on walking forever.

So when something happens that invalidates huge chunks of the file system, these tools don't see the "big picture" and STOP and say "hey, something is invalidating the way this file system is viewed".  No; they look at one atom of the file system, change it to fit the current view, and repeat for the next.  If that means changing every atom in the file system, that is what they will do.  Result: garbage.
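A "big picture" tool would sanity-check the scale of the damage before touching anything; a sketch of that idea (invented names, not real ChkDsk or AutoChk behavior):

# Sanity-check the overall damage before "repairing" anything atom by atom.
def repair(records, is_valid, fix, max_damage=0.10):
    invalid = [r for r in records if not is_valid(r)]
    if len(invalid) > max_damage * len(records):
        # Something is wrong with the *view* of the file system, not the atoms.
        raise RuntimeError(
            f"{len(invalid)}/{len(records)} records look wrong; "
            "stopping for a human instead of rewriting the lot into garbage"
        )
    for r in invalid:
        fix(r)          # small, plausible repairs only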

3)  Bugginess

Whereas (2) is a bad design working as designed, sometimes the code doesn't work as designed and falls off the edge.

Needless to say, AutoChk and ChkDsk don't maintain any undoability. They also "know better than you", so they don't stop and ask you before "fixing" things, they just wade in and start slashing away.

>I have run full virus scans and updates on the drive and
>there are no issues. Other than this the Notebook runs fine. Its just a
>strange issue that i am unable to fix at this stage.

I would at least exclude (1) by checking the HD's surface using the appropriate tests in HD Tune (www.hdtune.com), after backing up my data.  You should be able to get a "second opinion" on the file system, but you can't; ChkDsk and AutoChk are all you have.

Public Conversations

Malware Poll Results

Technorati tags:

I played with Blogger's "poll" feature, asking whether malware was or was not readers' biggest PC headache.  The results:

  • 16% - Malware is my biggest PC headache
  • 66% - Malware is not my biggest PC headache
  • 0% - I'm OK, I use Linux / BSD / other
  • 16% - I'm OK, I use a Mac
  • 0% - I'm not on Windows, but malware is still a worry

That's on a tiny sample size of 12 respondents  :-)

Public Conversations

Technorati tags:

A lot of what I write is posts in transient newsgroups and forums, and although I resolved to write things "properly" in web pages and then blogs, I tend not to do this.  I find it far easier to respond to another entity than write "cold" for a generalized audience.

As mentioned, I'm blending blogging with formal web site navigation using whatever tools come to hand.  I'll use Blogger's labels as category selectors so that this blog can function as a set of unrelated blogs.  These "virtual" blogs may include:

  • Public Conversations
  • The Waking Hour
  • Maintenance OS
  • Safety and Security

The first of these is Public Conversations, where I will simply paste what I see as interesting posts that are already public, edited only to allow natural line lengths and to break visible email addresses.

Over at my other blog, I've added a few new photo galleries, using "comments" to turn them into pictorial articles rather than just a bunch of loose photos.  So far, these include:

Both Blogger and Live Spaces have their strengths and weaknesses, so I'll probably continue adding new content to both.

Ship Now, Patch Later

Technorati tags: ,

Subject: Re: It would be nice if MS could settingle on a single subnet for updates

On Fri, 27 Jul 2007 15:13:52 +0100, "Mike Brannigan"
>"Leythos" <void@nowhere.lan> wrote in message
>> Mike.Brannigan@localhost says...

This thread is about the collision between...

    No automatic code base changes allowed

...and...

    Vendors need to push "code of the day"

Given that the only reason we allow vendors to push "code of the day" is that their existing code fails too often for us to manage manually, one wonders whether our trust in these vendors is well placed.

A big part of this is knowing that only the vendor is pushing the code, and that's hard to be sure of.  If malware were to hijack a vendor's update pipe, it could blow black code into the core of systems, right past all those systems' defenses.

With that in mind, I've switched from wishing MS would use open standards for patch transmission to being grateful for whatever they can do to harden the process.  I'd still rather not have to leave myself open to injections of "code of the day", though.
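A tiny example of the kind of independent check an admin can still do before letting pushed code in: compare the downloaded package against a hash the vendor publishes out-of-band (the file name and hash below are placeholders, not real values):

# Verify a downloaded patch against a hash published somewhere other
# than the download channel itself.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

published = "0123456789abcdef..."          # from the vendor's site, not the update pipe
actual = sha256_of(r"C:\Patches\update.msu")
print("OK to deploy" if actual == published else "Do NOT deploy; hashes differ")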

>NO never ever ever in a production corporate environment do you allow ANY of
>your workstations and servers to directly access anyone for patches
>I have never allowed this or even seen it in real large or enterprise
>customers. (the only place it may crop up is in mom and pop 
>10 PCs and a Server shops).

And there's the problem.  MS concentrates on scaling up to enterprise needs, where the enterprise should consolidate patches in one location and then drive these into systems under their own in-house control.

So scaling up is well catered for.

But what about scaling down? 

Do "mom and pop" folks not deserve safety?  How about single-PC users which have everything they own tied up in that one vulnerable box?  What's best-practice for them - "trust me, I'm a software vendor"?

How about scaling outwards? 

When every single vendor wants to be able to push "updates" into your PC, even for things as trivial as printer and mouse drivers, how do you manage these?  How do you manage 50 different ad-hoc update delivery systems, some from vendors who are not much beyond "Mom and Pop" status themselves?  Do we let Zango etc. "update" themselves?

The bottom line: "Ship now, patch later" is an unworkable model.

>As you said your only problem is with Microsoft then the solution I have
>outlined above is the fix - only one server needs access through your
>draconian firewall policies.  And you get a real secure enterprise patch
>management solution that significantly lowers the risk to your environment.

That's prolly the best solution, for those with the resources to manage it.  It does create a lock-in advantage for MS, but at least it is one that is value-based (i.e. the positive value of a well-developed enterprise-ready management system).

However, I have to wonder how effective in-house patch evaluation really is, especially if it is to keep up with tight time-to-exploit cycles.  It may be the closed-source equivalent of the open source boast that "our code is validated by a thousand reviewers"; looks good on paper, but is it really effective in practice?

Public Conversations