17 December 2007

Malware "War", Lost Territory


I've often seen the malware situation described as a "war", and conventionally, wars are fought over territory. 

What territory has been lost to malware?

Consider the various integration points that are now routinely defended against any use at all, on the basis that the only things likely to use them are malware.  These OS "features" are now effectively "owned" by malware, in that legitimate software will trigger defence alerts if it uses them.

Consider a number of ill-advised features that are designed to allow arbitrary material to automate the system, e.g. MS Word auto-running macros, auto-running scripts in HTML email "messages", \Autorun.inf processing on USB flash drives, etc.  Today, these will typically be disabled, because the most likely use will be by malware.  So malware "owns" that territory, too.

Consider several business models that involve messages, attachments or links sent by the service's site, such as email greeting cards.  As malware can arrive via forgeries of such messages, usage is limited to those who are too dumb to know the risk they are expecting the recipient to take - a far smaller demographic than when such services were first started.  Effectively, these kinds of businesses and practices have been killed by malware.

Should we scorch and abandon some of this territory?  For example, remove OS integration points that are hardly ever used by anything other than malware?

Should we assess likely future "ownership" before creating new technologies and features that are likely to be swamped by malware?

18 November 2007

Norton Security Scan - False Positives


The Norton Security Scan utility is free, and bundled with the Google Pack.  It's an on-demand scanner that looks for malware and risks.

Unfortunately, it detects protective settings applied by SpywareBlaster and similar tools as being the very malware those tools protect against.  This is a generic type of bug that often arises when tools assume anything other than default is a hostile change, or when overly-loose detection cues are in effect.

Specifically, settings within HKCU's P3P\History that block unwanted cookies are detected as evidence of malware.  In the case I'm currently working on, only around 5 of over 100 protective entries were detected in this way.
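
For the record, one can eyeball what those protective entries look like (and so sanity-check what a scanner flags) by enumerating the per-site values under that key.  A minimal read-only sketch in Python; the full key path and the meaning of the data are my assumptions about where SpywareBlaster-type tools write their cookie blocks, so verify the path in regedit on your own system first.

    # Read-only: list per-site cookie rules under HKCU's P3P\History.
    # Key path and data meaning are assumptions; check them in regedit first.
    import winreg

    KEY_PATH = (r"Software\Microsoft\Windows\CurrentVersion"
                r"\Internet Settings\P3P\History")

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        index = 0
        while True:
            try:
                site, data, _kind = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values
            # Each value name is a domain; the data encodes the cookie
            # decision (accept/reject) that the protective tool applied.
            print(f"{site}: {data}")
            index += 1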

The tool then claims it is unable to fix these problems, which is just as well, as doing so would actually weaken system safety.  The end result is similar to Winfixer et al, i.e. false-positive (actually, reverse-positive) detections plus referral to feeware products if these are to be "fixed" - so one hopes Symantec will fix this sooner rather than later.

The case I'm working on is interesting, as the PC was brought in because it was slowing down, with malware as the suspected cause.  Formal scanning finds no active malware, and one wonders if the slowdown was the result of installing Google Desktop etc., with the false positive from Norton Security Scan as the red herring.

I'd like to try Norton Security Scan within mOS contexts such as Bart or WinPE CDR boot, but it appears as if the product is available only via Google Pack.  There are no references to it at Symantec's site, and the FAQ doesn't seem to consider "so where can I download this thing?" to be "frequently asked".

SARS Tax Returns vs. Acrobat Reader

Those using the South African Revenue Service (SARS) e-filing facility may find non-default safety settings within Adobe Acrobat Reader get in the way.

How to fix

Most likely you need only enable JavaScript, but when I had to troubleshoot this in the field, I applied all of the following settings...

Run Adobe Acrobat Reader 8.x

Edit menu, Preferences

JavaScript icon, [x] Enable Acrobat JavaScript

Multimedia Trust icon, Trusted Documents radio button, [x] Allow Multimedia Operations

Multimedia Trust icon, Other Documents radio button, [x] Allow Multimedia Operations

Trust Manager icon, [x] Allow Opening of Non-PDF File Attachments with External Applications

...and reversed them for safety when done.
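
If you'd rather not click through that UI every filing season, the JavaScript preference can be flipped from a script instead.  A minimal sketch, assuming Reader 8 keeps this per-user preference at HKCU\Software\Adobe\Acrobat Reader\8.0\JSPrefs\bEnableJS - check that in regedit before trusting it:

    # Flip Acrobat Reader 8's "Enable Acrobat JavaScript" preference.
    # Key path and value name are assumptions about Reader's per-user
    # registry prefs; confirm them in regedit before relying on this.
    import winreg

    JS_KEY = r"Software\Adobe\Acrobat Reader\8.0\JSPrefs"

    def set_reader_javascript(enabled: bool) -> None:
        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, JS_KEY) as key:
            winreg.SetValueEx(key, "bEnableJS", 0, winreg.REG_DWORD,
                              1 if enabled else 0)

    set_reader_javascript(True)    # before the e-filing session
    # ... do the filing ...
    set_reader_javascript(False)   # back to the safer setting afterwards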

Why use safer settings?

If these non-default settings stop things like SARS e-filing from working, why apply them? 

Because Acrobat files are already being exploited by spam, and a significant safety gap exists between what you think a .PDF is (i.e. a data format that is safe to read) and what it can actually do (automate your system via JavaScript, launch other files and code, etc.).

Acrobat Reader is an exploitable surface that has often needed patching to "fix" it, and for which unpatched vulnerabilities often exist.  Commercial enterprises have already exploited the by-design safety gap, e.g. by having .PDF documents "call home" when they are read, so that their usage can be tracked.

So one should keep Acrobat Reader on a very short leash, or use something else to "open" .PDF and other Acrobat file types.

12 October 2007

Understanding Integers


Which is the largest of these rational numbers?

  • 1.5075643002
  • 5.2
  • 5.193
  • 5.213454

If you say 5.213454, then you're still thinking in integer terms.  If you say 1.5075643002 is the largest within a rational number frame of reference, then you're thinking as I am right now.

Is Pi an irrational number, or have we just not effectively defined it yet?  With the Halting Problem in mind, can we ever determine whether Pi is rational or irrational?  On that basis, is there such a thing as an irrational real number, or are these just rational numbers beyond the reach of our precision?  Unlike most, this last question will be answered within this article.

Rational numbers

I was taught that rational numbers were those that can be expressed as one integer divided by another - but I'm reconsidering that as numbers that lie between fixed bounds, or more intuitively, "parts of a whole".

When we deal with rational numbers in everyday life, we're not really dealing with rational numbers as I conceptualize them within this article.  We're just dealing with clumps of integers. 

To enumerate things, there needs to be a frame of reference.  If you say "six" I'll ask 'six what?', and if you say "a quarter", I'll ask 'a quarter of what?'

Usually, the answer is something quite arbitrary, such as "king's toenails".  Want less arbitrary?  How about "the length of a certain platinum-iridium bar in Paris stored at a particular temperature" - feel better now?

Your cutting machine won't explode in a cloud of quantum dust if you set it to a fraction of a millimeter; within the machine's "integers" are just more, smaller "integers".  If you think you understand anything about rational numbers from contexts like these, you're kidding yourself, in my humble opinion.

Integers

To put teeth into integers, they have to enumerate something fundamental, something atomic.  By atomic, we used to say "something that cannot be divided further"; today we might say "something that cannot be divided further without applying a state change, or dropping down into a deeper level of abstraction".

Ah - now we're getting somewhere!  Levels of abstraction, total information content, dimensions of an array... think of chemistry as a level of abstraction "above" nuclear physics, or the computer's digital level of abstraction as "above" that of analog volts, nanometers and nanoseconds.

If layers of abstraction are properly nested (are they?), then each may appear to be a closed single atom from "above", rational numbers from "within", and an infinite series of integers from "below".  Or not - toss that around your skull for a while, and see what you conclude.

Closed systems

Within a closed system, there may be a large but finite number of atomic values (or integers, in the non-ordered sense), being the total information content of that system.  If rational numbers are used to describe entities within the system, they are by necessity defined as values between 0 and 1, where 1 = "the system".  In this sense, 7.45 is not a rational number, but might be considered as an offset 7 from outside the system, and .45 within the system.
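
As a toy illustration of that "offset outside, fraction within" reading (and nothing more than that), here is how one might split such a value in Python:

    # Toy illustration only: read 7.45 as an integer offset "outside" the
    # system plus a fraction "within" it (a value between 0 and 1).
    import math

    def split(value: float) -> tuple[int, float]:
        fraction, whole = math.modf(value)  # modf -> (fractional, integral)
        return int(whole), fraction

    print(split(7.45))  # (7, 0.45...): offset 7 outside, 0.45 within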

You might consider "size" as solidity of existence, i.e. the precision (space) or certainty (time, or probability) at which an entity is defined.  If you can define it right down to specifying exactly which "atom" it is, you have reached the maximum information for the closed system.  So 0.45765432 is a "larger" number than 0.5, in terms of this closed-system logic.

You can consider integers vs. rational numbers as being defined by whether you are specifying (or resolving) things in a closed system (rational numbers, as described within this article) or ordering things in an open system (integers). 

What closes an integer system is your confidence in the order of the integers you enumerate.  What closes a rational system is whether you can "see" right down to the underlying "atoms".

Information and energy

Can one specify an entity within a closed system with a precision so high that it is absolute, within the context of that system? 

We may generalize Pauli's exclusion principle to state that no two entities may be identical in all respects (or rather, that if they were, they would define the same entity).

Then there's Heisenberg's uncertainty principle, which predicts an inability to determine all information about an entity without instantly invalidating that information.  Instantly?  For a zero value of time to exist implies an "atom" of time that the zero is devoid of... otherwise that "zero" is just a probability smudge near one end of some unsigned axis (or an arbitrary "mid-"point of a signed axis).

Can you fix (say) an electron so that its state is identical for a certain period of time after it is observed?  How much energy is required to do that?  Intuitively, I see a relationship between specificity, i.e. the precision or certainty to which an entity is defined, and the energy required to maintain that state.

Entropy

If "things fall apart", then why?  Where does the automatic blurring of information come from?  Why does it take more work to create a piece of metal that is 2.6578135g in mass than one manufactured to 2.65g with a tolerance of 0.005g?

One answer may be: from deeper abstraction layers nested within what the current abstraction layer sees as being integer, or "atomic".  The nuclear climate may affect where an electron currently "is" and how likely it is to change energy state; what appears to be a static chemical equilibrium could "spontaneously" change, just as what appears to be reliable digital processing can be corrupted by analog voltage changes that exceed the trigger points that define the digital layer of abstraction.

In this sense, the arrangement of sub-nuclear entities may define whether something is a neutron or a proton with an electron somewhere out there; the difference is profound for the chemical layer of abstraction above.

To freeze a state within a given layer of abstraction may require mastery over the deeper levels of abstraction that would otherwise "randomize" it.

Existence

What does it mean, to exist?  One can sense this as the application of specificity, or a stipulation of information that defines what then exists.  Our perspective is that mass really exists, and just happens to be massive in terms of the energy (information?) contained within it. 

There's a sense of energy-information conservation in reactions such as matter and antimatter annihilating their mass and producing a large amount of energy.  How much energy?  Does that imply the magnitude of information that defined the masses, or mass and anti-mass?  Do you like your integers signed or unsigned?  Is the difference merely a matter of externalizing one piece of information as the "sign bit"?  What do things look like if you externalize two bits in that way?

Like most of my head-spin articles, this one leaves you hanging at this point.  No tidy summary of what I "told" you, as I have no certainty on any of this; think of this article as a question (or RFC, if you like), not a statement.

10 October 2007

Navigation via Recent Comments

Here's something this blog host needs: an ability to zoom in on the most recent comments, irrespective of where they are, so one can comment on them.  If this facility is present, it needs to be more discoverable.

As it is, one goes in and moderates unmoderated comments, but having done so, they vanish from easy navigation so one can't follow them up to reply.

Oh... some more general "CQspace" news; I intend to focus more on maintenance OS issues and development (with a small "d", i.e. how to make your own projects by tailoring existing mOSs) and will do that at what is currently called "CQuirke's Linux Curve", as that blog host appears to have the best oomph to carry the blog-to-website transition I am after. As part of that focus, I'll still be learning Linux and blogging that as I go, but it will be a subset of that site as a whole.

11 September 2007

New Blog Elsewhere


I've started a third blog here, mainly because I liked the look of the hosting service:

  • No "bad cookie" alerts, unlike here
  • Richer feature set
  • Better suited to "normal" web site structure

Normally, each blog has a "theme"; this one is general, the other blog is about Vista, and the new one might be about Linux if I get traction with that.

I've checked out Linux from time to time, and this time I'm prompted to do so by what I see as deteriorating vendor trustworthiness, coupled with tighter vendor dependence: activation false positives, WGA service failures that triggered (in this case, mild) DoS effects, OEM MS Office 2007 sold as "air boxes" i.e. no installation disks, and poor responsiveness and documentation on these issues.

I'm also checking out Linux as a potential maintenance OS (mOS) for Vista; possibly one that can service all Windows versions plus Linux itself.  The newest Ubuntu 7.x claims safe writeable support for NTFS, and until we see RunScanner functionality for Vista, that levels the playing field compared to Bart and WinPE (in other words, none of them can do for Vista what Bart can do for XP).

I expect it will take a year to build satisfactory skills in mOS for Vista, and longer to get a handle on Linux - which means if I want to be positioned to switch to Linux in a few years' time, the time to start studying it is now.

The standards I set for myself as a PC builder require custom-installable disks to ship for all installed software.  Failing that, unrestricted and anonymous download is an acceptable alternative only if that is compatible with systems that have either no Internet connectivity, or slow and costly dial-up access.

OEM MS Office 2007 already fails this standard, and I refuse to sell it accordingly.  Given the stealth with which Microsoft has manipulated OEM MS Office 2007, I cannot assume similar changes impacting on Vista will occur only when the next version of Windows is released.  So starting on a years-long mOS development path may be a waste of time, if such work is applicable to Windows alone.

The first prize would be a Windows that isn't chained to sucky vendor politics, and I will continue to work towards that where possible.  If Windows becomes unacceptable, it would be quite a setback in many ways, but that lump may have to be swallowed... let's hope cooler heads kick in at Microsoft, so that we can still stay with the platform we already know and use!

7 September 2007

WGA, Product Activation, Kafka

If Kafka wrote the Windows activation/WGA FAQ...

A: You have been found guilty and have been sentenced to die in 3 days.  Would you like to appeal?

Q: What crime am I being charged with?

A: Our code has found you guilty of being guilty in the opinion of our code.  You have already been found guilty and sentenced.  Would you like to appeal?

Q: What laws have I broken?

A:  The laws you have broken are those we assigned ourselves via the EUL"A" you consented to when you accepted our product.

Q: I don't remember discussions about an End User License Agreement?

A: Well, you wouldn't; we find it more effective to just write that up ourselves.

Q: I need details... what exact laws have I broken?

A:  We find it more effective not to disclose the details on how our code investigates such matters, or what criteria are used to determine the breaking of our laws.  All you need to know is that you have been found guilty and sentenced. 

Would you like to appeal?

Q: What do you mean "die in 3 days"?

A: In three days' time, your heart will be removed and further processing will not be possible.  If you do not have recourse to another heart and/or cardiac troubleshooting skills, you will remain inert.  Your body parts will still be available to those with the skills to access them; don't worry, no personal data will be lost, though of course you will need a new heart from us to work with that data again.

Would you like to appeal?

Q:  OK, I'd like to appeal.  Who do I appeal to?

A: Us, of course.  Phone the number, answer a few trick questions like "press 1 if you have two or more, press 2 if you have only one" etc. and then ask to speak to a human.  Convince the human you are innocent and your death sentence will be set aside.  If you are innocent, you have nothing to fear!

Q:  What do you mean "if I'm innocent"?  You've just told me you've found me guilty, and refused to tell me exactly what I'm guilty of?

A:  This is true, but we are not unfair.  You do have the right to appeal, as at September 2007.

Q:  So how do I present my case?

A: Leave that to us.  We will ask you questions, and based on your answers, we will decide if perhaps we arrested, tried and sentenced you in error, or whether our code works as designed.

Q: Works as designed?  What is it designed to do?

A:  It's designed to determine whether you are guilty or not.  We find it is more effective not to disclose details of how it does this.

Q: Can I review the evidence?

A:  We find it is more effective if the guilty party is not permitted to review the evidence, and thus we provide no tools to do so, nor do we provide documentation of what this evidence may be.  Any such documentation you may find will have been subject to change.  We will not tell you if, when or how it has changed, if indeed it has.  We find it is more effective this way.

Q:  OK!  Hey, everything's fine! I spoke to the human and explained what happened (which was easy in my case; nothing happened or changed, you just charged me out of the blue) and they set aside my sentence!  Thank you for running such a wonderful system that allows a lowly wretch like me to live again!!

A:  It's a pleasure, glad to help   ;-)

2 September 2007

Three Little Pigs Build Computers


If you don't like long fairy tales, skip ahead to the conclusions!

Once upon a time, there were three little pigs who set out to become building contractors.

One insisted everyone build their house on his land using only his materials, and charged too much money.  He didn't sell that many houses, and this story is not about that little pig.

One believed that people should build their own houses, and that houses should be built for free.  Many people were very interested in this, and often started building such houses, but found it too difficult and gave up, and this story is not about that little pig either.

Pig Makes Good: The Early Years

All of these pigs grew up in houses made of bricks, but this was a new planet where bricks weren't available (I did say "set out", didn't I?), so they had to make do with other materials instead.  At first, they made houses out of these materials the way their brick houses were made back home, but there were so many people wanting houses in the same place that they started joining them together in various ways.

The main pig became very successful, not only making houses for nearly everyone on the planet, but employing lots of builders to do so; soon, there wasn't a single builder who knew everything needed to build a complete house.

As more people came to the new planet, most of the pig's earnings came from building hotels and blocks of flats.  They still built lots of houses, but stopped thinking about how those would be made because that wasn't where the money was, and besides, those folks would always buy their houses anyway.

Wolf Atrocities: The Response

After a while, folks started complaining that homeowners were being eaten by wolves, and the quality of houses came into question.  Wolves will be wolves, it was agreed, but surely the idea of a house is to protect one from them?

Some folks suggested building houses out of sticks instead of straw, but the pig said "we have too many pre-built walls that we already made out of straw; it would take far too long to re-do everything in sticks".

Others felt that building out of straw vs. sticks didn't matter too much, as long as you did something about the open windows and weak door hinges. 

The pig said "if you want stronger doors, speak to the Door Lock team.  What's that about 'hinges'?  We don't have a "Hinge Team", so we can't pass those suggestions anywhere.  I promise we'll make doors with even stronger locks in future! (the squeaky things at the other end of the door will stay the same, of course)"

The pig also said "we've always built with open windows, and other businesses have come to depend on them.  How are folks going to deliver goods and services if they can't climb in through the window?  Why not retire to one of the bedrooms, and lock yourself in?  If you get eaten, it's your fault, because you will insist on walking around in other rooms that aren't meant for you - you are a resident, not a chef or a barman, so you don't belong in the kitchen or living room."

So the new houses were built with stronger locks and new bedroom doors.  And as folks still needed to eat and go to the bathroom, they'd leave the bedroom doors unlocked and get eaten while in the other rooms.

Conclusions

1.  The past can tell you only so much about the future.

If you focus on large-volume quality data from the real world while designing new products, you will design products that solve yesterday's problems while being wide open to tomorrow's new problems.

To avoid this trap, you need to brainstorm new designs with homeowners from the start, rather than present them with a near-completed beta product where the design is already cast in stone.  You also need to listen to theorists who cannot point to detailed real-world data because what they are talking about does not yet exist in the real world, and pay as much attention to these as you do to the detailed real-world feedback you get on things that already exist.

2.  Straw and sticks will never be bricks.

Know that you're forced to build with weak materials (exploitable code) and design your structures accordingly.  Any functionality may suddenly become a death trap or fire hazard, no matter what it was designed to be; so make sure such things can be amputated or walled off at a moment's notice.

3.  Airliners should not attempt aerobatics.

Know that you are human, and are building with straw and sticks.  Don't build death-defying skyscrapers that pose deliberate and unnecessary risks to homeowners.  In particular, don't build in facilities that allow arbitrary passing wolves to overpower residents in their houses, even if that is appropriate design when you are building hotels owned by wolves.

Risky tricks like DRM, product activation, linking real-time WGA to DoS payloads etc. have no business in teetering edifices built from twigs.

4.  Expose wildcard teams to new ideas.

Microsoft gets better at what they do well, while remaining poor at what they do poorly, or on issues to which they remain oblivious.  Why is this?

Partly this is from over-reliance on rich but historical data, as per my first conclusion in this list.  But it is also because their ability to develop is structured by present resource commitments.  For something they already do, there will be a product team; any idea on how to do that stuff better may reach this team, who will understand what it's about and can swiftly incorporate such feedback.

But if they've never seen the need for something, they will have never formed a team to develop it.  Any feedback on such matters will fall on dry ground; there is literally no-one there to process such material.

5.  Handle unstructured feedback.

Microsoft regularly solicits feedback via surveys etc. but once again, these measure what is measurable, rather than what's important - so the objective of "getting new ideas" remains un-met. 

Yes, it's easier to capture data gathered as responses to radio buttons, checkboxes, ordered pick lists and yes/no questions - but that limits input to what the designers of the survey had thought of already.

At the very least, every survey should end with a generic question such as: "On a scale from X to Y, how well do you think this survey covers what you feel should be surveyed?" followed by a large empty "comments" text box. 

A high dissatisfaction score should warn you that you are digging in the wrong place; the response might be to form a wild-card team and pass the dissatisfied returns to them for assessment of any free-form comments that may give a sense of where you should be digging instead.

6.  The cheapest lunch is where you haven't looked yet.

The saying "whenever I lose something, it's always in the last place I'd think to look for it!" is a truism, because once you find it, you stop looking.  In fact, "lost things" are just the least successful tail of your usual access methods.

You stop looking when you find an answer, but that doesn't mean you have the best answer - there may be better ones if you'd look a bit further.

With a mature product that still has problems, the biggest gains are most likely to be found where no-one's started looking yet - rather than improving existing strengths past the point of diminishing returns.

7.  In the land of the blind, name tags aren't useful.

The Internet is an unbounded mesh of strangers, so identity-based solutions don't apply.  Once you initiate networking, as opposed to generic Internet access (e.g. after you log into a secure site), such solutions may become useful... but even then, only if the user has a template of expectations to match whatever identity has been proven.  Even then, the process is only as robust as the twigs out of which it is built, and the Internet is a very windy place.

For this reason, I'd rate risk management as more important for malware and safety, both out on the web and within the system.  This is the barely-touched area that is most likely to provide your cheapest lunch.

8.  Who are the wolves?

All pigs become wolves in the dark. 

We are the wolves, and so are you.  There's no such thing as a special set of saintly piggies (e.g. "media content providers", "software vendors", etc.) who can be trusted with raw pork.

28 August 2007

Design vs. Code Errors


When Microsoft finds a code error, it generally fixes this fairly promptly.

In contrast, design errors generally remain unfixed for several generations of products; sometimes years, sometimes decades.  Typically even when addressed, the original design will be defended as "not an error" or "works as designed".

Old ideas that don't fit

As an example of bad design that has persisted from the original Windows 95 through to Vista, consider the inappropriateness of Format on the top layer of the Drive context menu.

The logic is old, and still true; hard drives are disks, and formatting is something you do to disks, therefore etc. 

But around this unchanged truism, other things have changed. 

We now have more things we can do to disks, many of which should be done more often than they are; backup, check for errors, defrag.  Because these are "new" (as at Windows 95), they are tucked several clicks deeper in the UI, e.g. Properties, Tools.

Also, the word "Format" has some to mean different things to users.  In 1985, users would routinely buy blank diskettes that had to be formatted before use, and so the immediate meaning of the word "format" was "to make a disk empty by destroying all existing contents".  In 2007, users store things on USB sticks or optical disks, none of which have to be formatted (unless you use packet writing on RW disks) and the immediate meaning of the word "format" is "to make pretty", as in "auto-format this Word document" and "richly-formatted text".

The goal of software is to abstract the system towards the user's understanding of what they want to do.  In keeping with this, "hard drives" have taken on a different conceptual meaning, away from the system reality of disks, towards an abstracted notion of "where things go".  In particular, modern Windows tends to gloss over paths, directories etc. with conceptual locations such as "the desktop", "documents" etc. and the use of Search to find things vs. formal file system navigation across disks and directories.

New things that break old truths

When a risk doesn't arise due to hard scopes, one doesn't have to consider it.  For example, if you build a house with a mountain as your back wall, you don't have to think about burglar-proofing the back wall.  Similarly, if your LAN is cable-only in a physically-secured building, you have fewer worries about intrusion than if you'd added WiFi to the mix.

When a risk doesn't arise because a previous team anticipated and definitively fixed it, future teams may be oblivious to it as a risk.  As Windows is decades old, and few programmers stay at the rock face for decades without being promoted to management or leaving, there's a real risk that today's teams will act as "new brooms", sweeping the platform into old risks.

In many of these cases, the risks were immediately obvious to me:

  • \Autorun.inf processing of hard drive volumes
  • Auto-running macros in "documents"
  • Active content in web pages

In some cases, I missed the risk until the first exploit:

  • Unfamiliar .ext and scripting languages

But it generally takes no more than one exploit example for me to get the message, and take steps to wall out that risk.  Alas, Microsoft keeps digging for generations:

Auto-binding File and Print Sharing to DUN in Win9x, the way WiFi has been rolled out, dropping "network client" NT into consumerland as XP, hidden admin shares, exposing LSASS and RPC without firewall protection, encouraging path-agnostic file selection via Search... all of these are examples of changes that increase exposure to old risks, and/or new brooms that undermine definitive solutions as delivered by previous teams. 

For example, the folks who designed DOS were careful to ensure that the type of file would always be immediately visible via the file name extension, limiting code types to .COM, .EXE and .BAT, and they were careful to ensure every file had a unique filespec, so that you'd not "open" the wrong one.

These measures basically solved most malware file-spoofing problems, but subsequent teams hide file name extensions, apply poor file type discipline, dumb "run" vs. "view"/"edit" down to the meaningless "open", act on hidden file type info without checking this matches what the user saw, and encourage searching for files that may pull up the wrong filespec.
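
To make that spoofing gap concrete, here is an illustrative check of the kind the shell skips: with known extensions hidden, HOLIDAY.JPG.exe displays as HOLIDAY.JPG, so the type the user sees is not the type that will be acted on.  The extension lists are examples, not a complete policy.

    # Illustrative only: flag names whose visible "type" (with the final,
    # hidden extension stripped) looks like data while the real, registered
    # extension is executable.  Extension lists here are examples.
    from pathlib import Path

    EXECUTABLE = {".exe", ".com", ".bat", ".scr", ".pif", ".vbs", ".js"}
    DATA_LOOKING = {".jpg", ".gif", ".txt", ".doc", ".pdf"}

    def looks_spoofed(name: str) -> bool:
        suffixes = [s.lower() for s in Path(name).suffixes]
        if not suffixes or suffixes[-1] not in EXECUTABLE:
            return False
        return len(suffixes) >= 2 and suffixes[-2] in DATA_LOOKING

    for name in ("HOLIDAY.JPG.exe", "report.pdf", "setup.exe"):
        print(name, "->", "suspect" if looks_spoofed(name) else "ok")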

Avoiding bad design

How would I prevent bad designs reaching the market, and thus creating an installed vendor/user base that creates problems when the design is changed?

  • Keep core safety axioms in mind
  • Maintain old/new team continuity
  • Reassess logic of existing practices
  • Don't force pro-IT mindset on consumers
  • Assume bad intent for any external material
  • Make no assumptions of vendor trustworthiness

The classic safe hex rules...

  • Nothing runs on this system unless I choose to run it
  • I will assess and decide on all content before running it

...seem old and restrictive, but breaking these underlies most malware exploits.

27 August 2007

The Word Is Not The World

We can't use language to describe "the all".

Stated as baldly as that, this looks rather Zen, doesn't it?

The point being that language goes about defining particulars, i.e. "is this, is not that", and thus chips its way away from "the all".

In number theory terms, it's the difference between infinity and very large; of a limit, and test values that tend towards that limit.

The Waking Hour

17 August 2007

Norton Life Sentence


This post is about Packard Bell, Norton Antivirus, Norton Internet Security and OEM bundling.  For many readers, those four are all "yuk" items already...

Formal maintenance

Every "WTF" (i.e. ill-defined complaints, or just in an unknown state) PC that comes in, gets the formal treatment; 24 hours of MemTest86 with substituted boot CDR to detect spontaneous reboots, Bart boot HD Tune, and Bart booted formal malware scans.

This laptop cannot perform the RAM test because it keeps switching itself off, presumably because it is "idle" (no keyboard, HD, CD, LAN, mouse etc. interrupts).  CMOS Setup shows no facility to manage such behavior, which is disabled in Windows already.  Strike 1, Packard Bell.

The hard drive and file system are fine, and absolutely no malware at all was found on multiple formal av and anti-"spyware" scans, nor in four anti-"spyware" scans done in Safe Cmd.  Spybot did note that the three Windows Security Center alerts were overridden, and this was later confirmed to be a Norton effect.

Specs and software

This is a fairly high-spec laptop: Mobile Celeron at 1.5GHz, 1G RAM (!), XP Pro SP2, but a puny 45G hard drive with 4G stolen for the OEM's "special backup" material.  The date stamp on the Windows base directory is 17 March 2006, which matches that of the SVI and "Program Files" directories too.

It has Norton Internet Security 7.0.6.17 OEM(90) and Norton Antivirus 2004 10.0.1.13 OEM(90).  That's from the Help in these products; the same Help describes using Add/Remove to uninstall them. 

Attempted uninstall

Both programs are definitely present and running; in fact, one gets nags every few minutes about antivirus being out of date, and firewall being disabled.  A check confirms both to be true; neither Norton nor XP firewall is enabled, and Norton's subscription has expired.

However, Add/Remove shows no Norton entries other than Live Update.  In fact, the expected slew of OEM bundleware is not there.  A lethally-ancient Sun Java JRE 1.4.xx was found and uninstalled.
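
One can cross-check what Add/Remove really has registered by listing display names under the standard Uninstall key; a read-only sketch in Python.  Products whose installers never wrote an entry there simply won't appear - which is consistent with what Add/Remove itself showed on this laptop.

    # Read-only: list Add/Remove entries and filter for Norton/Symantec.
    import winreg

    UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

    def registered_products(match: str):
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
            subkeys, _values, _stamp = winreg.QueryInfoKey(root)
            for i in range(subkeys):
                name = winreg.EnumKey(root, i)
                try:
                    with winreg.OpenKey(root, name) as entry:
                        display, _ = winreg.QueryValueEx(entry, "DisplayName")
                except OSError:
                    continue  # entry without a display name
                if match.lower() in str(display).lower():
                    yield display

    for product in registered_products("norton"):
        print(product)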

Start Menu shows an "Internet and security" flyout with icons for Norton Antivirus and Internet Security.  No icons to uninstall these products from there.

What I did find in a "Packard Bell Support" Start Menu flyout was a Smart Restore center, from which bundleware could be highlighted and installed or uninstalled.  There was an alert to disable Norton's protection before doing this (more on that later), but either way, clicking Uninstall did nothing (no visible UI effect) and clicking OK after that appeared to install Norton 2004 again.

To be continued...

15 August 2007

Duplicate User Accounts


On Sat, 11 Aug 2007 20:58:01 -0700, SteveS

>My laptop is from Fujitsu and it came with OmniPass software (the
>fingerprint scanner software to log in).  I saw other postings elsewhere
>about it duplicating users on the login screen.  I uninstalled the software,
>rebooted - problem fixed (no more duplicate users).  I reinstalled the
>software, rebooted, the duplicate users did not show up.  I think it stems
>from the upgrade I did from Home Premium to Ultimate and had that software
>installed. 

Yes; any "repair install" of XP will prompt you to create new user accounts even though you already have user accounts, and registry settings that clearly indicate these accounts are in use.

If you then enter the same name(s) as existing accounts, then new accounts are created with the same name.

Vista may avoid this conundrum, but fall into others.


Behind the scenes, the real names are not the same, because the real names are something quite different to what Windows shows you.  Messy, but key to preserving continuity while allowing you to change the account name after it's created.

Specifically, you encounter not one, nor two, but three name sets:
  - the "real" unique identifier, of the form S-n-n-nn-nnn...
  - the name of the account base folder in Users or D&S
  - the name as seen at logon or when managing users

In the case of account duplication in XP, you will have:
  - unique and unrelated S-n-n-nn-nnnn... identifiers
  - old Name and new Name.PCName account folders
  - the same name at logon and account management

The risks of deleting the wrong material should be obvious.
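
To see the first two name sets side by side before deleting anything, one can walk the ProfileList key, which maps each S-n-n-nn... identifier to its profile folder.  A read-only sketch in Python; the logon name still has to be checked in the account management UI as usual.

    # Read-only reconnaissance: map each account SID to its profile folder,
    # so Name vs. Name.PCName duplicates show up next to their unrelated SIDs.
    import winreg

    PROFILE_LIST = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PROFILE_LIST) as root:
        subkeys, _values, _stamp = winreg.QueryInfoKey(root)
        for i in range(subkeys):
            sid = winreg.EnumKey(root, i)
            try:
                with winreg.OpenKey(root, sid) as entry:
                    path, _ = winreg.QueryValueEx(entry, "ProfileImagePath")
            except OSError:
                continue  # subkey without a profile path
            print(f"{sid}  ->  {path}")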

Public Conversations

Malware: Avoid, Clean, or Rebuild?


On Sun, 12 Aug 2007 09:58:03 -0700, MrSlartybartfast

>Yes, creating an image of a hard drive which has malware would include the
>malware in the image.  When copying this image back to the hard drive, the
>malware would also be copied back resulting in net gain of zero.

This is why "just backup!" (as glibly stated) is as useless as "just don't get viruses!" or "if you get infected, clean the virus!" etc.

All of these approaches work, but have complexity within them that makes for YMMV results.  The complexity is similar across all three contexts: how one scopes out the bad guys.  The mechanics of meeting that inescapable challenge vary between the three "solutions".

>When I reinstall Windows, I reinstall off the original DVD which has
>no malware, unless you call Windows itself malware :)

This is using time as the great X-axis, i.e. the OS code base is as old as possible, therefore excludes the malware.  And so, the PC is known to be clean.

But it also lacks every code patch needed to keep it that way, in the face of direct exploits a la Lovesan or Sasser etc. and to patch those, you'd have to expose this unpatched PC to the Internet.

It's also bereft of any applications and data.  Presumably one can do the same with applications and drivers as with the OS; install known-good baseline code from CDs and then patch these online, or re-download apps and drivers from the 'net.

There's also no data, and another crunch comes here, because you probably don't want a data set that's certain to be too old to be infected; you want your most recent backup, which is the one most likely to be malware-tainted.  How to scope data from malware?

Even though MS pushes "just" wipe and rebuild as the malware panacea, they undermine it at these points of failure:
  - they generally don't ship replacement code on CDs or DVDs
  - they don't attempt to separate data, code and incoming material

The first has improved, what with XP SP2 being released as a CD, and with XP SP2 defaulting to firewall on.  

There's little or no progress on the second, though; still no clearly visible distinction between data and code, still no type discipline so malware can sprawl across file types and spoof the user and OS into trusting these, incoming material is still hidden in mail stores and mixed with "documents" etc.

In Vista, just what is backed up and what is not is even more opaque, as there's little or no scoping by location at all.

>If the malware is on drive D:\ then it possibly could be reactivated on to
>drive C:\.  You normally need to access the files on D:\ to reactivate the
>malware.

For values of "you" that includes the OS as a player.  Even with a wipe-and-rebuild that ensures no registry pointers to code on D:, there can still be code autorun from D: via Desktop.ini, \Autorun.inf, or the exploitation of any internal surfaces.

Such surfaces may present themselves to the material:
  - when you do nothing at all, e.g. indexers, thumbnailers etc.
  - when you "list" files in "folders"
  - when a file name is displayed
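
A minimal read-only sketch of the sort of inventory I'd want before "listing" such a volume in Explorer: report \Autorun.inf in the root and any Desktop.ini further down, so they can be reviewed (not cleaned) first.  The drive letter is an example.

    # Read-only: report the obvious auto-run hooks on a data volume
    # (\Autorun.inf in the root, Desktop.ini anywhere below).
    import os

    VOLUME = "D:\\"   # example drive letter

    def autorun_hooks(volume: str):
        root_autorun = os.path.join(volume, "Autorun.inf")
        if os.path.isfile(root_autorun):
            yield root_autorun
        for dirpath, _dirs, files in os.walk(volume):
            for name in files:
                if name.lower() == "desktop.ini":
                    yield os.path.join(dirpath, name)

    for hook in autorun_hooks(VOLUME):
        print(hook)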

>No antivirus is perfect either, antivirus programs can often miss finding
>some malware.  I tend to find antivirus programs clunky and annoying and
>prefer not to use them.

I use them, as I think most users do.  If you "don't need" an av, then clearly you have solved the "don't get viruses" problem, and the contexts of "clean the virus" and "rebuild and restore data" don't arise.  If they do arise, you were wrong in thinking "don't get viruses" was solved, and maybe you should rethink "I don't need an av" (while I do agree that av will miss things).

Your nice freshly-built PC has no av, or an av installed from CD that has an update status far worse than whatever was in effect when you were infected.  To update the av, you have to take this clean, unpatched, un-protected-by-av system online...

>On my D:\ I compress my files individually which makes it hard for malware
>to emerge. 

That helps.  It also helps if av can traverse this compression for the on-demand scans you'd want to do between rebuilding C: and installing and updating av, and doing anything on D: or restoring "data".

>It is a painful process and takes a few hours so I do not do this very often.

I should hope not; it's "last resort".  If you have no confidence in the ability to detect or avoid malware, do you do this just when convenient, or whenever you "think you might be infected", or do you do it every X days so attackers have "only" X days in which they can harvest whatever they can grab off your PC?

>I  do find this much easier than trying to live with an antivirus
>program installed.  My choice is not for everyone

It might have been a best-fit in the DOS era, when "don't get viruses" was as easy as "boot C: before A: and don't run .EXE, .COM and .BAT files".  By now, a single resident av poses little or no system impact, whereas the wipe-and-rebuild process is a PITA.

Frankly, doing a wipe-and-rebuild every now and then on a PC that's probably clean anyway, will increase the risks of infection.

Do the maths; you either get infected so often that the risks of falling back to unpatched code hardly make things worse, in which case whatever you (blindly) do is equally useless, or your approach works so well that falling back to unpatched code is your single biggest risk of infection, and to improve things, you should stop doing that.  If you have no ability to tell whether you are or have ever been infected, you can't distinguish between these states.

>as I said before I have no valuable information stored on
>my PC, I do not own a credit card and do not use internet
>banking.  If I have malware then I can live with it.

Most of us want better results than that, and generally attain them.

Why are we reading this advice again?

>The AUMHA forum you linked to as a recommendation for Nanoscan and Totalscan
>does nothing for me, it is hardly a review.  Panda Software is well known, so
>this is not one of the fake virus scans which is on the web.  Out of
>curiosity I started to run it anyway, I did not continue since I do not yet
>fully understand the software and am not prepared to install the files on my
>PC.  You may use this if you wish but it is not for me.

I agree with you there, especially if you suspect the PC is infected.  How do you know the site you reached, is not a malware look-alike that resident malware has spoofed you to?  Is it really a good idea to...
  - disable resident av
  - run Internet Explorer in admin mode so as to drop protection
  - say "yes" to all ActiveX etc. prompts
  - allow the site to drop and run code
  - stay online while this code "scans" all your files
...as the advice at such sites generally suggests?

>The bots which harvest email addresses off the internet are just that, bots.
> They scour the entire internet, not just microsoft newsgroups.  To be safe,
>never use your real name, never give your address, phone number or contact
>details, create temporary email accounts to use to sign up to forums and
>newsgroups,

Bots are unbounded, because:
  - they can update themselves
  - they facilitate unbounded interaction from external entities

Those external entities may be other bots or humans.  In essence, an active bot dissolves confidence in the distinction between "this system" and "the Internet" (or more accurately, "the infosphere", as local attacks via WiFi may also be facilitated).

Public Conversations

14 August 2007

New User Account Duhfaults

From...

http://www.spywarepoint.com/forums/t26963-p7-microsoft-zero-day-security-holes-being-exploited.html

On Thu, 28 Sep 2006 21:24:32 -0600, Dan wrote:
>cquirke (MVP Windows shell/user) wrote:


>> Defense in depth means planning for how you get your system back; you
>> don't just faint in shock and horror that you're owned, and destroy
>> the whole system as the only way to kill the invader.


>> It's absolutely pathetic to have to tell posters "well, maybe you have
>> 'difficult' (i.e., competently-written) malware; there's nothing you
>> can do, 'just' wipe and re-install" because our toolkit is bare.


>The school computers (XP Pro. ones -- the school also has 98SE
>computers) where I work were all configured by someone who did
>not know what they were doing. They are have the remote assistance
>boxes checked and that is like saying to everyone "come on in to this
>machine and welcome to the party" This setting is just asking for
>trouble and yet the person or people who originally set up these
>machines configured them in this manner.


All your setup dudes did wrong was to install the OS while leaving MS duhfaults in place. By duhfault, XP will:
- full-share everything on all HDs to networks (Pro, non-null pwds)
- perform no "strength tests" on account passwords (see above)
- disallow Recovery Console from accessing HDs other than C:
- disallow Recovery Console from copying files off C:
- wave numerous services e.g. RPC, LSASS at the Internet
- do so with no firewall protection (fixed in SP2)
- allow software to disable firewall
- automatically restart on all system errors, even during boot
- automatically restart on RPC service failures
- hide files, file name extensions and full directory paths
- always apply the above lethal defaults in Safe Mode
- facilitate multiple integration points into Safe Mode
- allow dangerous file types (.EXE, etc.) to set their own icons
- allow hidden content to override visible file type cues
- dump incoming messenger attachments in your data set
- dump IE downloads in your data set
- autorun code on CDs, DVDs, USB storage and HD volumes
- allow Remote Desktop and Remote Assistance through firewall
- allow unsecured WiFi
- automatically join previously-accepted WiFi networks
- waste huge space on per-user basis for IE cache
- duplicate most of the above on a per-account basis
- provide no way to override defaults in new account prototype

Every time one "just" reinstalls Windows (especially, but not always only, if one formats and starts over), many or all of the above settings will fall back to default again. Couple that with a loss of patches, and you can see why folks who "just" format and re-install, end up repeating this process on a regular basis.

Also, every time a new user account is created, all per-account settings start off with MS defaults and you have to re-apply your settings all over again. If you limit the account rights, as we are urged to do, then often these settings slip back to MS defaults and remain there - so I avoid multiple and limited user accounts altogether, and prefer to impose my own safety settings.
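
As a read-only check, a few of the per-user defaults above can at least be audited from a script for whichever account runs it.  A Python sketch; the value names are the standard Explorer settings, but the "preferred" values are my own safety choices, not Microsoft's:

    # Read-only audit of a few per-user defaults for the current account.
    # "Preferred" values are my own safety choices, not MS defaults.
    import winreg

    CHECKS = [
        (r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced",
         "HideFileExt", 0, "0 = show file name extensions"),
        (r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced",
         "ShowSuperHidden", 1, "1 = show protected system files"),
        (r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer",
         "NoDriveTypeAutoRun", 0xFF, "0xFF = no autorun on any drive type"),
    ]

    for path, value, preferred, note in CHECKS:
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
                data, _ = winreg.QueryValueEx(key, value)
        except OSError:
            data = None  # absent: Windows falls back to its built-in default
        status = "ok" if data == preferred else "review"
        print(f"{value}: {data!r} -> {status}  ({note})")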

>-- Risk Management is the clue that asks:
>    "Why do I keep open buckets of petrol next to all the
>     ashtrays in the lounge, when I don't even have a car?"
>----------------------- ------ ---- --- -- - - - -

Public Conversations

Free Users Need Control!


From...

http://www.spywarepoint.com/forums/t26963-p7-microsoft-zero-day-security-holes-being-exploited.html

On Tue, 26 Sep 2006 07:46:22 -0400, "karl levinson, mvp"

>All operating systems do that. They are designed to launch code at boot
>time by reading registry values, text files, etc. Because those registry
>values are protected from unauthorized access by permissions, someone would
>have to already own your system to modify those values, wouldn't they?


Sure, but the wrong entities come to own systems all the time. Defense in depth means planning for how you get your system back; you don't just faint in shock and horror that you're owned, and destroy the whole system as the only way to kill the invader.

It's tougher for pro-IT, because they've long been tempted into breaking the rule about never letting anything trump the user at the keyboard. By now, they need remote access and admin, as well as automation that can be slid past the user who is not supposed to have the power to block it, in terms of the business structure.

But the rest of us don't have to be crippled by pro-IT's addiction to central and remote administration, any more than a peacetime urban motorist needs an 88mm cannon in a roof-top turret. We need to be empowered to physically get into our systems, and identify and rip out every automated or remotely-intruded PoS that's got into the system.

It's absolutely pathetic to have to tell posters "well, maybe you have 'difficult' (i.e., competently-written) malware; there's nothing you can do, 'just' wipe and re-install" because our toolkit is bare.

Public Conversations

On User Rights, Safe Mode etc.

Edited for spelling; from...

http://www.spywarepoint.com/forums/t26963-p8-microsoft-zero-day-security-holes-being-exploited.html

On Fri, 29 Sep 2006 23:17:02 -0400, "Karl Levinson, mvp"
>"cquirke (MVP Windows shell/user)" wrote in


>>>All operating systems do that. They are designed to launch code at boot
>>>time by reading registry values, text files, etc. Because those registry
>>>values are protected from unauthorized access by permissions, someone
>>>would have to already own your system to modify those values, wouldn't they?


The weakness here is that anything that runs during the user's session is deemed to have been run with the user's intent, and gets the same rights as the user. This is an inappropriate assumption when there are so many by-design opportunities for code to run automatically, whether the user intended to do so or not.

>> Sure, but the wrong entities come to own systems all the time.


>My point is that this one example here doesn't seem to be a vulnerability if
>it requires another vulnerability in order to use it.


Many vulnerabilities fall into that category, often because the extra requirement was originally seen as sufficient mitigation.  Vulnerabilities don't have to facilitate primary entry to be significant; they may escalate access after entry, or allow the active malware state to persist across Windows sessions, etc.

>This isn't a case of combining two vulnerabilities to compromise a
>system; it's a case of one unnamed vulnerability being used to
>compromise a system, and then the attacker performs some other
>action, specifically changing registry values.


>If this is a vulnerability, then the ability of Administrators to create new
>user accounts, change passwords etc. would also be a vulnerability.


OK, now I'm with you, and I agree with you up to a point. I dunno where the earlier poster got the notion that Winlogon was there to act as his "ace in the hole" for controlling malware, as was implied.

>> Defense in depth means planning for how you get your system back; you
>> don't just faint in shock and horror that you're owned, and destroy
>> the whole system as the only way to kill the invader.


>That's a different issue than the one we were discussing. The statement
>was, winlogon using registry values to execute code at boot time is a
>vulnerability. I'm arguing that it is not.


I agree with you that it is not - the problem is the difficulty that the user faces when trying to regain control over malware that is using Winlogon and similar integration points.

The safety defect is that:
- these integration points are also effective in Safe Mode
- there is no maintenance OS from which they can be managed

We're told we don't need a HD-independent mOS because we have Safe Mode, ignoring the possibility that Safe Mode's core code may itself be infected. Playing along with that assertion, we'd expect Safe Mode to disable any 3rd-party integration, and to provide a UI through which these integration points can be managed.

But this is not the case - the safety defect is that once software is permitted to run on the system, the user lacks the tools to regain control from that software. Couple that with the Windows propensity to auto-run material either by design or via defects, and you have what is one of the most common PC management crises around.
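
For what it's worth, the lists that decide what loads in Safe Mode live under the SafeBoot key, and can at least be read so that third-party (or malware) additions become visible.  A read-only Python sketch - which of course has to be run from full Windows, which is exactly the limitation being complained about here:

    # Read-only: dump what is allowed to load in Safe Mode (Minimal and
    # Network variants).  Anything a third party added here will run even
    # in "Safe" Mode, so it is worth a second look.
    import winreg

    SAFEBOOT = r"SYSTEM\CurrentControlSet\Control\SafeBoot"

    for variant in ("Minimal", "Network"):
        print(f"[SafeBoot\\{variant}]")
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            f"{SAFEBOOT}\\{variant}") as root:
            subkeys, _values, _stamp = winreg.QueryInfoKey(root)
            for i in range(subkeys):
                name = winreg.EnumKey(root, i)
                with winreg.OpenKey(root, name) as entry:
                    try:
                        kind, _ = winreg.QueryValueEx(entry, "")
                    except OSError:
                        kind = ""
                print(f"  {name}  ({kind})")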

>Besides, it's a relatively accepted truism that once an attacker has root,
>system or administrator privileges on any OS, it is fairly futile to try to
>restrict what actions s/he can perform. Anything a good administrator can
>do, a bad administrator can undo.


That's a safety flaw right there.

You're prolly thinking from the pro-IT perspective, where users are literally wage-slaves - the PC is owned by someone else, the time the user spends on the PC is owned by someone else, and that someone else expects to override user control over the system.

So we have the notion of "administrators" vs. "users". Then you'd need a single administrator to be able to manage multiple PCs without having to actually waddle over to all those keyboards - so you design in backdoors to facilitate administration via the network.

Which is fine - in the un-free world of mass business computing.

But the home user owns their PCs, and there is no-one else who should have the right to usurp that control. (Even) creditors and police do not have the right to break in, search, or seize within the user's home.

So what happens when an OS designed for wage-slavery is dropped into free homes as-is? Who is the notional "administrator"? Why is the Internet treated as if it were a closed and professionally-secured network? There's no "good administrators" and "bad administrators" here; just the person at the keyboard who should have full control over the system, and other nebulous entities on the Internet who should have zero control over the system.

Whatever some automated process or network visitation has done to a system, the home user at the keyboard should be able to undo.

Windows XP Home is simply not designed for free users to assert their rights of ownership, and that's a problem deeper than bits and bytes.

Public Conversations

On Win9x, SR, mOS II, etc.


Lifted from ...

http://www.spywarepoint.com/forums/t26963-p9-microsoft-zero-day-security-holes-being-exploited.html

On Sun, 01 Oct 2006 20:45:23 -0600, "Dan W." <spamyou@user.nec> wrote:
>karl levinson, mvp wrote:
>> "Dan W." <spamyou@user.nec> wrote in message


>> Fewer vulnerabilities are being reported for Windows 98 because Windows 98
>> is old and less commonly used, and vulns found for it get you less fame


More to the point is that vulnerable surfaces are less-often exposed to clickless attack - that's really what makes Win9x safer.

You can use an email app that displays only message text, without any inline content such as graphics etc. so that JPG and WMF exploit surfaces are less exposed. Couple that with an OS that doesn't wave RPC, LSASS etc. at the 'net and doesn't grope material underfoot (indexing) or when folders are viewed ("View As Web Page" and other metadata handlers) and you're getting somewhere.

For those who cannot subscribe to the "keep getting those patches, folks!" model, the above makes a lot of sense.

>> Didn't XP expand on and improve the system restore feature to a level not
>> currently in 98 or ME?


There's no SR in Win98, tho that was prolly when the first 3rd-party SR-like utilities started to appear. I remember two of these that seemed to inform WinME-era SR design.

No-one seemed that interested in adding these utilities, yet when the same functionality was built into WinME, it was touted as reason to switch to 'ME, and when this functionality fell over, users were often advised to "just" re-install to regain it. I doubt if we'd have advised users to "just" re-install the OS so that some 3rd-party add-on could work again.

XP's SR certainly is massively improved over WinME - and there's so little in common between them that it's rare one can offer SR management or tshooting advice that applies to both OSs equally.


I use SR in XP, and kill it at birth in WinME - that's the size of the difference, though a one-lunger (one big doomed C: installation) may find the downsides of WinME's SR to be less of an issue.

>>> about Microsoft and its early days to present time. The early Microsoft
>>> software engineers nicknamed it the Not There code since it did not have
>>> the type of maintenance operating system that Chris Quirke, MVP fondly
>>> talks about in regards to 98 Second Edition.


>> If the MOS being discussed for Win 98 is the system boot disk floppy, that
>> was a very basic MOS and it still works on Windows XP just as well as it
>> ever did on Windows 98. [Sure, you either have to format your disk as FAT,
>> or use a third party DOS NTFS driver.]


That was true, until we crossed the 137G limit (where DOS mode is no longer safe). It's a major reason why I still avoid NTFS... Bart works so well as a mOS for malware management that I seldom use DOS mode for that in XP systems, but data recovery and manual file system maintenance remain seriously limited for NTFS.
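
(For what it's worth, the 137G figure is just the arithmetic of 28-bit LBA addressing in the older ATA spec: 2^28 addressable sectors x 512 bytes per sector = 137,438,953,472 bytes, or about 137 GB in decimal terms. Beyond that boundary, disk access that isn't 48-bit-LBA-aware can wrap around and hit the wrong sectors, which is why DOS mode stops being safe there.)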

>> I think Chris really wants not that kind of MOS but a much bigger and
>> better one that has never existed.


Well, ever onward and all that ;-)

Bart is a bigger and better mOS, though it depends on how you build it (and yes, the effort of building it is larger than for DOS mode solutions). You can build a mOS from Bart that breaks various mOS safety rules (e.g. falls through to boot HD on unattended reset, automatically writes to HD, uses Explorer as shell and thus opens the risk of malware exploiting its surfaces, etc.).

I'm hoping MS WinPE 2.0, or the subset of this that is built into the Vista installation DVD, will match what Bart offers. Initial testing suggests it has the potential, though some mOS safety rules have been broken (e.g. fall-through to HD boot, requires visible Vista installation to work, etc.).

The RAM testing component is nice but breaks so many mOS safety rules so badly that I consider it unfit for use:
- spontaneous reset will reboot the HD
- HD is examined for Vista installation before you reach the test
- a large amount of UI code required to reach the test
- test drops the RAM tester on HD for next boot (!!)
- test logs results to the HD (!!)
- you have to boot full Vista off HD to see the results (!!!)

What this screams to me, is that MS still doesn't "get" what a mOS is, or how it should be designed. I can understand this, as MS WinPE was originally intended purely for setting up brand-new, presumed-good hardware with a fresh (destructive) OS installation.

By default, the RAM test does only one or a few passes; it takes under an hour or so - and thus is only going to detect pretty grossly bad RAM. Grossly bad RAM is unlikely to run an entire GUI reliably, and can bit-flip any address to the wrong one, or any "read HD" call into a "write HD" call. The more code you run, the higher the risk of data corruption, and NO writes to HD should ever be done while the RAM is suspected to be bad (which is, after all, why we are testing it).

A mOS boot should never automatically chain to HD boot after a time out, because the reason you'd be using a mOS in the first place is because you daren't boot the HD. So when the mOS disk boots, the only safe thing to do is quickly reach a menu via a minimum of code, and stop there, with no-time-out fall-through.

It's tempting to fall-through to the RAM test as the only safe option, but that can undermine unattended RAM testing - if the system spontaneously resets during such testing, you need to know that, and it's not obvious if the reboot restarts the RAM test again.

Until RAM, physical HD and logical file system are known to be safe, and it's known that no deleted material needs to be recovered, it is not safe to write to any HD. That means no page file, no swap, and no "drop and reboot" methods of restarting particular tests.

Until the HD's contents are known to be malware-free, it is unsafe to run any code off the HD. This goes beyond not booting the HD, or looking for drivers on the HD; it also means not automatically groping material there (e.g. when listing files in a folder) as doing so opens up internal surfaces of the mOS to exploitation risks.


Karl's right, tho... I'm already thinking beyond regaining what we lost when hardware (> 137G, USB, etc.) and NTFS broke the ability to use DOS mode as a mOS, to what a purpose-built mOS could offer.

For example, it could contain a generic file and redirected-registry scanning engine into which av vendors' scanning modules could be plugged. It could offer a single UI to manage these (i.e. "scan all files", "don't automatically clean" etc.) and could collate the results into a single log. It could improve efficiency by applying each engine in turn to material that is read once, rather than the norm of having each av scanner read the material all over again.
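
To make that concrete, here's a minimal sketch of the kind of plug-in scanning front end I have in mind; the interface, class names and the Detection record are invented purely for illustration, not any vendor's actual API.

```python
# Hypothetical sketch of a plug-in scanning front end for a mOS; the engine
# interface and names here are illustrative only.
import os
from dataclasses import dataclass
from typing import List, Protocol, Sequence


@dataclass
class Detection:
    engine: str   # which engine flagged the item
    path: str     # file that was flagged
    verdict: str  # the engine's name for what it found


class ScanEngine(Protocol):
    name: str

    def scan(self, path: str, data: bytes) -> List[Detection]:
        """Return detections for one file; engines never write to the HD."""
        ...


def scan_tree(root: str, engines: Sequence[ScanEngine]) -> List[Detection]:
    """Read each file once and hand the same bytes to every engine in turn,
    collating all results into a single log (the mOS owns UI and logging)."""
    log: List[Detection] = []
    for dirpath, _subdirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    data = fh.read()  # one read, shared by all engines
            except OSError:
                continue              # unreadable file: skip it, never "fix" it
            for engine in engines:
                log.extend(engine.scan(path, data))
    return log
```

The point of the single shared read is not just efficiency; it also keeps all disk access under the mOS's control, with the engines reduced to pure detect-and-report modules.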

MS could be accused of foreclosing opportunities to av vendors (blocking kernel access, competing One Care and Defender products), but this sort of mOS design could open up new opportunities.

Normally, the av market is "dead man's shoes"; a system can have only one resident scanner, so the race is on to be that scanner (e.g. OEM bundling deals that reduce per-license revenue). Once users have an av, it becomes very difficult to get them to switch - they can't try out an alternate av without uninstalling what they have, and no-one wants to do that. It's only when feeware av "dies" at the end of a subscription period, that the user will consider a switch.

But a multi-av mOS allows av vendors to have their engines compared, at a fairly low development cost. They don't have to create any UI at all, because the mOS does that; all they have to do is provide a pure detection and cleaning engine, which is their core competency anyway.

Chances are, some av vendors would prefer to avoid that challenge :-)

>> XP also comes with a number of restore features such as Recovery
>> Console and the Install CD Repair features.


They are good few-trick ponies, but they do not constitute a mOS. They can't run arbitrary apps, so they aren't an OS, and if they aren't an OS, then by definition they aren't a mOS either.

As it is, RC is crippled as a "recovery" environment, because it can't access anything other than C: and can't write to anywhere else. Even before you realise you'd have to copy files off one at a time (no wildcards, no subtree copy), this kills any data recovery prospects.

At best, RC and OS installation options can be considered "vendor support obligation" tools, i.e. they assist MS in getting MS's products working again. Your data is completely irrelevant.

It gets worse; MS accepts crippled OEM OS licensing as being "Genuine" (i.e. MS got paid) even if they provide NONE of that functionality.

The driver's not even in the car, let alone asleep at the wheel :-(

>> I never use those or find them very useful for security, but they're
>> way more functional and closer to an MOS than the Win98 recovery
>> floppy or anything Win98 ever had. 98 never had a registry
>> editor or a way to modify services like the XP Recovery Console.


They do different things.

RC and installation options can regain bootability and OS functionality, and if you have enabled Set commands before the crisis you are trying to manage, you can copy off files one at a time. They are limited to that, as no additional programs can be run.

In contrast, a Win98EBD is an OS, and can run other programs from diskette, RAM disk or CDR. Such programs include Regedit (non-interactive, i.e. import/export .REG only), Scandisk (interactive file system repair, which NTFS still lacks), Odi's LFN tools (copy off files in bulk, preserving LFNs), Disk Edit (manually repair or re-create file system structures), and a number of av scanners.

So while XP's tools are bound to getting XP running again, Win98EBD functionality encompasses data recovery, malware cleanup, and hardware diagnostics. It's a no-brainer as to which I'd want (both!)

>>> that at the bare bones level the source code of 9x is more secure


>> It depends on what you consider security.


That's the point I keep trying to make - what Dan refers to is what I'd call "safety", whereas what Karl's referring to is what I'd call "security". Security rests on safety, because the benefit of restricting access to the right users is undermined if what happens is not limited to what these users intended to happen.

>> Win98 was always crashing and unstable,


Er... no, not really. That hasn't been my mileage with any Win9x, compared to Win3.yuk - and as usual, YMMV based on what your hardware standards are, and how you set up the system. I do find XP more stable, as I'd expect, given NT's greater protection for hardware.

>> because there was no protection of memory space from bad apps or
>> bad attackers.


Mmmh... AFAIK, that sort of protection has been there since Win3.1 at least (specifically, the "386 Enhanced" mode of Win3.x). Even DOS used different memory segments for code and data, though it didn't use the 386's protection features to police this separation.

IOW, the promise that "an app can crash, and all that happens is that app is terminated, the rest of the OS keeps running!" has been made for every version of Windows since Win3.x - it's just that the reality always falls short of the promise. It still does, though it gets a little closer every time.

If anything, there seems to be a back-track on the concept of data vs. code separation, and this may be a consequence of the Object-Orientated model. Before, you'd load some monolithic program into its code segment, which would then load data into a separate data segment. Now you have multiple objects, each of which can contain their own variables (properties) and code (methods).

We're running after the horse by band-aiding CPU-based No-Execute trapping, so that when (not if) our current software design allows "data" to spew over into code space, we can catch it.

>> Microsoft's security problems have largely been because of backwards
>> compatibility with Windows 9x, DOS and Windows NT 4.0. They feel, and I
>> agree, that Microsoft security would be a lot better if they could abandon
>> that backwards compatibility with very old niche software, as they have been
>> doing gradually.


The real millstone was Win3.yuk (think heaps, co-operative multitasking). Ironically, DOS apps multitask better than Win16 ones, as each DOS app lives in its own VM and is pre-emptively multi-tasked.

64-bit is the opportunity to make new rules, as Vista is doing (e.g. no intrusions into kernel allowed). I'm hoping that this will be as beneficial as hardware virtualization was for NT.

Win9x apps don't cast as much of a shadow, as after all, Win9x's native application code was to be the same as NT's. What is a challenge is getting vendors to conform to reduced user rights, as up until XP, they could simply ignore this.

There's also the burden of legacy integration points, from Autoexec.bat through Win.ini through the various fads and fashions of Win9x and NT and beyond. There's something seriously wrong if MS is unable to enumerate every single integration point, and provide a super-MSConfig to manage them all from a single UI.
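
To give a feel for how small a start on this is, here's a hypothetical Python sketch that lists the entries under a few of the best-known autostart registry keys; a real "super-MSConfig" would have to cover hundreds of integration points beyond these.

```python
# Minimal sketch: list entries in a few well-known Windows autostart keys.
# This covers only a tiny fraction of the integration points discussed above.
import winreg

AUTOSTART_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\RunOnce"),
]

def list_autostarts():
    for hive, subkey in AUTOSTART_KEYS:
        try:
            key = winreg.OpenKey(hive, subkey)
        except OSError:
            continue  # the key may not exist on a given system
        with key:
            hive_name = "HKLM" if hive == winreg.HKEY_LOCAL_MACHINE else "HKCU"
            print(f"[{hive_name}\\{subkey}]")
            i = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, i)
                except OSError:
                    break  # no more values under this key
                print(f"  {name} = {value}")
                i += 1

if __name__ == "__main__":
    list_autostarts()
```

Even this trivial listing stops far short of services, drivers, shell extensions, browser helper objects, scheduled tasks and the rest.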

>Classic Edition could be completely compatible with the older software
>such as Windows 3.1 programs and DOS programs. Heck, Microsoft
>could do this in a heartbeat without too much trouble.


Think about that. Who sits in exactly the same job for 12 years?

All the coders who actually made Win95, aren't front-line coders at MS anymore. They've either left, or they've climbed the ladder into other types of job, such as division managers, software architects etc. To the folks who are currently front-line coders, making Vista etc., Win9x is as alien as (say) Linux or OS/2.

To build a new Win9x, MS would have to re-train a number of new coders, which would take ages, and then they'd have to keep this skills pool alive as long as the new Win9x were in use. I don't see them wanting to do that, especially as they had such a battle to sunset Win9x and move everyone over to NT (XP) in the first place.

Also, think about what you want from Win9x - you may find that what you really want is a set of attributes that are not inherently unique to Win9x at all, and which may be present in (say) embedded XP.


If you really do need the ability to run DOS and Win3.yuk apps, then you'd be better served by an emulator for these OSs.

This not only protects the rest of the system from the oddball activities of these platforms, but can also virtualize incompatible hardware and mimic the expected slower clock speeds more smoothly than direct execution could. This is important, as unexpected speed, and disparity between instruction times, are as much a reason for old software to fail on new systems as changes within Windows itself.

>I will do what it takes to see this come to reality.


Stick around on this, even if there's no further Win9x as such. As we can see from MS's first mOS since Win98 and WinME EBDs, there's more to doing this than the ability to write working code - there has to be an understanding of what the code should do in the "real world".
 

13 August 2007

CDRW/DVDRW Primer

It can be a bit confusing figuring out R vs. RW and formal authoring vs. packet writing, but I'll try.  This skips a lot of detail, and attempts to zoom on what you'd need to know if starting on writing CDs or DVDs in 2007...

Here's the executive summary:

                  R       RW
  Authored        Fine    Fine
  Packet-written  Can't   Sucks

R vs. RW disks

R(ecordable) disks are like writing in ink - once you've written, you cannot erase, edit or overwrite.

R(e)W(ritable) disks are like writing in pencil - you can rub out what you want to change, but what you write in there, has to fit between whatever else you have not rubbed out.

Authoring vs. packet writing

The "authoring" process is like setting up a printing press; you first lay out the CD or DVD exactly as you want it, then you splat that onto the disk.  You can fill the whole disk at once, like printing a book (single session), or you can fill the first part and leave the rest blank to add more stuff later, like a printed book that has blank pages where new stuff can be added (start a multisession).

The "packet writing" process is what lets you pretend an RW disk is like a "big diskette".  Material is written to disk in packets, and individual packets can be rubbed out and replaced with new packets, which pretty much mirrors the way magnetic disks are used.  This method is obviously not applicable to R disks.

RW disks can also be authored, but the rules stay the same; you either add extra sessions to a multi-session disk, or you erase the whole disk and author it all over again.

Overwriting

When you overwrite a file in a packet-writing system, you do so by freeing up the packets containing the old file and writing the new file into the same and/or other packets. The free space left over is increased by the size of the old file and reduced by the size of the new, each rounded up to a whole number of packets.

When you "overwrite" a file in a multisession (authored) disk, it is like crossing out the old material and writing new material underneath, as one is obliged to do when writing in ink.  The free space drops faster, because the space of the old file cannot be reclaimed and re-used, and because each session has some file system overhead, no matter how small the content.

Standards and tools

There are a number of different standard disk formats, all of which must be formally authored; audio CDs, movie DVDs, CD-ROMs and DVD-ROMs of various flavors.  In contrast, packet-written disk formats may be proprietary, and supported only by the software that created them.

Nero and Easy CD Creator are examples of formal authoring tools, and several media players can also author various media and data formats.

InCD and DirectCD are examples of packet-writing tools, which generally maintain a low profile in the SysTray, popping up only to format newly-discovered blank RW disks.  The rest of the time, they work their magic behind the scenes, so that Windows Explorer can appear to be able to use RW disks as "big diskettes".

Windows has built-in writer support, but the way it works can embody the worst of both authoring and packet-writing models.  I generally disable this support and use Nero instead.

Flakiness

RW disks and flash drives share a bad characteristic; limited write life.  In order to reduce write traffic to RW disks, packet writing software will hold back and accumulate writes, so these can be written back in one go just before the disk is ejected.

What this means is that packet written disks often get barfed by bad exits, lockups, crashes, and forced disk ejects.  Typically the disk will have no files on it, and no free space.  When this happens, you can either erase the disk and author it, or format the disk for another go at packet writing.  Erasing is faster, while formatting applies only to packet writing (it defines the packets).

I have found that packet writing software has been a common cause of system instability (that often ironically corrupts packet-written disks).  The unreliability, slow formatting, and poor portability across arbitrary systems have all led me to abandon packet writing in favor of formally authoring RW disks. 

Back to Basics

9 August 2007

Evolution vs. Intelligent Design

Evolution vs. Intelligent Design = non-issue.

Evolution does not define why things happen. 

It is a mechanism whereby some things that happen, may come to persist (and others, not). 

Graded belief

Human thinking appears to have at least two weaknesses; an automatic assumption of dualities (e.g. "Microsoft and Google are both large; Microsoft is bad, therefore Google must be good"), and an unwillingness to accept unknowns. 

You can re-state the second as a tighter version of the first, i.e. the singleton assumption, rather than duality.

We don't even have words (in English, at least) to differentiate between degrees of belief, i.e. weak ("all things equal, I think it is more likely that A of A, B, C is the truth") and strong ("iron is a metal") belief. 

And I fairly strongly believe we strongly believe too often, when a weaker degree of certainty would not only be more appropriate, but is a needed component in our quest for World Peace TM.

For example, religious folks have a fairly high certainty of what will happen after they die - but can we all accept that, as they have not yet died, something slightly less than "you're wrong, so I'll kill you" certainty should apply?

What is evolution?

My understanding of evolution, or Darwinian systems, revolves around the following components:

  • limited life span
  • selection pressure
  • imperfect reproduction

That's what I consider to be a classic evolutionary environment, but you may get variations; e.g. if entities change during the course of their lifetime, do not reproduce, but can die, then you could consider this as a Darwinian system that is ultimately set to run down like a wind-up clock as the number of survivors declines towards zero.

Implicit in that classic model is the notion of reproduction based on a self-definition that does not change during the course of an entity's lifetime (in fact, it defines that entity) but can change when spawning next-generation entities.
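
For what it's worth, those three components are enough to write a toy simulation; this sketch is a hypothetical illustration of the classic model only, with arbitrary numbers throughout.

```python
# Toy sketch of the classic Darwinian loop described above: limited life span,
# selection pressure, and imperfect reproduction of a fixed self-definition.
import random

TARGET = 0.75  # the value this toy environment happens to favour

def fitness(genome: float) -> float:
    return -abs(genome - TARGET)  # selection pressure: closer to TARGET is fitter

def generation(population: list) -> list:
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]  # the less-fit half never reproduces
    # Limited life span: every parent dies. Imperfect reproduction: each parent
    # spawns two slightly-mutated copies of its own (unchanging) self-definition.
    return [p + random.gauss(0, 0.05) for p in parents for _ in range(2)]

population = [random.random() for _ in range(20)]
for _ in range(50):
    population = generation(population)
print(max(population, key=fitness))  # drifts toward TARGET with no "intent" anywhere
```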

Evolution is blind

Evolution per se, is devoid of intent.  I don't know whether Darwin stressed this in his original writings, but I weakly believe that he did; yet I often see descriptions of creatures "evolving to survive". 

As I understand it, game theory is a reformulation of evolution that centers on the notion of survival intent.

Evolution is something that happens to things, and doesn't "care" whether those things survive or not.  The "selfish gene" concept is an attempt to frame this inevitable sense of "intent" within Darwinian mechanics; there is no more need to ascribe a survival intent to genes, as there is to the phenotypes they define.

However, evolution doesn't have to be the only player on the stage, and this is what I meant about "evolution vs. intelligent design is a non-issue".

I don't think there's much uncertainty that evolution is at work in the world.  That doesn't weigh for or against other (intelligent design) players in the world, and that's why I consider the question a non-issue.

Intelligent players can apply intent from outside the system (e.g. where an external entity defines entities within a Darwinian environment, or the environment itself, or its selection pressures) or from within the system (e.g. where entities apply intent to designing their own progeny).

Example systems

I consider the following to be Darwinian systems:

  • the biosphere
  • the infosphere
  • human culture, i.e. memetics

One could theorize that evolution is an inevitable consequence of complexity, when subjected to entropy.  Just as a moving car is not fast enough to exhibit significant relativistic effects (so that Newton's laws appear to explain everything), so trivial systems may be insufficiently complex to demonstrate Darwinian behavior.

This is why I'm interested in computers and the infosphere; because they are becoming complex enough to defy determinism. 

Normally, we seek to understand the "real world" by peering down from the top, with insufficient clarity to see the bottom. 

With the infosphere, we have an environment that we understand (and created) from the bottom up; what we cannot "see" is the top level that will arise as complexity evolves.

This creates an opportunity to model the one system within the other.  What was an inscrutable "mind / brain" question, becomes "the mind is the software, the brain is the hardware", perhaps over-extended to "the self is the runtime, the mind is the software, the brain is the firmware, the body is the hardware". 

We can also look at computer viruses as a model for biosphere viruses.  A major "aha!" moment for me was when I searched the Internet for information on the CAP virus, and found a lot of articles that almost made sense, but not quite - until I realized these described biological viruses, and were found because of the common bio-virus term "CAPsule".

Code

Common to my understanding of what constitutes a classic Darwinian system, is the notion of information that defines the entity. 

In the biosphere, this is usually DNA or RNA, a language of 4 unique items grouped into threes (codons) that map to the amino acids of the active proteins they define.

In the infosphere, this is binary code of various languages, based on bits that are typically grouped into eights as bytes.

In the meme space, languages are carried via symbol sets that are in turn split into unique characters, which are then clumped into words or sentences.  Some symbol sets carry less information per character (e.g. Western alphabets), others more (e.g. Chinese characters, ancient Egyptian hieroglyphics, modern icons and branding marks).
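
As a toy illustration of that grouping idea (the codon table below is deliberately truncated to a few entries, purely for the example):

```python
# DNA bases read three at a time (codons); bits read eight at a time (bytes).
CODON_TABLE = {"ATG": "Met", "TGG": "Trp", "GGC": "Gly", "TAA": "stop"}  # tiny subset

def codons(dna):
    """Split a DNA string into groups of three bases."""
    return [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]

def bytes_from_bits(bits):
    """Split a bit string into groups of eight and read each as a byte value."""
    return [int(bits[i:i + 8], 2) for i in range(0, len(bits) - len(bits) % 8, 8)]

print([CODON_TABLE.get(c, "?") for c in codons("ATGTGGGGCTAA")])  # ['Met', 'Trp', 'Gly', 'stop']
print(bytes_from_bits("0100100001101001"))                        # [72, 105], i.e. "Hi"
```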

When we create computer code, we are laboriously translating the memetic language of ideas into code that will spawn infosphere entities.  This is not unlike the way a set of chromosomes becomes a chicken, other than that we view the infosphere and meme space as separate Darwinian systems.

The alluring challenge is to translate infosphere code into biosphere code, i.e. to "print DNA", as it were.  One hopes the quality of intent will be sound, by the time this milestone is reached, as in effect, we would be positioned to become our own intelligent creators.

Intelligent design

We know that entities in the infosphere are created by intent from outside the system; as at 2007, we do not believe that new entities arise spontaneously within the system.

We don't know (but may have beliefs about) whether there is intent applied to the biosphere, or whether the biosphere was originally created or shaped by acts of intent.

Conspiracy theorists may point to hidden uber-intenders within the meme space, the creation of which is inherently guided by self-intent.

That which was

Just as folks mistakenly ascribe intent to the mechanics of evolution, so there is a fallacy that all that exists, is all that existed.

But evolution can tell you nothing about what entities once existed, as spawned by mutation or entropic shaping of code.

There is a very dangerous assumption that because you cannot see a surviving entity in the current set, such entities cannot arise.

Think of a bio-virus with a fast-death payload a la rabies, plus rapid spread a la the common cold.  The assumption of survival intent leads folks to say stupid things like "but that would kill the host, so the virus wouldn't want that".  Sure, there'd be no survivors in today's entity set, but on the other hand, we know we have some historical bulk extinctions to explain.

We're beginning to see the same complacency on the risks of nuclear war.  We think of humanity as a single chain of upwards development, and therefore are optimistic that "common sense will prevail".  Even nay-sayers that point to our unprecedented ability to destroy ourselves, miss the point in that word "unprecedented".  As Graham Hancock postulated in his Fingerprints of the Gods, we may have been this way before.

This blindness applies to malware within the infosphere as well, and the saying "there's none so blind as those who can see" applies.  If we want authoritative voices on malware, we generally turn to professionals who have been staring at all malware for years, such as the antivirus industry.  These folks may be so blinded by what they've seen of all that has been, that they fail to consider all that could be.

The Waking Hour