Lifted from ...
http://www.spywarepoint.com/forums/t26963-p9-microsoft-zero-day-security-holes-being-exploited.html
On Sun, 01 Oct 2006 20:45:23 -0600, "Dan W." <spamyou@user.nec> wrote:
>karl levinson, mvp wrote:
>> "Dan W." <spamyou@user.nec> wrote in message
>> Fewer vulnerabilities are being reported for Windows 98 because Windows 98
>> is old and less commonly used, and vulns found for it get you less fame
More to the point, vulnerable surfaces are less often exposed to clickless attack - that's really what makes Win9x safer.
You can use an email app that displays only message text, with no inline content such as graphics, so that JPG and WMF exploit surfaces are less exposed. Couple that with an OS that doesn't wave RPC, LSASS etc. at the 'net, and doesn't grope material underfoot (indexing) or when folders are viewed ("View As Web Page" and other metadata handlers), and you're getting somewhere.
For those who cannot subscribe to the "keep getting those patches, folks!" model, the above makes a lot of sense.
>> Didn't XP expand on and improve the system restore feature to a level not
>> currently in 98 or ME?
There's no SR in Win98, though that was probably when the first 3rd-party SR-like utilities started to appear. I remember two of these that seemed to inform WinME-era SR design.
No-one seemed that interested in adding these utilities, yet when the same functionality was built into WinME, it was touted as a reason to switch to 'ME, and when this functionality fell over, users were often advised to "just" re-install to regain it. I doubt we'd have advised users to "just" re-install the OS so that some 3rd-party add-on could work again.
XP's SR certainly is massively improved over WinME's - and there's so little in common between them that it's rare one can offer SR management or troubleshooting advice that applies to both OSs equally.
I use SR in XP, and kill it at birth in WinME - that's the size of the difference, though a one-lunger (one big doomed C:) installation may find the downsides of WinME's SR to be less of an issue.
>>> about Microsoft and its early days to present time. The early Microsoft
>>> software engineers nicknamed it the Not There code since it did not have
>>> the type of maintenance operating system that Chris Quirke, MVP fondly
>>> talks about in regards to 98 Second Edition.
>> If the MOS being discussed for Win 98 is the system boot disk floppy, that
>> was a very basic MOS and it still works on Windows XP just as well as it
>> ever did on Windows 98. [Sure, you either have to format your disk as FAT,
>> or use a third party DOS NTFS driver.]
That was true until we crossed the 137G limit (beyond which DOS mode is no longer safe). It's a major reason why I still avoid NTFS... Bart works so well as a mOS for malware management that I seldom use DOS mode for that on XP systems, but data recovery and manual file system maintenance remain seriously limited for NTFS.
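(Aside: the 137G figure is the 28-bit LBA ceiling of the older ATA command set - 2^28 sectors of 512 bytes each. A trivial Python snippet, purely to show where the number comes from:)

```python
# 28-bit LBA addressing with 512-byte sectors: the classic ATA size ceiling
sectors = 2 ** 28               # largest sector count addressable in 28 bits
total_bytes = sectors * 512     # 137,438,953,472 bytes
print(total_bytes / 10 ** 9)    # ~137.4 "decimal" GB (about 128 GiB)
```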
>> I think Chris really wants not that kind of MOS but a much bigger and
>> better one that has never existed.
Well, ever onward and all that ;-)
Bart is a bigger and better mOS, though it depends on how you build it (and yes, the effort of building it is larger than for DOS mode solutions). You can build a mOS from Bart that breaks various mOS safety rules (e.g. falls through to boot HD on unattended reset, automatically writes to HD, uses Explorer as shell and thus opens the risk of malware exploiting its surfaces, etc.).
I'm hoping MS WinPE 2.0, or the subset of it that is built into the Vista installation DVD, will match what Bart offers. Initial testing suggests it has the potential, though some mOS safety rules have been broken (e.g. fall-through to HD boot, requires a visible Vista installation to work, etc.).
The RAM testing component is nice but breaks so many mOS safety rules so badly that I consider it unfit for use:
- spontaneous reset will reboot the HD
- HD is examined for Vista installation before you reach the test
- a large amount of UI code is required to reach the test
- test drops the RAM tester on HD for next boot (!!)
- test logs results to the HD (!!)
- you have to boot full Vista off HD to see the results (!!!)
What this screams to me is that MS still doesn't "get" what a mOS is, or how it should be designed. I can understand this, as MS WinPE was originally intended purely for setting up brand-new, presumed-good hardware with a fresh (destructive) OS installation.
By default, the RAM test does only one or a few passes; it takes under an hour or so - and thus is only going to detect pretty grossly-bad RAM. Grossly bad RAM is unlikely to run an entire GUI reliably, and can bit-flip any address to the wrong one, or any "read HD" call into a "write HD" call. The more code you run, the higher the risk of data corruption, and NO writes to the HD should ever be done while the RAM is suspected to be bad (which is, after all, why we are testing it).
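To make the bit-flip point concrete, here's a minimal Python sketch; the sector number and the "read"/"write" opcodes are purely illustrative values, not a claim about real ATA command codes:

```python
# One flipped bit is enough to redirect a disk address or mutate a command
# code - and grossly bad RAM can do this to any value it happens to hold.
def flip_bit(value: int, bit: int) -> int:
    return value ^ (1 << bit)

sector = 0x0001_2345                # the sector we *meant* to touch
print(hex(flip_bit(sector, 24)))    # 0x1012345 - a different region of the disk

read_cmd = 0x25                     # illustrative "read" opcode (hypothetical)
print(hex(flip_bit(read_cmd, 4)))   # 0x35 - one bit from an illustrative "write" opcode
```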
A mOS boot should never automatically chain to HD boot after a time-out, because the reason you'd be using a mOS in the first place is that you daren't boot the HD. So when the mOS disk boots, the only safe thing to do is to quickly reach a menu via a minimum of code, and stop there, with no time-out fall-through.
It's tempting to fall through to the RAM test as the only safe option, but that can undermine unattended RAM testing - if the system spontaneously resets during such testing, you need to know that, and it isn't obvious if the reboot simply restarts the RAM test.
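As a sketch of what "stop at a menu, no fall-through" means in practice (Python purely for illustration, and the menu entries are hypothetical):

```python
# Minimal mOS boot-menu sketch: block forever on operator input, never time
# out into booting the HD or into an unattended test, so a spontaneous reset
# shows up as an unexpected return to this menu instead of being masked.
OPTIONS = {
    "1": "Test RAM (no writes to HD; results kept in RAM or on removable media)",
    "2": "File system tools (read-only until RAM and HD are cleared)",
    "3": "Reboot (explicit, operator-confirmed)",
}

def main() -> None:
    while True:
        for key, label in OPTIONS.items():
            print(f"  {key}. {label}")
        choice = input("Select> ").strip()   # blocks indefinitely; no default, no timeout
        if choice in OPTIONS:
            print(f"(would launch: {OPTIONS[choice]})")
        # deliberately no "else: boot the HD" branch

if __name__ == "__main__":
    main()
```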
Until the RAM, the physical HD and the logical file system are known to be safe, and it's known that no deleted material needs to be recovered, it is not safe to write to any HD. That means no page file, no swap, and no "drop and reboot" methods of restarting particular tests.
Until the HD's contents are known to be malware-free, it is unsafe to run any code off the HD. This goes beyond not booting the HD, or looking for drivers on the HD; it also means not automatically groping material there (e.g. when listing files in a folder) as doing so opens up internal surfaces of the mOS to exploitation risks.
Karl's right, though... I'm already thinking beyond regaining what we lost when hardware (> 137G, USB, etc.) and NTFS broke the ability to use DOS mode as a mOS, to what a purpose-built mOS could offer.
For example, it could contain a generic file and redirected-registry scanning engine into which av vendors' scanning modules could be plugged. It could offer a single UI to manage these (e.g. "scan all files", "don't automatically clean", etc.) and could collate the results into a single log. It could improve efficiency by applying each engine in turn to material that is read once, rather than the norm of having each av scanner pull up the material to scan.
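Something along these lines, sketched in Python with a hypothetical engine interface - the point is the plug-in shape, the read-once pass over each file, and the single collated log, not any particular API:

```python
# Sketch of a multi-engine mOS scanner: each vendor supplies only a
# detect/clean engine; the mOS reads each file once, hands the same bytes to
# every engine in turn, and collates all verdicts into one log.
# The ScanEngine interface and its methods are hypothetical placeholders.
import os
from typing import Iterable, Protocol

class ScanEngine(Protocol):
    name: str
    def scan(self, path: str, data: bytes) -> list[str]:   # returns detections
        ...

def scan_tree(root: str, engines: Iterable[ScanEngine], log_path: str) -> None:
    # log_path should live on the mOS media or a RAM disk, never the suspect HD
    with open(log_path, "w") as log:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        data = f.read()          # read the material once...
                except OSError as e:
                    log.write(f"SKIP\t{path}\t{e}\n")
                    continue
                for engine in engines:           # ...then apply every engine to the same bytes
                    for verdict in engine.scan(path, data):
                        log.write(f"{engine.name}\t{path}\t{verdict}\n")
```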
MS could be accused of foreclosing opportunities to av vendors (blocking kernel access, competing One Care and Defender products), but this sort of mOS design could open up new opportunities.
Normally, the av market is "dead man's shoes"; a system can have only one resident scanner, so the race is on to be that scanner (e.g. OEM bundling deals that reduce per-license revenue). Once users have an av, it becomes very difficult to get them to switch - they can't try out an alternate av without uninstalling what they have, and no-one wants to do that. It's only when feeware av "dies" at the end of a subscription period that the user will consider a switch.
But a multi-av mOS allows av vendors to have their engines compared, at a fairly low development cost. They don't have to create any UI at all, because the mOS does that; all they have to do is provide a pure detection and cleaning engine, which is their core competency anyway.
Chances are, some av vendors would prefer to avoid that challenge :-)
>> XP also comes with a number of restore features such as Recovery
>> Console and the Install CD Repair features.
They are good few-trick ponies, but they do not constitute a mOS. They can't run arbitrary apps, so they aren't an OS, and if they aren't an OS, then by definition they aren't a mOS either.
As it is, RC is crippled as a "recovery" environment, because it can't access anything other than C: and can't write to anywhere else. Even before you realise you'd have to copy files off one at a time (no wildcards, no subtree copy), this kills any data recovery prospects.
At best, RC and OS installation options can be considered "vendor support obligation" tools, i.e. they assist MS in getting MS's products working again. Your data is completely irrelevant.
It gets worse; MS accepts crippled OEM OS licensing as being "Genuine" (i.e. MS got paid) even if they provide NONE of that functionality.
The driver's not even in the car, let alone asleep at the wheel :-(
>> I never use those or find them very useful for security, but they're
>> way more functional and closer to an MOS than the Win98 recovery
>> floppy or anything Win98 ever had. 98 never had a registry
>> editor or a way to modify services like the XP Recovery Console.
They do different things.
RC and installation options can regain bootability and OS functionality, and if you have enabled Set commands before the crisis you are trying to manage, you can copy off files one at a time. They are limited to that, as no additional programs can be run.
In contrast, a Win98EBD is an OS, and can run other programs from diskette, RAM disk or CDR. Such programs include Regedit (non-interactive, i.e. import/export .REG only), Scandisk (interactive file system repair, which NTFS still lacks), Odi's LFN tools (copy off files in bulk, preserving LFNs), Disk Edit (manually repair or re-create file system structures), and a number of av scanners.
So while XP's tools are bound to getting XP running again, Win98EBD functionality encompasses data recovery, malware cleanup, and hardware diagnostics. It's a no-brainer as to which I'd want (both!)
>>> that at the bare bones level the source code of 9x is more secure
>> It depends on what you consider security.
That's the point I keep trying to make - what Dan refers to is what I'd call "safety", whereas what Karl's referring to is what I'd call "security". Security rests on safety, because the benefit of restricting access to the right users is undermined if what happens is not limited to what these users intended to happen.
>> Win98 was always crashing and unstable,
Er... no, not really. That hasn't been my mileage with any Win9x, compared to Win3.yuk - and as usual, YMMV based on what your hardware standards are and how you set up the system. I do find XP more stable, as I'd expect, given NT's tighter protection of hardware access.
>> because there was no protection of memory space from bad apps or
>> bad attackers.
Mmmh... AFAIK, that sort of protection has been there since Win3.1 at least (specifically, the "386 Enhanced" mode of Win3.x). Even DOS used different memory segments for code and data, though it didn't use the 386's protection features to police this separation.
IOW, the promise that "an app can crash, and all that happens is that app is terminated, the rest of the OS keeps running!" has been made for every version of Windows since Win3.x - it's just that the reality always falls short of the promise. It still does, though it gets a little closer every time.
If anything, there seems to be a back-track on the concept of data vs. code separation, and this may be a consequence of the Object-Oriented model. Before, you'd load some monolithic program into its code segment, which would then load data into a separate data segment. Now you have multiple objects, each of which can contain their own variables (properties) and code (methods).
We're running after the horse by band-aiding CPU-based No-Execute trapping, so that when (not if) our current software design allows "data" to spew over into code space, we can catch it.
>> Microsoft's security problems have largely been because of backwards
>> compatibility with Windows 9x, DOS and Windows NT 4.0. They feel, and I
>> agree, that Microsoft security would be a lot better if they could abandon
>> that backwards compatibility with very old niche software, as they have been
>> doing gradually.
The real millstone was Win3.yuk (think heaps, co-operative multitasking). Ironically, DOS apps multitask better than Win16 ones, as each DOS app lives in its own VM and is pre-emptively multi-tasked.
64-bit is the opportunity to make new rules, as Vista is doing (e.g. no intrusions into kernel allowed). I'm hoping that this will be as beneficial as hardware virtualization was for NT.
Win9x apps don't cast as much of a shadow, as after all, Win9x's native application code was to be the same as NT's. What is a challenge is getting vendors to conform to reduced user rights, as up until XP, they could simply ignore this.
There's also the burden of legacy integration points, from Autoexec.bat through Win.ini through the various fads and fashions of Win9x and NT and beyond. There's something seriously wrong if MS is unable to enumerate every single integration point, and provide a super-MSConfig to manage them all from a single UI.
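A super-MSConfig isn't rocket science, either. Even a crude sketch (Python, covering only the two classic Run keys - nothing like the full set of integration points, and purely illustrative) shows the shape of it:

```python
# Crude sketch of enumerating a couple of well-known Windows integration
# points (the classic Run keys). A real "super-MSConfig" would have to
# enumerate every launch point, from Autoexec.bat and Win.ini onward.
import winreg   # Windows-only standard library module

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_run_entries():
    for hive, subkey in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, subkey) as key:
                i = 0
                while True:
                    try:
                        name, value, _type = winreg.EnumValue(key, i)
                    except OSError:
                        break               # no more values under this key
                    yield subkey, name, value
                    i += 1
        except OSError:
            continue                        # key absent on this system

if __name__ == "__main__":
    for subkey, name, value in list_run_entries():
        print(f"{subkey}: {name} -> {value}")
```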
>Classic Edition could be completely compatible with the older software
>such as Windows 3.1 programs and DOS programs. Heck, Microsoft
>could do this in a heartbeat without too much trouble.
Think about that. Who sits in exactly the same job for 12 years?
All the coders who actually made Win95 aren't front-line coders at MS anymore. They've either left, or they've climbed the ladder into other kinds of job, such as division manager, software architect etc. To the folks who are currently front-line coders, making Vista etc., Win9x is as alien as (say) Linux or OS/2.
To build a new Win9x, MS would have to re-train a number of new coders, which would take ages, and then they'd have to keep this skills pool alive as long as the new Win9x were in use. I don't see them wanting to do that, especially as they had such a battle to sunset Win9x and move everyone over to NT (XP) in the first place.
Also, think about what you want from Win9x - you may find that what you really want is a set of attributes that are not inherently unique to Win9x at all, and which may be present in (say) embedded XP.
If you really do need the ability to run DOS and Win3.yuk apps, then you'd be better served by an emulator for these OSs.
This not only protects the rest of the system from the oddball activities of these platforms, but can also virtualize incompatible hardware and mimic the expected slower clock speeds more smoothly than direct execution could. This is important, as unexpected speed, and the disparity between instruction timings, is as much a reason for old software to fail on new systems as changes within Windows itself.
>I will do what it takes to see this come to reality.
Stick around on this, even if there's no further Win9x as such. As we can see from MS's first mOS since Win98 and WinME EBDs, there's more to doing this than the ability to write working code - there has to be an understanding of what the code should do in the "real world".