7 September 2010

Driver Cure or Driver Curse?

If you'd just dropped into the PC world last week, you'd think all software was perishable and had to be continuously refreshed: you must always have the latest BIOS, drivers, and so on.


This attitude runs counter to an older wisdom, that the first question when something goes wrong, is: "What changed?" With this in mind, the last thing you want is vendor-driven changes to your code base; in fact, for a critical working system, you want no changes at all.


The logic behind all this is contradictory...



  • Software vendors make mistakes, requiring software repairs ("patches" or "updates")

  • This happens so often, you may not be able to keep up with the constant flow of updates

  • So it's best to let the software vendor push updates whenever they see fit


This boils down to: Trust software vendors to push changes into your code, because they fail that trust so often you can't keep up with the pace of quality repair required.


So, should you always patch, or never patch? Or sometimes patch? If "sometimes", then on what basis do you decide what needs patching?


Balancing risks


Some code is so critical, you may consider it too risky to change, e.g. BIOS and device firmware, device drivers, core OS code, and code that is running all the time and can crash the PC if it goes wrong.


Some code is so exposed to arbitrary unsolicited material, you may consider it too risky to leave unpatched, for fear that malware may exploit defects in the code to attack your PC.


Code should never fall into both of the above categories; if it does, you're probably looking at really bad software design. For example, integrating a web browser so deeply into the system that it's indivisible from the system's own UI would be a bad design decision. Or consider a service so critical to the system's internal functioning that the OS shuts down the whole PC every time the service fails, yet which is waved at the Internet on the basis that it's "networking"; that would be a really bad decision (Lovesan vs. RPC, remember?).


Trust me, I'm a software vendor


The two reasons not to trust a software vendor are incompetence and perfidy. A vendor who claims you "must" leave your system open to a constant stream of fixes has declared themselves incapable of writing code that can be trusted to work properly.


And frankly, when even "legit" vendors hide deliberately user-hostile code within their products, set to automatically deny you service if its logic considers your license state invalid (product activation), or distribute rootkits within "audio CDs" (Sony), I'd not trust any vendor's ethics.


Finally, even if you trust the vendor's ethics, you have to look at the mechanics of code distribution. Fakeware abounds, so when a third party claims to serve you fresh code from the vendors you trust, you have to ask yourself how trustworthy is that third party?


You also have to ask why you'd trust a particular software package. Open source advocates would say it's because you can read the source code yourself, or at least feel safer in that others have done this on your behalf. Closed source advocates would say it is unrealistic to read source code yourself, and instead would point to pre-deployment testing that would pick up unwanted behavior before the code was used in the real world.


Patches and updates change both of these equations, because now the code you read and/or tested, is no longer the actual code that is running. Any patch may add unwanted behaviors that favor whoever pushed the patch into your system. For the same reason, you should avoid software that stores "your" settings on the server side rather than on your PC (e.g. Real Player, many Instant Messaging apps) and "Privacy Policies" and End User License "Agreements" that state "these terms can be changed whenever we see fit", as so many do.


The race to patch


There's a race between freshly-released malware and the antivirus scanner that protects your system. When new malware is found, the antivirus vendor analyses the code to work out how to detect it, then how to safely remove it; that logic is then packaged as an update that your PC's scanner pulls to refresh your protection.
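To make the AV vendor's half of that race concrete, here is a minimal sketch of what signature-based detection boils down to. The signature names and byte patterns below are invented for illustration; real scanners use far more sophisticated matching, unpacking and heuristics, but the core idea is recognizing known byte patterns:

```python
# Toy sketch of signature-based malware detection.
# Both "signatures" here are made up for the example.
SIGNATURES = {
    "Fake.Downloader": b"\xde\xad\xbe\xef\x00stub",
    "Fake.Worm":       b"SPREAD_ME_NOW",
}

def scan(data: bytes) -> list:
    """Return the names of any known signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

print(scan(b"hello world"))                                   # []
print(scan(b"junk" + b"\xde\xad\xbe\xef\x00stub" + b"junk"))  # ['Fake.Downloader']
```

Each update the scanner pulls is, in effect, a fresh batch of entries for that signature table; protection is only as current as the last pull.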


Compare this to what happens when a new vulnerability is patched. The malware coders can compare pre- and post-patch code to isolate the fix, then work out what the unfixed code did wrong, and thus how to attack that code. The exploit code is then packaged into malware prepared earlier, such as a downloader stub, and pushed into the wild.
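The attacker's half of the race can be sketched the same way. The byte-level diff below is a deliberately simplified toy (real patch diffing works at the function and basic-block level, using dedicated binary-diffing tools), but it shows how comparing two builds points straight at the repaired, and therefore vulnerable, region of the unpatched code:

```python
# Toy sketch of patch diffing: compare pre- and post-patch binaries
# byte-by-byte to locate the changed (i.e. fixed) regions.
def changed_regions(old: bytes, new: bytes) -> list:
    """Return (start, end) offsets where the two builds differ."""
    regions = []
    start = None
    for i in range(min(len(old), len(new))):
        if old[i] != new[i]:
            if start is None:
                start = i          # a changed region begins here
        elif start is not None:
            regions.append((start, i))
            start = None
    if start is not None:
        regions.append((start, min(len(old), len(new))))
    return regions

# Invented stand-ins for a pre-patch and post-patch binary:
pre_patch  = b"\x01\x02OLD\x03\x04"
post_patch = b"\x01\x02NEW\x03\x04"
print(changed_regions(pre_patch, post_patch))  # [(2, 5)]
```

Those offsets tell the attacker where in the still-unpatched build to look for the defect, which is why unpatched edge-facing systems become more exposed, not less, once a patch ships.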


Notice the similarities between these processes, i.e. recognizing and removing malware compared to extracting and exploiting code defects from studying patches?


If you rely on resident antivirus to protect you, then you are betting on the av vendor to beat the malware in the race. By the same token, you may expect malware coders to be fast enough to exploit your edge-facing code before the patch arrives to fix the defect. Hence the manic rush to patch, for fear of prompt exploit.


It's actually a bit worse than this, for two reasons. Firstly, sometimes it's the malware folks who find and exploit defects before the code vendor learns about them and fixes them. Secondly, software vendors have to ensure their patches don't break any systems, whereas a malware coder just wants it to work enough of the time to spread, and doesn't care if it breaks other systems in the process. Less rigorous testing means "faster to market", right?


Self-spreading malware can also spread faster from more systems, and thus beat the patching or updating processes to the punch. Malware can be delivered in real-time via pure network worms, or links to servers that are themselves updated in real time. Often the malware that enters the system is just a downloader stub; it only has to last long enough to pull down the "real" malware, which can replace itself in real time as well.


Edge-facing software


With all this in mind, you can see why one would want to patch edge-facing software as soon as possible. Examples include web browsers, Java, Acrobat Reader, Flash and media players, and anything that is constantly exposed to the outside world, such as software that waits for instant messages or "phone" calls.


The best solution is to remove that edge-facing software, and thus the need to patch it. Do this whenever you don't need that software, when the software or its vendor are too flaky to trust, or when the update process itself is something you want to avoid.


For example, you may catch a vendor trying to shove new edge-facing software at you as "updates", even when that software is not present and therefore doesn't require patching. That's how Apple used to push Safari to PCs running iTunes or QuickTime, until they were pressured to stop.


For another example, a vendor may decide you don't need to be asked before updates are pushed, or even told when this has happened. And when you look at that vendor's updater, you find it running as multiple scheduled tasks; then when you look at the details, you find that a task which appears to run once a day is actually run every hour throughout the day. That's the equation with Google, and why I would avoid any edge-facing Google software.


If you can't avoid edge-facing software, then you can protect yourself in two ways; by updating it as soon as possible, and/or by choosing such obscure, small-market-share products that they aren't likely to be attacked. The latter is like living in an unlocked shack in the countryside; that works not because shacks are "so secure", but because there are so few attackers around.


Driver Cure or Driver Curse?


So now we come to Driver Cure, which is a third party product that pulls in the latest versions of your device drivers. Would you want this? I'd say no, for three reasons.


Firstly, device drivers are code that runs so "deep" in the system, that any mistakes are very likely to crash the entire OS, leaving the file system corrupted, data files unsaved, etc. Device drivers usually run all the time, so bad code may prevent the system from being able to boot or run at all, even in Safe Mode. So I definitely don't want unexpected changes to this code, any of which may cause the system to stop working.


Secondly, device drivers are not edge-facing, so the risk of exploit through exposure should not be high. That means less reason to patch in haste.


Thirdly, if malware were to be integrated into the system as deeply as a device driver, it would have considerable power and be very hard to remove. So we'd want to know a lot more about third party software that inserts "drivers" into the system.


The "Driver Cure" folks also push XoftSpy, which was one of several hundred fake anti-spyware scanners, until they supposedly "went legit". As such, sites and blogs may no longer call XoftSpy "malware" for fear of being sued; we may instead consider it a legitimate antispyware product that isn't very good at what it does, and costs money where better products are free.


So, in spite of "reviews" like these, I would avoid Driver Cure and anything else from that particular software vendor or distributor.

4 comments:

Dan W said...

I enjoyed reading your post about drivers and agree with you that it makes sense not to use front end software that you do not need. I was wondering if an example would be to use a PDF reader like Foxit Reader instead of Adobe Reader, so that you would not be exposed to as many threats, since it is not as popular.

Chris Quirke said...

I'm not sure about that; it seems as if the same exploit opportunities may apply to both.

What may help is post-exploit traction failure due to the difference in the exploited code, e.g. what caused XP "Gold"'s exploitable RPC service to crash rather than run the malware code when the Lovesan attack packet was crafted for Win2000 instead of XP.

Gee, it's been a while since "NT-based, so most secure Windows evah!" XP was reduced to steaming rubble in the first year of consumer use, thanks to RPC and then LSASS exploiters.

That was "Network Client, meet the Internet"... light touch paper and stand well back :-)

I see there's been an uptick in Java attacks, and I'm wondering whether the attraction is the opportunity to cross platforms not only to Mac and Linux, but also the growing mass of sub-PC connected devices (smartphones etc.)

Dan W said...

There is a new problem on which I would appreciate your research and technical expertise: the problem of pushing software updates that are flawed and mess up the underlying product because of a lack of quality assurance testing. I would not be surprised if that testing has been outsourced from the States to India or China or elsewhere. Possibly it could be a lack of technical expertise, or it could just be sloppy work in order to rush out a patch to fix a security vulnerability. Recently, I updated my Samsung Verizon cell phone, only to have the official update break my phone. Verizon Wireless is sending me a replacement after going through the standard troubleshooting steps of removing and replacing the battery; no, it is not a Windows 7 phone, and I have not dropped the phone once that I can remember. We can talk more by email about this if you want, but I would prefer to limit public discussion of the surprising number of problems that I have come across, to prevent FUD. Thank you.

Chris Quirke said...

Well, the logic of software updates is inherently broken. Because vendor incompitence requires patching so often we can't keep up manually, we have to trust them to push fixes automatically.

Now there are two aspects to trust; compitence, and motivation. So even if you trust your vendor's motivation - which is moot, given activation payloads etc. - it boils down to "trust the vendor because you can't trust the vendor".

Now "compitence" (which I may not have even spelt correctly) isn't a matter of bad humans - the reality is that software complexity has grown to the point that even the very best human error rates are going to spawn a lot of errors.

All non-trivial software has bugs; therefore if you want critical code to be bug-free, it must be kept trivial.

I don't think where the coders are living is relevant. Folks may object to outsourcing in terms of "lost jobs", which is really a complaint that the jobs are in the wrong place for them (but the right place for someone else). They may also have trouble with language and communication. But I think the assumption that citizens of X are likely to be better coders than citizens of Y is pretty silly.

There are two reasons why one might expect patches to screw up - and I'm surprised this doesn't happen more often.

Firstly, there is the time scale: patches often have to be developed in haste, for fear of ITW (in-the-wild) attack. Folks who publicly disclose flaws when the vendor takes "too long" to fix them can carry some blame there.

Secondly, whereas the original code is put in place first, as a standard clean generic state, patches have to be retrofitted into a state mutated by other software installs, AutoChk cleanups, malware attacks and fixes, "registry cleaning" and other aspects of that particular installation's history.

Patching forces a more intimate relationship with the vendor that is often exploited, so one wonders if it really is an inevitability, or a deliberate scam. I think it's the former, given how often open source projects also get patched.