11 September 2008

Google Chrome - Born Dead?


Web browsers are serious risk surfaces, so there's always room for a better one - but so far, most new browsers are a lot dumber than the incumbents.

So it was with Apple's Safari when it was ported to Windows as a beta - it was found to be exploitable within two hours of release.  So it is with Google's Chrome, which should be no surprise, as it uses the exploited Safari code from before that code was fixed!

By-design safety

Google talk a good talk, with these security features widely quoted:

  • Privacy mode for trackless browsing
  • Each tab runs in its own context, can't crash other tabs
  • Tabs run in a "sandbox", can't attack the rest of the system
  • Updates list of bad sites from Google's servers, to spot phishing scams
  • Web pages can open without browser UI elements (uhh... why is this "secure"?)

My first reaction when I read this was, "wow, Google scooped IE8's feature set", given that IE8 builds IE7's phishing filter into a more comprehensive, updated system, runs tabs in separate processes so one tab can't crash the whole browser, and Vista has run IE7 (and now IE8) in a safer "protected mode" for well over a year.  I don't know whether Google's "sandbox" is stronger and safer than IE7-on-Vista's "protected mode", or whether either of these constitutes an effective "sandbox".

Then I thought: Hang on, this is a newly-released beta, whereas IE8 has been in beta for a while now and has already been more widely released as beta 2... so who's first to offer these features?

I have to wonder why Google thinks it's a good idea to spawn web content (basically, stuff foisted from the web) as generic stand-alone windows, when we already have so many problems with pop-ups forging system dialog boxes to push fake scanners etc.  Why is it considered a good idea to let sites hide the address bar, when phishing attacks so often use misleading URLs that HTML allows to be covered with arbitrary text, including completely different fake URLs?

Code safety

Google talks about a large sandboxed system to interpret JavaScript, which sounds a bit like the idea behind Java.  Well, we've seen how well that works, given the long list of security updates that Sun have to constantly release to keep up with code exploits - so we'd have to hope Google are really good at crafting safe, non-exploitable code.

So it doesn't bode well that the public beta they released is based on a known-exploitable code base that is already being attacked, at a time when patched versions of this code are already being retro-fitted to existing Safari installations.

Why would Google not build their beta on the fixed code base?  It's Open Source, and already available - why not use it?  Would it have killed them to delay the hitherto-secret web browser beta until they'd adopted the fixed code?  Or was leveraging the pre-arranged hype more important than not shipping known-exploited code to users?  And why does the fixed release still report the exploitable code base version?

Trust me, I'm a software vendor

How do you feel about vendors who silently push new code into your system and are slow to tell you what it does?  Here's what Google is quoted as saying about that:

"Users do not get a notification when they are updated. When there are security fixes, it's crucial that we update our users as quickly as possible in order to keep them safe. Thus, it's important for us to not require user intervention. There are some security fixes that we'll keep quiet because we don't want to disclose security vulnerabilities to attackers"

To me, that reads like a dangerous combination of Mickey-Mouse attempts at security via obscurity, plus supreme vendor arrogance. 

But wait, there's more...

Further things have come to light when searching for links for this post, such as installing in a "data" location (thus side-stepping Vista's protection for "Program Files") and a rather too-effective search that finds supposedly private things.

"Well, it's a beta", I can hear you say.  That's why it's safely tucked away deeply within Google's developer site, so that only the adventurous and knowledgeable will find it, right?  I mean, it's not as if it's being shoved at everyone via popular or vendor-set web pages so that it's gaining significant market share, is it?

10 September 2008

Compatibility vs. Safety


Once upon a time, new software was of interest because it had new features or other improvements over previous versions.  This attracted us to new versions, but we still wanted our old stuff to work - so the new versions would often retain old code to stay compatible with what we already had.

Today, we're not so much following the carrot of quality, but fleeing the stick of quality failure.  We are often told we must get a new version because the old version was so badly made, it could be exploited to do all sorts of unwanted things.  In this case, we want to break compatibility so that the old exploit techniques will no longer work!

Yet often the same vendors who drive us to "patch" or "upgrade" their products to avoid exploitation risks, still seem to think we are attracted by features, not driven by fear.

Sun's Java

I've highlighted the long-standing problems with Sun's Java before, and they are still squirming around their promise to mend their ways.  In short, they may still leave old exploitable versions of the Java JRE on your system, but it's no longer quite as easy for malware to select these as their preferred interpreter.  Still, you're probably safer if you uninstall these old JREs (as Sun's Java updater typically does not do) than trust Sun to deny code access to them.

Microsoft's Side By Side

Here's an interesting article on the Windows SxS (Side By Side) facility, which aims to appease software that was written for older versions of system .DLLs, and thus ease the pain of "DLL Hell".  This works by retaining old versions of these .DLLs, so that older software can request access to them via its manifest.
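To sketch how that manifest binding works: an application ships an XML manifest that names the exact assembly version it wants, and SxS resolves the request to that retained copy.  This is the standard manifest schema rather than anything quoted in the article; the common-controls assembly below is just the textbook example of the mechanism.

```xml
<!-- app.exe.manifest: asks SxS for a specific assembly version,
     rather than whatever happens to be in System32 -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32"
                        name="Microsoft.Windows.Common-Controls"
                        version="6.0.0.0"
                        processorArchitecture="*"
                        publicKeyToken="6595b64144ccf1df" />
    </dependentAssembly>
  </dependency>
</assembly>
```

The point is that version selection is driven by the *requesting* application - which is exactly why keeping old, exploitable versions around is only safe if the loader can also be told to refuse them.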

How is that different from Sun's accursed practice? 

Well, it generally isn't, as far as I can tell, until a particular exploit situation is recognized where this behaviour poses a risk.  The current crisis du jour involves exploits against GDIPlus.dll - yep, the same one that was fixed before - and the patch this time includes a facility to block access to old versions of the .DLL, leveraging a feature already designed into the SxS subsystem.

5 September 2008

The Most Dangerous File Type Is...


The most dangerous file type is... what?

Well, you pass if you said ".exe", and you get bonus marks for ".pif", which is just as dangerous thanks to poor type discipline, and more so because of poor UI safety that hides what it is.  But today's answer may be neither.

By the time a code file lands up on your system, there's a chance your antivirus will have been updated to know what it is, and may save the attacker's shot at goal.  But a link can point to fresh malware code that's updated on the server side in real time; that's far more likely to be "too new" for av to detect, and once it's running, it can kill or subvert your defences.

We need to apply this realization to the way we evaluate and manage risk, to up-rate the risk posed by whatever can deliver Internet links.  Think "safe" messages without scripts or attachments, and blog comment spam (including the link from the comment poster's name). 

Think also about how HTML allows arbitrary text to overlie a link, including text that looks like the link itself.  This link could obviously go to www.bad.com, but it's less obvious that www.microsoft.com could go there instead.  Then think how HTML is ubiquitously tossed around as a generic "rich text" interchange medium, from email message "text" to .CHM Help files.
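That mismatch between a link's visible text and its actual target is mechanical to demonstrate.  Here's a minimal sketch (my own illustration, not tooling from this post - the `LinkAuditor` and `looks_forged` names are invented) that parses anchors out of HTML and flags the classic forgery: link text that looks like a URL but points at a different host.

```python
# Sketch: flag HTML links whose visible text looks like a URL
# for a different host than the real href target.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (href, visible text) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []   # list of (href, text) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def looks_forged(href, text):
    """True when the link text itself looks like a URL for another host."""
    if "." not in text or " " in text:
        return False          # ordinary words over a link are normal
    text_host = urlparse(text if "//" in text else "http://" + text).netloc
    href_host = urlparse(href).netloc
    return bool(text_host) and text_host != href_host

auditor = LinkAuditor()
auditor.feed('<a href="http://www.bad.com/login">www.microsoft.com</a>')
for href, text in auditor.links:
    print(href, text, looks_forged(href, text))
```

The example anchor is exactly the case described above: the reader sees www.microsoft.com, but the link goes to www.bad.com.  A mail client or forum that rendered such a warning would blunt a lot of phishing at zero signature cost.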