19 August 2006

The Trust Stack

Some readers will be familiar with the OSI network stack model, which helps clarify issues when troubleshooting network or connectivity problems. I propose a similar 7-layer stack for evaluating trustworthiness within Information Technology contexts.

Each layer rests on the layer below it, and cannot be effective if a lower layer fails. Conversely, if the top layer is rotten, then all the layers below it are no longer relevant.
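To make that dependency concrete, here's a rough sketch in C; it's purely my own illustration, not a formal model. It treats the six layers described below as a bottom-up checklist, and trusts the system only if every layer holds; Granularity is left out, since it rides alongside the stack rather than sitting within it.

/*
 * Rough sketch of the dependency rule; purely illustrative.
 * The six layer names are the ones defined in the sections below.
 */
#include <stdio.h>

enum { NUM_LAYERS = 6 };

/* Listed bottom-up: index 0 is the lowest layer, index 5 the top. */
static const char *layers[NUM_LAYERS] = {
    "Sanity", "Safety", "Security", "Policy", "Intention", "Goals"
};

/* Trust holds only if every layer holds.  A failed lower layer means
 * the layers above it cannot be effective; a rotten top layer means
 * the layers below it no longer matter. */
static int stack_is_trustworthy(const int layer_ok[NUM_LAYERS])
{
    for (int i = 0; i < NUM_LAYERS; i++) {
        if (!layer_ok[i]) {
            printf("Trust breaks at the %s layer; the rest of the "
                   "stack is moot.\n", layers[i]);
            return 0;
        }
    }
    return 1;
}

int main(void)
{
    /* Example: sane, safe, secure code under a sound policy, but the
     * vendor's goals (the top layer) are hostile. */
    int layer_ok[NUM_LAYERS] = { 1, 1, 1, 1, 1, 0 };

    if (stack_is_trustworthy(layer_ok))
        printf("All layers hold.\n");
    return 0;
}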

Goals

Are the goals of your vendor compatible with your own, or are they contrary to them? The adage "he who pays the piper calls the tune" applies here; if it is not you who provide the vendor's income stream, then it's not likely to be your needs that are uppermost in the vendor's mind. Even if the vendor does derive income from you, the vendor can afford to ignore your needs if you are perceived to have no choice other than to buy their product.

Intention

What is the intention of the specific thing you are evaluating? If it is intended to do something that is contrary to your interests, then at best it can be trusted only to work against your interests in the way intended.

Policy

The vendor may commit itself to policies (e.g. a privacy policy) or may be compelled to act within policies laid down by law. For example, a privacy policy may define what the vendor would do with your data, were they to be the sole agent with access to it.

Security

This layer is about limiting who has what abilities within the system. For example, a privacy policy is meaningless if entities other than the vendor also have access to data held by their system.

Safety

This layer is about the level of risk (or range of possible consequences) within the system, and whether this is constrained to users' expectations. It's no use securing access so that only trusted employees can operate the system if the system takes greater risks than those trusted employees expect when they operate it.

Sanity

This layer is about whether the system acts as it was designed to, or whether defects create opportunities for it to act completely differently. For example, a defective JPG handler can escalate the risk of handling "graphic data" to that of running raw code, something that bears no resemblance to what the handler was created to do.
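To see how far a defect can take a system from its design, here is a deliberately simplified sketch in C. The "comment segment" and its fields are invented for illustration; real image formats, and real bugs such as the 2004 GDI+ JPEG flaw, are more involved, but the shape of the defect is the same: the handler trusts a length field read from the file itself.

/*
 * Simplified sketch of a defective image handler.  The "comment
 * segment" and its fields are invented for illustration only.
 */
#include <stdio.h>
#include <string.h>

struct segment {
    unsigned int declared_length;   /* read straight from the file, so attacker-controlled */
    unsigned char data[65536];      /* the raw bytes that follow the segment header */
};

/* BUG: believes the file's own length field instead of checking it
 * against the buffer it owns.  A malformed image can declare a length
 * far greater than 256, overrun the stack buffer and overwrite the
 * return address with bytes of the attacker's choosing; "graphic
 * data" has just become raw code. */
void handle_comment(const struct segment *seg)
{
    unsigned char comment[256];

    memcpy(comment, seg->data, seg->declared_length);
    printf("comment: %.32s\n", comment);
}

/* The sane version constrains the copy to the buffer it owns, so the
 * worst a malformed image can do is produce a truncated comment. */
void handle_comment_fixed(const struct segment *seg)
{
    unsigned char comment[256];
    size_t len = seg->declared_length;

    if (len > sizeof comment)
        len = sizeof comment;
    memcpy(comment, seg->data, len);
    printf("comment: %.32s\n", comment);
}

int main(void)
{
    static struct segment seg;      /* static, to keep the 64 KB payload off the stack */

    seg.declared_length = 70000;    /* claims far more data than any sane comment */
    memset(seg.data, 'A', sizeof seg.data);

    /* handle_comment(&seg) would smash the stack here; the fixed
     * handler just truncates the copy. */
    handle_comment_fixed(&seg);
    return 0;
}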

Granularity

The above six layers are top-down, though some contexts may make more sense if Policy is considered to run above Intention. The seventh layer is different; it rides alongside everything else, and each instance encompasses all of the other six layers.

That's because the vendor can open the system out to additional players at every level. There may be co-owners (or successive owners, e.g. after a buy-out) with different goals; different coding teams may have different intentions, different departments or legislation may stipulate different policies, and the actual code may re-use modules developed by different vendors.

In addition to problems within each of these players, problems can arise at the interfaces between them. In an earlier blog post, I mentioned the rule that "users know less than you think", i.e. no matter how little you expect users to understand about your product, they will understand (or care) even less. This applies not only between the end-user and the product, but also between each coding level and the objects re-used by those coders.

Trusted Computing

Now apply these tests to the concept of "Trusted Computing". The vendor who coined the phrase derives income from those of us who pay for Windows, but is also the monopoly provider of the OS required to run applications written for it. The stated goals and intentions of the OS are to leverage the interests of certain business partners over ours via DRM; in fact, "Trusted Computing" initially meant that media corporations could "trust" users' systems to be constrained from violating corporate interests.

So already we have problems at the top of the trust stack, especially when we look at the vendor's track record. We also have a top-level granularity problem: the OS vendor empowers a class of "media providers" to leverage their rights over ours.

How is this class of "media providers" bounded? Free speech requires anyone to be accepted as a provider of content, which means we're expected to trust anyone to have rights that trump our own on our own systems. Alternatively, these powers could be constrained to a small cartel of well-resourced corporations, trading freedom of expression for the putative trustworthiness of computing. Given that one of the largest media corporations has already been caught dropping rootkits onto PCs from "audio" CDs, I don't have much faith there.

If you look at the problem from the bottom up, it doesn't get any better: the raw materials out of which "trusted computing" is to be built are already so failure-prone that they need constant repairs, which are batched into monthly patching purely for convenience.

Bottom line

We already trust computing, even though the evidence shows it is unworthy of that trust.

When we allow software to download and apply patches without explicitly reviewing or testing them, we break the best practice of allowing no unauthorised changes to be made to the system. Why would we give blanket authorisation to whatever patches the vendor chooses to push?

It isn't because we trust the vendor's goals and intentions, given that the vendor has already been caught pushing user-hostile code (Windows Genuine Advantage Notifications) as a "critical update".

And it isn't because we trust the quality of the code, given the patches are to fix defects in the same vendor's existing production code.

It's because the code fails so often that it has become impossible to keep up with all the repairs required to fix it. In other words, we trust the system because it is so untrustworthy that we can no longer evaluate its trustworthiness, and have to trust the untrustworthy vendor instead.
