A New Threat to Generativity

The symposium is over, but when I saw an important news item on a major threat to generativity, Danielle graciously urged me to post one last message to this blog.

A big player — one of the very biggest, Intel — has embarked on a new strategy, including a major corporate acquisition, that poses major threats to generativity.  Specifically, according to a news report on Ars Technica, Intel is planning to add hardware support for “known good only” execution.  That is, instead of today’s model of anti-virus software, which relies on a database of known-bad patterns, Intel wants to move to a hardware model where only software from known-good sources will be trusted.  For a number of reasons, including the fact that it won’t work very well, this could be a very dangerous development.  More below the fold.
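
To make the contrast concrete, here is a toy sketch in Python of the two models.  The hash set, the signer names, and the helper functions are placeholders of my own, not anything Intel has announced; the point is only that one approach matches against known-bad patterns while the other refuses everything that is not affirmatively known-good.

```python
# A toy contrast between today's "known bad" anti-virus model and the
# "known good only" model described in the article.  Everything here is a
# placeholder for illustration.
import hashlib

KNOWN_BAD_HASHES = {"<sha256 of some known malware sample>"}   # AV-style blocklist
TRUSTED_SIGNERS = {"Microsoft", "Apple"}                        # allowlist of signers


def antivirus_allows(binary: bytes) -> bool:
    # Known-bad model: run anything that does not match the blocklist.
    return hashlib.sha256(binary).hexdigest() not in KNOWN_BAD_HASHES


def known_good_allows(verified_signer: str | None) -> bool:
    # Known-good model: run only code whose verified signer is trusted;
    # unsigned code, or code from an unknown signer, simply does not run.
    return verified_signer in TRUSTED_SIGNERS
```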

When only known-good sources can produce software that people can run, the first question to ask is who selects those sources.  I personally doubt that Intel itself will do it directly — they’ve had enough antitrust problems with the FTC and the EU without the accusation that they now control the entire software market — but I’ll let those with more expertise in antitrust law discuss that.  The next obvious answer is the operating system vendors: Microsoft, Apple, all the myriad Linux vendors, etc.  Some organizations may wish to install their own; I discuss this below.

Let me back up and explain exactly what appears to be going on.  (I say “appears to be” because we don’t have technical details yet.)  One or more privileged entities, known generically as trust anchors and referred to more specifically as certificate authorities, issue cryptographic certificates to as many parties as they wish.  These certificate owners in turn can digitally sign code — i.e., make the cryptographically-verifiable assertion that only they could have produced that code — or (if permitted by their own certificates) issue subcertificates to other parties, ad infinitum.  I’ll give a concrete example.  Suppose that there is one ultimate trust anchor, Intel.  Intel issues a digitally-signed code-signing certificate to Microsoft, which in turn issues a certificate to Xyzzy Corp.  Xyzzy in turn issues certificates to its browser division and to its hardware device driver division.  When you, on your desktop, try to run their browser, it goes through a recursive validation process.  First, it checks if the browser is properly signed by some certificate; if the code has been tampered with, that check will fail.  If it succeeds, the system checks the validity of the signature on the certificate.  That certificate was issued by Xyzzy.  Its certificate was signed by Microsoft, which in turn has a certificate signed by Intel.  And how does the PC know that that certificate is valid?  It’s embedded in the hardware, courtesy of Intel’s new strategy.
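
In case it helps to see the shape of this, here is a rough sketch in Python of that recursive validation, using the hypothetical Intel, Microsoft, and Xyzzy parties from the example.  It leans on Ed25519 signatures from the widely available `cryptography` package purely for illustration; a real design would involve X.509 certificates, usage constraints, expiration, and much else, and of course we don't yet know what Intel actually plans.

```python
# A toy model of the chain of trust described above.  All parties are the
# hypothetical ones from the example; this is a sketch, not Intel's design.
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def raw(pub: Ed25519PublicKey) -> bytes:
    """Raw public-key bytes, used in signed blobs and for anchor comparison."""
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)


@dataclass
class Cert:
    subject: str                  # whom this certificate names
    public_key: Ed25519PublicKey  # the key the subject signs with
    issuer: "Cert | None"         # None marks a trust anchor (root)
    signature: bytes              # issuer's signature over subject + key


def issue(issuer_cert: Cert, issuer_key: Ed25519PrivateKey,
          subject: str, subject_key: Ed25519PrivateKey) -> Cert:
    """The issuer vouches for the subject by signing its name and public key."""
    pub = subject_key.public_key()
    return Cert(subject, pub, issuer_cert,
                issuer_key.sign(subject.encode() + raw(pub)))


def chain_ok(cert: Cert, anchor_pub: bytes) -> bool:
    """Recursive validation: every link must verify, and the chain must end
    at the trust anchor whose key is embedded in the hardware."""
    while cert.issuer is not None:
        blob = cert.subject.encode() + raw(cert.public_key)
        try:
            cert.issuer.public_key.verify(cert.signature, blob)
        except InvalidSignature:
            return False
        cert = cert.issuer
    return raw(cert.public_key) == anchor_pub


def code_ok(code: bytes, code_sig: bytes, cert: Cert, anchor_pub: bytes) -> bool:
    """Run code only if its signature verifies under a certificate that
    chains up to the embedded anchor."""
    if not chain_ok(cert, anchor_pub):
        return False
    try:
        cert.public_key.verify(code_sig, code)
        return True
    except InvalidSignature:
        return False
```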

If everyone is playing honestly, there’s some protection here; no code not authorized by this chain leading up to Intel can run.  If anyone tampers with a legitimate program or certificate, the digital signature will fail; similarly, it is believed to be impossible to forge a signature without breaking the cryptography or stealing someone’s key.  But it all depends on who the trust anchors are.  If they’re too tightly controlled, there is too much central control; if there are too many, who’s to stop EvilHackerDudez.com from getting a code-signing certificate?
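
Continuing that sketch, here is the honest-case behavior, along with the two failure modes just described: tampered code fails its signature check, and a chain that does not end at the embedded anchor is rejected.  The parties and the "binary" are still entirely made up.

```python
# Continuing the sketch: build the hypothetical chain and sign a "browser".
intel_key = Ed25519PrivateKey.generate()
intel_cert = Cert("Intel (trust anchor)", intel_key.public_key(), None, b"")
ms_key = Ed25519PrivateKey.generate()
ms_cert = issue(intel_cert, intel_key, "Microsoft", ms_key)
xyzzy_key = Ed25519PrivateKey.generate()
xyzzy_cert = issue(ms_cert, ms_key, "Xyzzy Corp", xyzzy_key)
browser_key = Ed25519PrivateKey.generate()
browser_cert = issue(xyzzy_cert, xyzzy_key, "Xyzzy browser division", browser_key)

anchor = raw(intel_key.public_key())          # what the hardware would embed
browser_code = b"...the browser binary..."
browser_sig = browser_key.sign(browser_code)

assert code_ok(browser_code, browser_sig, browser_cert, anchor)             # runs
assert not code_ok(browser_code + b"!", browser_sig, browser_cert, anchor)  # tampered

# A chain rooted anywhere other than the embedded anchor is also rejected.
rogue_key = Ed25519PrivateKey.generate()
rogue_root = Cert("Not Intel", rogue_key.public_key(), None, b"")
rogue_cert = issue(rogue_root, rogue_key, "Xyzzy browser division", browser_key)
assert not code_ok(browser_code, browser_sig, rogue_cert, anchor)
```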

A crucial question, then, is how many trust anchors will exist.  Verifying a signature, whether of a program or of a certificate, is an expensive operation, though hardware assists can help tremendously.  It’s possible to cache signature verifications, so that if you’ve recently verified the certificate of the Xyzzy browser division all the way to the trust anchor you don’t have to do so again.  All of that works better, though, if there’s a reasonably small number of certificates (or programs) to verify.  Perhaps, depending on Intel’s design decisions, there would be room for only a very few trust anchors.
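
Building on the same toy code, the caching might look something like this; the fingerprinting scheme is a simplification of my own, not anything Intel has described.

```python
import hashlib

# Continuing the sketch: remember which certificates have already been
# validated all the way to the anchor, and skip the public-key math next time.
_verified: set[bytes] = set()


def chain_ok_cached(cert: Cert, anchor_pub: bytes) -> bool:
    fp = hashlib.sha256(cert.subject.encode() + raw(cert.public_key)
                        + cert.signature).digest()
    if fp in _verified:
        return True                # validated recently; no signature checks needed
    ok = chain_ok(cert, anchor_pub)
    if ok:
        _verified.add(fp)
    return ok
```

A real system would also have to bound the cache and invalidate entries when certificates expire or are revoked, which is where much of the engineering difficulty lies.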

It’s instructive to look at how browsers handle the same problem.  Every mainstream browser ships with a built-in set of trust anchors; for Firefox, there are about 180, selected by Mozilla.  These are used to authenticate secure web sites, that is, sites to which you set up encrypted connections.  Trying to override this list is painful, difficult, and accompanied by blood-curdling warning messages.  It is fair to say that most consumers and small businesses will never change this list.  Large companies may add their own, either for code developed in-house or for trusted vendors.  Conversely, they may delete trust anchors, to prevent unauthorized code from running on corporate machines.  This is a dream of many IT managers, but would likely impede corporate agility; most interesting new software developments, including the web itself, were pushed from the bottom up.
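
In terms of the toy code above, that add-and-delete behavior amounts to managing a per-machine table of anchor keys.  Everything named here is hypothetical.

```python
# Continuing the sketch: the machine's anchor store maps a name to raw
# public-key bytes.  The default list is whatever the vendor ships (the
# analogue of Firefox's ~180 entries); the names here are made up.
internal_ca_key = Ed25519PrivateKey.generate()

installed_anchors: dict[str, bytes] = {
    "Intel (trust anchor)": anchor,
    # ...plus, in the browser analogy, well over a hundred more like it...
}

# A large company adds an anchor for code developed in-house,
installed_anchors["Example Corp internal CA"] = raw(internal_ca_key.public_key())

# while a locked-down corporate desktop deletes every anchor it doesn't need.
installed_anchors = {name: key for name, key in installed_anchors.items()
                     if name == "Example Corp internal CA"}


def policy_allows(code: bytes, sig: bytes, cert: Cert) -> bool:
    """Run code only if its chain ends at some currently installed anchor."""
    return any(code_ok(code, sig, cert, key)
               for key in installed_anchors.values())
```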

Suppose that we get the same set of about 180 trust anchors.  What does this mean?

We probably won’t get much real security.  Matt Blaze observed a long time ago that “commercial certificate authorities protect you from anyone from whom they are unwilling to take money”.  If they don’t vet their clientele for anything other than their corporate names, there’s no protection; EvilHackerDudez can easily create a subsidiary named Advanced Integrated Software Security Research Corporation and let it get the certificate.  In the web model, any trust anchor can issue a fake certificate with any given corporate name.  This has become an issue in the Web world, since Firefox now includes a Chinese company on its trust anchor list; this company, perhaps at the behest of the Chinese government, could do things like issue fake Gmail certificates to help capture dissidents’ email passwords.
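
The underlying problem is visible even in the toy code: nothing in the validation logic asks which anchor is entitled to vouch for which name, or who is actually behind that name.  Continuing the sketch, with entirely fictitious parties:

```python
# Any installed anchor can mint a certificate bearing any name at all, and it
# validates exactly as well as a legitimate one.
careless_ca_key = Ed25519PrivateKey.generate()
careless_ca = Cert("Careless Commercial CA", careless_ca_key.public_key(), None, b"")
installed_anchors["Careless Commercial CA"] = raw(careless_ca_key.public_key())

# EvilHackerDudez, doing business under a respectable-sounding name:
evil_key = Ed25519PrivateKey.generate()
evil_cert = issue(careless_ca, careless_ca_key,
                  "Advanced Integrated Software Security Research Corporation",
                  evil_key)

malware = b"...a 'cool' screensaver with a surprise inside..."
malware_sig = evil_key.sign(malware)

# The check passes: nothing above ever asked whether this particular anchor
# should be vouching for this particular name, or vetted who is behind it.
assert policy_allows(malware, malware_sig, evil_cert)
```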

Will governments get their own code-signing certificates?  Which ones?  Years ago, there was the accusation — valid, in my opinion — that Microsoft added a certificate at the behest of the NSA.  Which governments do you trust?

There’s another danger: cryptographic code-signing keys can be stolen.  This has been happening; in at least two recent cases, the Stuxnet worm and a very recent exploit against Adobe’s PDF viewer, the perpetrators were able to sign their malware to make it appear that it came from very legitimate sources.  I should add that both of these attacks were extremely sophisticated and dangerous.

To put this all in legal terms, signed code no more protects against malware than a signed contract guarantees certain performance.  At most, both provide accountability.  In the event of malware or non-performance, you can seek recourse — if you can afford it, and if they can pay up, and if the signature on the code or contract wasn’t forged in the first place.  But here, we have a considerable downside: our computers will only execute code from someone on the approved list.  This will likely pose a significant hurdle to legitimate but small firms, and will certainly inhibit experimentation.

There’s one more potential danger I want to point out.  The exact format of an executable file is a complex matter and strongly tied to the particular operating system it runs on.  The more the signature verification hardware knows about the format, the better job it can do of blocking malware.  (A detailed explanation of why this is so is highly technical, and well beyond the scope of even this post.)  But this may mean that Intel will favor its biggest partners, Microsoft and Apple.  Will this act to discourage new OS vendors that have a very different model of what an executable file looks like?
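
Even at a surface level the issue is visible: before the hardware can check a signature, it has to recognize the container and find the signature inside it, and that layout differs for every operating system.  A toy sketch follows; the magic numbers are real, but everything else is simplified away.

```python
def executable_format(binary: bytes) -> str:
    """Identify the container; where the signature lives, and what it has to
    cover, is different for each format."""
    if binary[:2] == b"MZ":                                       # Windows PE/COFF
        return "PE"
    if binary[:4] == b"\x7fELF":                                  # Linux and other Unix systems
        return "ELF"
    if binary[:4] in (b"\xcf\xfa\xed\xfe", b"\xce\xfa\xed\xfe"):  # macOS Mach-O
        return "Mach-O"
    return "unknown"   # no idea where a signature would even be stored
```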

To sum up: this new scheme will provide minimal protection, but will deter innovation and generativity.  Worse yet, the degree of protection provided is inversely proportional to the damage done.

8 Responses

  1. Abed BenBrahim says:

    I fail to see how this will provide any security, real or otherwise. Everyone who develops software (IT departments, shareware authors, consultants, or anyone who claims to do so) will obtain a code signing certificate, as is the case today. The fact that a program is signed by “Acme Software Solutions” or “John Smith Software” will not tell me whether the “cool” screensaver I downloaded is a trojan, any more than Windows asking me whether I want to allow a program I use every day to modify the system increases the security of my system.

  2. A.J. Sutter says:

    Sounds like the revenge of “financial innovation” — the rating agency paradigm is backwashing into tech. There have been some less than successful attempts to analogize IP and finance recently; but this seems like the killer app of ironic symmetry.

  3. Steven Bellovin says:

    Abed: it can provide real security, but only under certain conditions. If a corporation configures its machines to run only software from, say, Microsoft, Adobe, and its favorite hardware vendor, by installing just those three trust anchors, code from John Smith Software won’t install. Of course, that assumes that the list is short enough to be manageable (it probably won’t be) and that code from trusted vendors is safe (which it isn’t — Adobe’s software has gotten a lot of very bad press of late, to name just one vendor among very many). But as a general model — you’re right; no chance. See http://www.mail-archive.com/cryptography@metzdowd.com/msg11801.html for a very recent technical rant on why the very concept of these certificate chains — known as PKI, or public key infrastructure — doesn’t work very well.

  4. Andy Steingruebl says:

    Steve,

    One feature these schemes do provide in terms of accountability is revocation.

    Many of the exploits that target vulnerable software do require subsequently running their own executables. Depending on the nature of the attack, code signing does offer a significant hurdle to some forms of malware, because many of its side effects on the system mean that the new binary it installs either fails to verify or cannot execute at all.

    Also, whether they are perfect or not, they represent one control in the ecosystem. Perfect security – no. Nothing is, though, so criticizing this for not preventing all attacks isn’t really fair.

  5. Steve Tate says:

    Steve,

    Maybe I’m missing something, and the linked news story is pretty much worthless, but what exactly is the big deal here? How is this different from Microsoft’s Authenticode? Since the OS is responsible for anything that loads, this seems like an OS/software issue, not a hardware issue (unlike Trusted Computing Group-style measurements, which really do require hardware support).

    Is there an implication that blocks of binary code will have to be authenticated and verified at the hardware level? Seems like a bit of an overreach to me, but here I’m doing just what annoyed me about the Ars Technica article: making wild speculation with no basis in actual facts.

  6. Steven Bellovin says:

    Andy: thanks for the comments.

    It’s unclear that revocation actually works properly; in http://www.mail-archive.com/cryptography@metzdowd.com/msg11324.html, Peter Gutmann discusses some of the philosophical issues. Among them is the fact that the compromised key was used to sign a lot of legitimate, crucial pieces of code; revoking it would “brick” a lot of systems that weren’t at risk from Stuxnet. This seems hard to fix, even in principle; while one could have, say, a separate key per legitimate product, we’re dealing here with a stolen key, one that was used for some legitimate purpose as well as for the worm. The author notes ‘So alongside “too big to fail” we now have “too widely-used to revoke”.’

    You are certainly correct that in some cases, signed code can raise the bar. I alluded to the Stuxnet worm; a recent news story (http://www.computerworld.com/s/article/9185919/Is_Stuxnet_the_best_malware_ever_?taxonomyId=17) quotes researchers as speculating that a nation state was behind it. It is the most sophisticated attack ever found, according to the story; by contrast, the attack that Google linked to the Chinese government was “child’s play”. It is not the standard against which we should judge other attacks. The new attack against Adobe seems less sophisticated, though.

    I use two metrics when evaluating a proposed security solution: how does the cost of the mechanism compare to the harm it prevents, and how does the cost of the mechanism compare to what it will cost the attackers to counter it? Here, the cost in terms of lost generativity is, I think, quite high. And countering it? That depends on the details, and in particular just how the set of acceptable certificates is defined. If it’s like the web security model, it fails trivially, as Abed points out above. More restricted lists? The question then turns on how easy it is to steal keys. Gutmann points out that there are many pieces of malware already in existence that steal keys. So — are we fighting the last war? (I think we are, but that’s the subject of an entirely different post — article, more likely — that currently exists only as a slide deck on my web site.)

  7. Steven Bellovin says:

    Steve: Yes, I think it was talking about hardware verification of executables. Apart from the fact that Intel is a chip company, not a software company, the article spoke of making changes to the “x86 ISA”. “x86” is geek-speak for the series of Intel chips that runs from the original IBM PC (the 8088) through the Pentium (80586) and later variants. “ISA” is “instruction set architecture”; when the article says “Otellini went on to briefly describe the shift in a way that sounded innocuous enough–current A/V efforts focus on building up a library of known threats against which they protect a user, but Intel would love to move to a world where only code from known and trusted parties runs on x86 systems”, to me that means changing the chips to enforce it.

    You’re certainly correct that one can do this just with software. That is, after all, what Apple does for the iPhone. But hardware restrictions are more difficult to evade, which is presumably Intel’s goal.

  8. Bill Cheswick says:

    Your arguments against the signed code are not convincing: the fact that we don’t edit down the trust entries in Firefox doesn’t mean we couldn’t or shouldn’t. In fact, I would like to see certificate usage information and reporting (perhaps there is already a plugin) to help those who care edit the list down.

    A shorter list should be a clear goal for those running a corporate intranet. Also, the weekend sys admins taking care of grandma ought to be able to find and install recommended trust lists. She doesn’t need generativity.

    I agree that there are problems with implementation (I support static binaries, which should help), revocation, and lost keys.

    Your downsides are a concern, but you don’t mention the other side: virus protection is a mug’s game, eventually doomed to failure in theory, and already failing in practice.