On Saturday, January 24, 2004, I attended an all-day class on Trusted Computing hosted by the Freedom Technology Center in Mountain View, CA. The class was taught by the Electronic Frontier Foundation’s Staff Technologist, Seth Schoen.
You could get the basic flavor of the class by reading Seth’s articles, Trusted Computing: Promise and Risk and Give TCPA an Owner Override. He is writing a book on Trusted Computing, and spent the day taking us through his outline in detail.
This was an excellent class because of Seth. He is able to explain extremely technical hardware specifications to someone who is not an electrical engineer or even a computer scientist. He also has a balanced view of Trusted Computing, and took the time to point out the possible benefits of this technology along with the potential abuses.
We began with a discussion of some of the basic problems of computer security. Example: Presently, it is difficult, if not impossible, to know with certainty whether your computer is doing what you think it is doing and only what you think it is doing. That is, if you’ve ever left your computer physically unattended on your desk, or if you’ve ever been on the internet or a network without a completely patched system, or even if the manufacturer of your computer installed your O/S for you, then for all you know you could right now be infected with a boot sector virus that starts prior to your O/S, takes control of key features of the O/S, and systematically fools any anti-virus software (or other security tool) that subsequently runs. The problem is probably worse if you need to know with certainty that a remote computer you wish to communicate with has not been compromised.
We also discussed the security problem that the Roman poet and satirist Juvenal noted as long ago as the first century A.D.: “Who will watch the watchers?” Your anti-virus program, and indeed any other security tool, can be compromised just like a regular application can, and then cheerily report that all is well. The basic upshot of this preliminary background was that current computer security poses some fairly intractable problems.
Enter Trusted Computing. The amazing thing about these chips is that, if implemented as planned, I think they would actually go some way toward solving some of the very hard security problems discussed above.
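The core idea behind that claim is a chain of measurements: each boot stage hashes the next one before handing control to it, folding the result into a Platform Configuration Register (PCR) inside the TPM. Here is a minimal sketch of the extend operation (TPM 1.2 specifies SHA-1 for this; the stage names are just placeholders):

```python
import hashlib

def extend(pcr, measurement):
    """TPM-style extend: new PCR = SHA-1(old PCR || SHA-1(measurement)).
    Order matters, so the final value encodes the entire boot sequence."""
    digest = hashlib.sha1(measurement).digest()
    return hashlib.sha1(pcr + digest).digest()

# PCRs start zeroed at reset; each stage measures the next before running it.
pcr = b"\x00" * 20
for stage in [b"BIOS", b"bootloader", b"kernel"]:
    pcr = extend(pcr, stage)

# A rootkit that swaps the bootloader yields a different chain,
# even if every later stage is untouched.
evil = b"\x00" * 20
for stage in [b"BIOS", b"evil bootloader", b"kernel"]:
    evil = extend(evil, stage)

print(pcr != evil)  # the tampering shows up in the final PCR value
```

Because the chain starts in hardware, a compromised boot sector cannot retroactively hide itself the way it can from software-only scanners.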
The next four hours or so were spent detailing the four different initiatives out there that fall under the heading of Trusted Computing: TCPA (now known as TCG), Intel’s LaGrande, AMD’s SEM, and Microsoft’s Palladium (now known as NGSCB). This was probably the most valuable part of the day, because understanding how this stuff works, and why one might be motivated to design it this way, is necessary in order to begin to think of alternative designs that might achieve similar ends with less potential for abuse, or to discuss it intelligently at all.
We spent a lot of time looking at the four main features of trusted computing, which are:
- Sealed Storage
- Secure I/O
- Memory Curtaining
- Remote Attestation
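To make the first of these concrete, here is a toy model of sealed storage: a secret is bound to a particular PCR value, so it can only be recovered when the platform is in the same measured state it was sealed under. (A real TPM does this with RSA keys held inside the chip; the XOR keystream here is purely illustrative.)

```python
import hashlib

def seal(secret, pcr):
    """Toy sealed storage: XOR the secret with a keystream derived from
    the current PCR value. Unsealing under a different PCR yields garbage."""
    stream = hashlib.sha256(pcr).digest()
    return bytes(a ^ b for a, b in zip(secret, stream))

def unseal(blob, pcr):
    return seal(blob, pcr)  # XOR is its own inverse

# Hypothetical platform states, represented by their PCR values.
good_pcr = hashlib.sha1(b"approved software stack").digest()
bad_pcr = hashlib.sha1(b"modified software stack").digest()

blob = seal(b"disk encryption key!", good_pcr)
print(unseal(blob, good_pcr))  # original secret comes back
print(unseal(blob, bad_pcr))   # garbage: the platform state changed
```

This is the mechanism that lets, say, a disk encryption key survive a stolen laptop but refuse to appear under a tampered boot chain.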
One key thing that I do not think is widely known is the extent to which all of this hardware is walled off from the rest of the machine. It will be touted as an “opt-in” system, so that if you do not want to use the trusted computing chip (the TPM), you need not. You can continue running Linux, BSD, or OS X and nothing has changed. It’s true that the TPM could conceivably be running nefarious programs that report on you, but the design is such that these reports would be sent through the regular part of your computer where you maintain control. So, a firewall or other software on that side could detect any action of the TPM that you did not initiate.
Of course, lots of things are “opt-in” in name, but in practice, given other considerations, you can be left with little real choice. This is a big problem I will save for later. The point is that the story is not as simple as many Slashdot posters frame it. It’s not “Microsoft wants to crush Linux and so they are going to force a chip down consumers’ throats that will make it impossible to install a non-MS O/S.” In fact, the only TPM that you can buy right now comes in an IBM laptop that runs Linux! You can read a fairly technical article about this.
But there is the potential for abuse. Since that’s what everyone wants to hear about, here’s the scoop. This architecture takes problems that we have now, which can be worked around (sometimes only through extreme measures by super-geeks), and makes them truly insurmountable:
- Software Lock-in
- Software tethering to a single computer
- Prevention of Software Inter-operability
- Forced DRM restrictions
- Forced Upgrades/Downgrades
- Total Elimination of Software Reverse Engineering
- Truly Undetectable Spyware/Adware
- Hardware Lock-in
But what I really learned is that these potential abuses are not really the problem. This stuff is coming and I don’t think we’re going to stop it. The real problems are 1) Microsoft’s 90%+ market domination and 2) consumer apathy. The potential abuses mentioned above only truly become frightening when combined with these additional realities. When so many people use a Microsoft OS, and when so many people do not care about or understand most of the potential abuses listed above, then we get a far greater likelihood that these potential abuses will become real abuses.
I think our best temporary defense is that IBM and Sun are members of the TCG; given their interests in operating systems other than Windows, they are not going to do something that would allow O/S lock-in, whether in principle or in practice. The fact that so many internet servers run on other O/Ss also makes it difficult to imagine that non-MS O/Ss could be kicked off the internet, for instance. (This could happen if your ISP’s router had a TPM chip and a policy requiring all connecting computers to prove they were running the latest Windows OS with all patches applied.)
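The router scenario amounts to a verifier-side policy check against a whitelist of approved software stacks. Everything in this sketch is hypothetical (the whitelist, the stack names), and real remote attestation also involves a TPM-signed quote, which is omitted here; the point is only how mechanically simple the lockout would be:

```python
import hashlib

# Hypothetical whitelist held by the ISP's router: the PCR values of
# the only software stacks it will admit to the network.
APPROVED = {
    hashlib.sha1(b"Windows + latest patches").digest(),
}

def admit(quoted_pcr):
    """Admit a machine only if its quoted PCR matches an approved stack."""
    return quoted_pcr in APPROVED

windows_pcr = hashlib.sha1(b"Windows + latest patches").digest()
linux_pcr = hashlib.sha1(b"Linux 2.6").digest()

print(admit(windows_pcr))  # True
print(admit(linux_pcr))    # False: locked out by policy, not by merit
```

Nothing in the hardware distinguishes “unpatched and dangerous” from “merely unapproved”; that judgment lives entirely in whoever controls the whitelist.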
Personally, I think the fact that such an architecture makes reverse engineering of software in principle impossible is enough reason to scrap the whole thing. I doubt most people care that much about reverse engineering, though. The only avenue I see for motivating widespread consumer concern is to hammer on the very real possibility of undetectable spyware. Sadly, many people don’t even care about their privacy, so this may not work either.
When our audience is the industry and not consumers, then Seth’s proposal of an owner override to attestation becomes a pretty great idea. It defeats some of the benefits of the architecture, but also prevents some of the abuses.
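As I understand the proposal, an owner override would let the machine’s physically present owner substitute the value the TPM reports to a remote challenger. A rough sketch of the idea (the function and names are mine, not from any specification):

```python
import hashlib

def attest(actual_pcr, owner_override=None):
    """Owner-override attestation: by default, report the machine's real
    measured state; but a physically present owner may substitute a value
    of their choosing, so remote parties cannot force policy on the
    owner's own hardware."""
    return owner_override if owner_override is not None else actual_pcr

real = hashlib.sha1(b"Linux stack").digest()
expected = hashlib.sha1(b"approved Windows stack").digest()

print(attest(real) == expected)            # False: honest report fails the check
print(attest(real, expected) == expected)  # True: the owner decides what to say
```

The trade-off is visible right in the sketch: a remote party can no longer trust the report against the owner, which defeats abusive lock-in, but it equally defeats legitimate uses where you wanted assurance that the owner’s machine was uncompromised.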
Overall, I think framing the question this way might be best: Do we want to continue to have computers over which the individual has total control, or do we want computers where we give up part of our control to the hardware itself or to a third party? The thing about total individual control is that individuals are sometimes up to no good, or are too lazy or uninformed to keep their systems secure, and so some harm comes from giving them total control over their computers. But the best argument here might be: That’s OK. We simply prefer to live in a world where we control our computers. Even if ceding some of that control brought us better security in some instances, we might simply say: So what?