Or: Trust in Centralized Authority-Based Systems.
Just recently, Comodo (the Certificate Authority [CA] and creator of the popular Comodo Internet Security software) issued nine fraudulent certificates to as-yet-unidentified hackers. The people responsible were traced to Iran, and the certificates were for mail.google.com, www.google.com, login.yahoo.com, login.skype.com, addons.mozilla.org, login.live.com, and "global trustee". That last one makes no sense, granted, but the rest are significant sites with obvious potential for abuse, especially for someone based in Iran. With those certificates, those responsible could falsely claim to be those sites, and web browsers would gladly agree. And this isn't even the first time a false certificate has been issued by a legitimate CA. Comodo has since revoked those certificates, but that's pretty much useless, given how unreliably browsers actually check revocation. Mozilla and Google have quietly patched their browsers to blacklist the bad certs (which brings up other issues entirely), but that's really a band-aid and doesn't solve the core problem.
The core problem is trust. The way the system is currently set up, our computers trust those certificate authorities absolutely and mindlessly, and this incident brings that trust into serious question. And it isn't the only questionable trust-based system we depend on. DNS has had its own issues, with special mention of ICE's recent "Operation: In Our Sites" and its legally questionable actions. The issues can even be extrapolated as far as the US's nuclear launch system.
So the first question is: should we trust them? The short answer is no. The long answer is "not as we do now". I'll get more into that later. The second question is then: what do we do about it? That's really the hard one.
There's a lot of talk these days about distributed systems, with varying levels of success. PGP and similar software use the concept of a "web of trust" instead of the absolute authority of CAs. This works fairly well for what it is, but has issues with scale. On the DNS front, ICE's actions have prompted a lot of talk about a "decentralized DNS" based on the peer-to-peer model. It's an interesting idea, but nothing is really working yet.
So distributed models exist, and at least on some level work. But are they the way to go? I'm not sure. I think there's a lot of merit to the centralized authority systems, when they work. There's a lot of value in a trustworthy authority checking the veracity of a group's identity, distributing accepted names, and providing other services. It's efficient, and the authorities can put a lot of effort into ensuring correctness. The problem is that they're not perfect, and break down more often than we'd like. Some of them are built well, some of them have fundamental flaws. But either way, it's still people making the decisions and providing the authority, and people are susceptible to all sorts of failures. Honest mistakes, laziness, and corruption can all break what would otherwise be a good system. So how do we protect ourselves from those kinds of failures?
Well, my inner engineer's first answer is "redundancy". In the case of certificates, requiring each one to be signed by multiple CAs would go a long way towards ensuring veracity. It may not be hard to fool one CA, but fooling three or four simultaneously about the same bit of information would be significantly harder (I can't really verify that because I don't know their specific methodology, but it should be true). On the DNS side, an ISP can really only alter its own DNS records, so checking multiple root DNS servers could act as a sort of "sanity test" for the records. It wouldn't do much if a man-in-the-middle knew which servers you were going to check, but hopefully DNSSEC will be pushed out sometime this century.
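To make the DNS half of that idea concrete, here's a minimal sketch of such a sanity test: ask several independent resolvers for the same record and flag any disagreement. It queries a few well-known public recursive resolvers rather than the roots, and assumes the third-party dnspython package; the resolver addresses and hostname are just examples.

```python
# A minimal sketch of the DNS "sanity test": ask several independent
# resolvers for the same A record and flag any disagreement.
# Assumes the dnspython package (2.x); the resolver IPs are just
# well-known public resolvers used as examples.
import dns.resolver

RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]

def cross_check(hostname):
    answers = {}
    for ip in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]
        try:
            records = sorted(r.address for r in resolver.resolve(hostname, "A"))
        except Exception as exc:
            records = [f"error: {exc}"]
        answers[ip] = records
    consistent = len({tuple(v) for v in answers.values()}) == 1
    return answers, consistent

if __name__ == "__main__":
    answers, consistent = cross_check("www.example.com")
    for server, records in answers.items():
        print(server, records)
    print("all resolvers agree" if consistent else "MISMATCH - investigate")
```

One caveat: sites behind CDNs or geo-aware DNS legitimately hand different addresses to different resolvers, so a real tool would want to compare something more stable than the raw A records (the certificate served at each address, say) before crying foul.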
The big problem that redundancy doesn't solve, however, is the case where the authorities are just wrong. "In Our Sites" rewrote the "authoritative" DNS records themselves, for questionable reasons. Sometimes you want to trust (or not trust) groups regardless of what the CAs say. I think this is really where decentralized systems fit in: as a supplement to and extension of the authoritative systems, not a full replacement. Falling back to such systems after testing the authoritative ones (or double-checking the authoritative answers against the decentralized ones) could go a long way towards weakening the power of the authorities without necessarily weakening their usefulness.
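As a rough sketch of what that supplement-not-replacement arrangement might look like (the decentralized lookup here is entirely hypothetical, just a function passed in alongside the authoritative one):

```python
# A rough sketch of treating a decentralized lookup as a supplement to
# the authoritative one, not a replacement. Both lookups are passed in
# as plain functions; any real peer-to-peer naming layer is hypothetical.
def resolve(name, authoritative_lookup, decentralized_lookup):
    try:
        official = authoritative_lookup(name)
    except LookupError:
        official = None

    if official is None:
        # The authority has no answer (or withdrew it, as with a seized
        # domain): fall back to the peers.
        return decentralized_lookup(name)

    # Otherwise keep the official answer, but note any disagreement so
    # the authority's track record can be weighed over time.
    if decentralized_lookup(name) != official:
        print(f"warning: peers disagree with the authority about {name}")
    return official

# Toy usage with dictionaries standing in for the two systems.
official_records = {"example.com": "93.184.216.34"}
peer_records = {"example.com": "93.184.216.34", "seized.example": "203.0.113.7"}
print(resolve("example.com", official_records.get, peer_records.get))
print(resolve("seized.example", official_records.get, peer_records.get))
```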
In other words, limit the trust. It would be useful, long-term, to be able to quantify an authority's trustworthiness based on a decentralized system. If ICANN repeatedly bowed to ICE's requests, it would be good to know that it's not as trustworthy as we'd like, and that it's more important to search other sources. If a CA has been issuing a bunch of fraudulent certificates, then certs from that authority should be taken with a grain of salt, and ideally double-checked.
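I don't have that fuzzy logic worked out, but even a crude version of the idea is easy to sketch: give each authority a score that drops with every incident, and require independent corroboration once it falls below some threshold. The numbers below are arbitrary placeholders, not a considered policy.

```python
# A toy sketch of "limited trust": each authority carries a score that
# decays with every incident, and low-scoring authorities need their
# assertions corroborated elsewhere. Thresholds and severities are
# arbitrary placeholders.
from dataclasses import dataclass

@dataclass
class Authority:
    name: str
    trust: float = 1.0  # 1.0 = fully trusted, 0.0 = untrusted

    def record_incident(self, severity: float) -> None:
        # severity in (0, 1]; each incident multiplies the score down.
        self.trust *= 1.0 - severity

def accept(authority: Authority, corroborated: bool, threshold: float = 0.8) -> bool:
    """Take a high-trust authority at its word; otherwise demand a second source."""
    return authority.trust >= threshold or corroborated

ca = Authority("SomeCA")
ca.record_incident(0.5)                # e.g. a batch of fraudulent certs
print(ca.trust)                        # 0.5, now below the 0.8 threshold
print(accept(ca, corroborated=False))  # False: needs double-checking
print(accept(ca, corroborated=True))   # True
```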
I'm not sure where exactly the balance point is, nor how to implement the fuzzy logic that this would require. It's an interesting problem, and one that's becoming more and more relevant as the centralized systems continue to abuse or misuse their authority. It's important to remember that no one is absolutely trustworthy, and I think it's about time we taught our computers to understand that.
*title blatantly ripped from Bruce Schneier's blog