
Speaking of Government Backdoors

After Alex Stamos’ standoff with Admiral Mike Rogers, I got to thinking about what the Admiral must have meant when he insisted that government “front doors” could be built in a way that didn’t give the government ultimate access. Then a story came out about a split-key approach that is being studied. Let me explain why that is a bad idea and propose a technically less dangerous one.

Setting aside any conversation about the ethics, the legal conundrums, the loss of trust, the weakening of freedoms, the chilling effect, or the future in which we have to provide similar access to any government that asks, there are some legitimate technical reasons this design is bad. First, a brief primer on how split keys work.

Let’s take a simple encryption algorithm that just uses the password “Will Wheaton” to decrypt the ciphertext. Now let’s say government agency A (the FBI/NSA or some similar organization) holds the first half of the password: “Will”. “Will Wheaton” is already a very weak password, but it becomes dramatically weaker once one party knows half of the secret. It gets worse. Let’s say government agency B (the FISA court) holds the second half: “Wheaton”. Eventually the two halves have to be combined somewhere: a physical place where both halves are typed in at the same time. Let’s call it a SCIF for argument’s sake.
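To make that weakening concrete, here is a toy sketch in Python (the password split and the alphabet are illustrative assumptions from the example above) showing how much brute-force work disappears once one party holds half of the secret:

```python
# Toy illustration of why handing out half of a shared password is
# dangerous. The password, split, and alphabet are all illustrative.
import string

password = "Will Wheaton"
half_a, half_b = "Will", " Wheaton"   # agency A's half / agency B's half
assert half_a + half_b == password

alphabet = string.ascii_letters + " "  # 53 possible characters

# Brute-force search space for an outside attacker who knows nothing:
full_space = len(alphabet) ** len(password)

# Search space for agency A, which already holds "Will" and only has
# to guess the remaining characters:
remaining_space = len(alphabet) ** len(half_b)

print(f"outsider:           {full_space:.2e} candidates")
print(f"holder of one half: {remaining_space:.2e} candidates")
print(f"work factor drops by {full_space / remaining_space:,.0f}x")
```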

In this example the SCIF becomes the one place where all secrets go, which makes it a prime target for attack. Both parties can now see the data, instead of just one, even though there may be situations where truly only one party should see it. And if the password is the same for every piece of encrypted information across all conversations, abuse is practically guaranteed once both halves are known. Not only is the original encryption significantly easier to break, since each party holds half of the key material, but the design also creates a single place where the two halves must be combined, and that place is far more likely to be abused.

What happens when access to that user’s data is no longer deemed useful? Does the key simply stop working? What if the agencies discover they were mistaken and the data they were looking at is benign? Is there a way to disable their password? No; that’s not how passwords or keys work when they have to work everywhere, all the time. All they can do is ask Apple, or Google, or whoever created the backdoor to change the user’s keys and/or issue a new backdoor password. That’s one of the major drawbacks of this model. Depending on how it was implemented, the process could also inadvertently tip off the suspect if they notice a new key being issued.

Now let’s take a slightly different scenario, in which Apple/Google use a rolling window where the password changes every day: one day it is “Will Wheaton”, the next it is “Darth Vader”, and so on. The FBI/NSA and the FISA court could still subpoena any piece of information, but the request would have to be scoped to a specific time period (say ten days, requiring the ten corresponding keys, each split into two halves, for a grand total of twenty key halves). That way they would have access only to certain pieces of information, only for that one conversation, and nothing outside that time period. This has a better chance of being successful, but it still requires the parties to come together at some point, and it allows both of them to see the resulting classified material.
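Here is a minimal sketch of the rolling-key idea, assuming each day’s key is derived from a provider-held master secret with an HMAC; a real deployment would use a proper key-derivation scheme, and every name below is hypothetical:

```python
# Sketch of per-day key rotation: each calendar day yields an
# unrelated key, so a ten-day warrant needs exactly ten keys and
# unlocks nothing outside that window. master_secret is hypothetical.
import datetime
import hashlib
import hmac

master_secret = b"provider-held master secret (illustrative only)"

def day_key(day: datetime.date) -> bytes:
    """Derive a 32-byte key bound to a single calendar day."""
    return hmac.new(master_secret, day.isoformat().encode(),
                    hashlib.sha256).digest()

monday = day_key(datetime.date(2015, 4, 20))
tuesday = day_key(datetime.date(2015, 4, 21))
assert monday != tuesday  # knowing one day's key reveals nothing else
```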

A more useful approach would be to have four keys for each one-day time slice. Keys 1 and 2 belong to the FBI/NSA, and Keys 3 and 4 belong to the FISA court. Key 1 decrypts to a blob of further encrypted material that can only be fully decrypted by Key 3 (think of Key 1 as the outer layer of an onion and Key 3 as the inner layer that gets you to the center). Likewise, Key 4 decrypts to a blob of encrypted material that can only be fully decrypted by Key 2. That way, neither key alone can be subverted to fully decrypt anything without the other party’s involvement. It also allows either party, or both, to see the resulting material should they need it, but never without each other’s approval, and it guarantees that the key material can’t be abused beyond the time slice of the conversation in question.
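Here is a minimal sketch of that layered construction, using Fernet from the Python cryptography package as a stand-in cipher; the key numbering mirrors the description above, and everything else is an illustrative assumption:

```python
# Layered ("onion") escrow: neither agency's key alone is enough.
from cryptography.fernet import Fernet

# Four per-day keys for one time slice: 1 and 2 belong to the
# FBI/NSA, 3 and 4 to the FISA court.
key1, key2, key3, key4 = (Fernet.generate_key() for _ in range(4))
f1, f2, f3, f4 = (Fernet(k) for k in (key1, key2, key3, key4))

plaintext = b"conversation 1234, Tuesday"

# Blob the FBI/NSA will ultimately read: Key 2 (theirs) is the inner
# layer, Key 4 (the court's) is the outer layer.
blob_for_agency = f4.encrypt(f2.encrypt(plaintext))

# Blob the court will ultimately read: Key 3 inner, Key 1 outer.
blob_for_court = f1.encrypt(f3.encrypt(plaintext))

# The court peels its outer layer first...
partial = f4.decrypt(blob_for_agency)
# ...and only then can the agency finish with its inner key.
assert f2.decrypt(partial) == plaintext
```

The ordering of the layers is the design choice that matters: each party holds the outer key of the copy the other will ultimately read, so whoever wants the plaintext always needs the other’s cooperation first.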

So here is how it would break down:

1. The FBI/NSA ask FISA for approval to decrypt User A’s conversation with User B.
2. FISA agrees, and the FBI/NSA request the Tuesday and Wednesday time slices from Apple/Google.
3. Apple/Google respond with conversation IDs 1234 and 1235 and the corresponding blobs of encrypted text (if the FBI/NSA don’t already have them).
4. The FBI/NSA ask FISA to decrypt the blobs with Key 4 (Tuesday) and Key 4 (Wednesday), corresponding to conversations 1234 and 1235.
5. FISA returns two blobs that are still encrypted and remain useless until the FBI/NSA apply their own Key 2 (Tuesday) and Key 2 (Wednesday) for the same time slices and conversations.
6. The FBI/NSA decrypt the final blobs and are able to read the conversation.

At this point Apple/Google know nothing about the data, only that it was subpoenaed. The FISA court was aware of and complicit in the decryption but never saw the data, and the FBI/NSA got only the data they requested and nothing more. If the court also needs to see the data, the corresponding Keys 1 and 3 are used for the same time slice against the corresponding blobs of data.
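As a toy end-to-end run of that exchange, with each party reduced to a single function and Fernet again standing in for the real cipher (all names are hypothetical):

```python
# Toy end-to-end run of the subpoena flow for one time slice.
from cryptography.fernet import Fernet

key2_tue = Fernet.generate_key()  # FBI/NSA inner key for Tuesday
key4_tue = Fernet.generate_key()  # FISA outer key for Tuesday

def apple_escrow(plaintext: bytes) -> bytes:
    """Apple/Google store the conversation double-encrypted."""
    inner = Fernet(key2_tue).encrypt(plaintext)
    return Fernet(key4_tue).encrypt(inner)

def fisa_peel(blob: bytes) -> bytes:
    """FISA approves and removes the outer layer; it never sees plaintext."""
    return Fernet(key4_tue).decrypt(blob)

def fbi_read(partial: bytes) -> bytes:
    """FBI/NSA remove the inner layer and read the conversation."""
    return Fernet(key2_tue).decrypt(partial)

blob_1234 = apple_escrow(b"conversation 1234, Tuesday")
print(fbi_read(fisa_peel(blob_1234)))
```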

Of course, this is a huge burden, because each user now needs four keys created for each day. Assuming there are 3 billion people online, each using roughly 3 different chat systems per day, that works out to something like 36 billion keys to be shared between the two government agencies (18 billion each) per day. That’s a lot. Keys wouldn’t just be short passwords either, but presumably something like X.509 or GPG certificates, which can be quite large. And all of this assumes the agencies can somehow receive those keys in a way that the other agency (or a malicious third party) can’t intercept or see. The devil lies deep in those details.
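Spelling out the arithmetic, under the rough assumptions stated above:

```python
# Back-of-the-envelope key volume under the stated assumptions.
users = 3_000_000_000            # people online
services = 3                     # chat systems per person per day
keys_per_conversation = 4        # Keys 1-4, rotated daily

total_per_day = users * services * keys_per_conversation
print(f"{total_per_day:,} keys per day")              # 36,000,000,000
print(f"{total_per_day // 2:,} held by each agency")  # 18,000,000,000
```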

Ultimately, though, I think Alex Stamos is right to press the government. Our industry thrives on trust, and if people believe the government is spying on them, they are significantly less likely to transact or to act normally, as themselves. Even if we can solve the technical problems, we have to be extremely thoughtful about how, or even whether, we deploy such a system at all. When one’s only crime is one of thought or ideas, this kind of system dramatically increases the likelihood that freedom of expression will be lost to the annals of time. We all have to decide: would we rather have security in the form of Big Brother, or would we rather have privacy? We can’t have both, so we had better make up our minds now, before the decisions are made on our behalf.

Please check out a similar and wonderfully written post by Matthew Green as well.