Newswise — The FBI is trying to force Apple to write new software code to enable it to break into the iPhone used by one of the attackers who killed 14 people in San Bernardino, Calif., in December 2015. The FBI claims the data on the phone are essential to its investigation—and to national security—and on February 16, 2016, a federal judge ordered Apple to comply with the FBI’s request.

On February 25, Apple filed a court brief asking the judge to withdraw the order, claiming that it violates Apple’s First and Fifth Amendment rights and oversteps the federal All Writs Act, and that complying with it would “inflict significant harm—to civil liberties, society, and national security.”

At a Congressional hearing on the matter on March 1, FBI director James B. Comey, Jr., acknowledged that the agency had lost an opportunity to recover data from the phone when the password for the shooter’s iCloud account was mistakenly reset, which prevented the phone from making a fresh backup to iCloud.

The standoff between Apple and the FBI is the latest chapter in the escalating battle between technology companies that are encrypting data to protect customers’ privacy and security and the US government, which says it needs the ability to access encrypted data to keep America safe.

BU Today spoke to Sharon Goldberg, a College of Arts & Sciences associate professor of computer science and a faculty fellow at the Rafik B. Hariri Institute for Computing and Computational Science & Engineering, about how the case relates to security and the growing debate over encryption.

BU Today: The FBI says that access to the data on the shooter’s iPhone is essential to its investigation. As a network security and data expert, what do you think?

Goldberg: Well, the headlines say, “Apple refuses to unlock terrorist phone.” That sounds really bad if you assume that the information needed for this investigation is only available on the shooter’s phone and can’t be obtained in any other way.

But it’s important to remember that the information we see on our phones is not just stored on our phones. For example, every phone call we make—who we call, how long we spoke, how often we called them, at what time we called them—is stored by the phone company. You can see this information in your phone bill every month. Your phone also has a GPS, and some cell phone providers use it to very accurately track your location.

Your Gmail is stored on Google’s servers. Yahoo mail is on Yahoo’s servers. Facebook messages are stored on Facebook’s servers. None of that information is encrypted end to end; Google, Yahoo, and Facebook can and do share it in response to a search warrant.

Even iMessage, the iPhone text messaging application that encrypts the individual messages sent from one person to another, reveals “metadata”: who is talking to whom, when they are talking, and how frequently. All this information is incredibly revealing.

What about the FBI’s argument that with technology companies’ use of encryption for iPhones, other devices, and internet connectivity, law enforcement can’t get information needed for surveillance and security—that we’re “going dark”?

The Berkman Center for Internet & Society at Harvard recently issued a report analyzing this issue. The report found that we are experiencing the opposite of “going dark.” With people spending more and more time on the internet, the opportunities for collecting information about them are increasing all the time. The growing use of encryption is a natural response to the unprecedented amount of information we put online every day. Encryption helps prevent malicious actors from getting our banking information, stealing our identities, gaining access to our health records, reading our tax returns, and eavesdropping on our communications.

The internet is so, so insecure. If you feel you’re “going dark,” that you can’t do surveillance with the vast amount of information and the vast number of security vulnerabilities out there, then you’re doing something wrong.

So is this case about Apple being asked to break the encryption on the shooter’s phone?

Not exactly. What Apple is being asked to do is engineer a new vulnerability into the iPhone. The FBI wants Apple to write a new piece of software that defeats the security features of the iPhone. But the existence of this piece of software would create a large number of security risks for Apple and its users.

What kind of security risks?

The way the iPhone works right now is that to unlock the phone, you have to manually enter a four- or six-digit passcode. If you enter the passcode incorrectly some number of times—you get 10 guesses—then you’re locked out of the phone; you can’t ever get in.

The FBI wants Apple to write new software that would let it enter password guesses as quickly as it wants, without being locked out. A four-digit passcode can be guessed in at most 10,000 attempts, because that’s how many combinations of digits there are. Since 10,000 is a tiny number for a computer, this new software would all but guarantee that the FBI could unlock the shooter’s phone.
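
To make the arithmetic concrete, here is a minimal Python sketch of that kind of brute-force loop. The try_passcode callback is a hypothetical stand-in for whatever interface the modified software would expose, not Apple’s or the FBI’s actual tooling.

```python
from itertools import product

def brute_force_four_digit_pin(try_passcode):
    """Submit every four-digit passcode, 0000 through 9999.

    `try_passcode` is a hypothetical callback that submits one guess and
    returns True if the phone unlocks. With the lockout and the delays
    between attempts removed, all 10,000 combinations could be tried
    very quickly.
    """
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)   # "0000", "0001", ..., "9999"
        if try_passcode(guess):
            return guess          # found the passcode
    return None                   # exhausted all 10,000 combinations

# Toy demonstration with a stand-in for the real unlock interface:
print(brute_force_four_digit_pin(lambda guess: guess == "4821"))
```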

The FBI says this software will only need to run on the specific phone the court order is referring to. Probably this can be done by writing the code so that it only runs on a phone with a specific hardware identification number—that of the shooter’s phone.
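
Conceptually, that restriction could look something like the sketch below. The identifier and the check are illustrative assumptions, not Apple’s actual firmware logic.

```python
# Hypothetical sketch of gating the unlock capability to a single device.
# TARGET_DEVICE_ID and read_hardware_id() are illustrative placeholders.

TARGET_DEVICE_ID = "EXAMPLE-ID-OF-TARGET-PHONE"

def read_hardware_id() -> str:
    """Stand-in for reading the device's unique, burned-in hardware identifier."""
    raise NotImplementedError  # would be supplied by the device itself

def unlock_mode_allowed() -> bool:
    """Enable the passcode-bypass behavior only on the one targeted device."""
    return read_hardware_id() == TARGET_DEVICE_ID
```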

Finally, this new software will be loaded onto the shooter’s phone using the “software update” mechanism that iPhone users are so familiar with. However, for an iPhone to accept a software update, the software needs to be cryptographically signed by Apple. So Apple is being asked to write and cryptographically sign software that will allow the FBI to unlock the shooter’s phone.

What’s a cryptographic signature?

Think of it like this: in ancient Egypt, the pharaoh would write a letter and then use his signet ring to stamp a wax seal on it. That meant that everything enclosed inside the wax seal—everything inside that letter—was written by the pharaoh, and only by him. That makes the pharaoh’s signet ring an incredibly valuable object. Anyone holding the signet ring could issue decrees in the name of the pharaoh.

A cryptographic signature is sort of the digital equivalent. The software being signed is like the decree from the pharaoh, the cryptographic signature on the software is like the wax seal, and the cryptographic signing key is like the pharaoh’s signet ring.

By cryptographically signing the software, Apple is certifying that the software is written by Apple and only by Apple. Even changing a single line in the code would stop the signature from validating.
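
As a rough illustration of that last point, the snippet below uses the third-party Python cryptography package and an Ed25519 key pair. It is a generic digital-signature example, not Apple’s actual code-signing scheme or key material.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # the "signet ring" (kept secret)
verify_key = signing_key.public_key()        # what a device would use to check the seal

update = b"if failed_attempts > 10: lock_device()\n"   # stand-in for a software update
signature = signing_key.sign(update)                   # the "wax seal"

verify_key.verify(signature, update)   # no exception: the untampered update is accepted

tampered = update.replace(b"lock_device()", b"pass")   # change a single line
try:
    verify_key.verify(signature, tampered)
except InvalidSignature:
    print("Signature check failed: the update was modified after it was signed.")
```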

Shouldn’t these steps—checking the specific phone’s hardware ID number and that the code is cryptographically signed by Apple—ensure the code can’t be used on any other phone?

Ideally, yes. But in reality, things could be very different.

For one thing, an attacker might be able to tamper with the hardware identification number on an innocent user’s iPhone, changing it so that it matches that of the shooter’s phone. This hardware attack would allow the code written for the shooter’s phone to run on another user’s iPhone.

For another, there could be a bug that allows the code to run even if the signature does not validate. Someone could exploit a bug like this to change the code so that it runs on another phone. This might sound far-fetched, but this kind of bug has happened in code written by Apple. It’s called the “goto fail” bug, and it broke the cryptographic code used by iPhones for network communications.

There could also be a weakness in the algorithm used to create the cryptographic signature. Then the signature would validate even if the code were changed. Again, someone could exploit this to change the code so that it runs on another phone. This exact bug has also occurred in the past. Attackers were able to load Flame malware as a signed Microsoft Windows update—just like what the FBI is asking Apple to do—because they broke the security of the algorithm used to compute cryptographic signatures on Windows software updates.

I could go on.

The point is that security engineering is really hard. Modern systems are complicated. People make mistakes. This is the whole reason we have software updates in the first place—to fix these mistakes. So by signing a new piece of software that defeats the iPhone’s passcode security, Apple is creating a whole new set of vulnerabilities that attackers could exploit to attack innocent users.

But wouldn’t this code be seen and used only by Apple and the FBI? How could attackers get their hands on it?

There’s a really nice blog post by Jonathan Zdziarski, an expert in forensics, who points out that if this new software Apple is being asked to write is going to be used as a forensics tool, then for its results to be admissible in court it would have to be validated by many independent parties and provided to the defense. And as we know by now, the more people who have access to sensitive information, the more likely it is to be breached.

Would there be additional vulnerabilities introduced, especially each time Apple might be forced to comply with such a court order in the future?

Apple would probably need to write and sign a device-specific piece of code in response to every court order.

Apart from giving more and more people access to this sensitive code, it also means that Apple’s code-signing key would have to be used frequently. That key is extremely valuable. It’s like the pharaoh’s signet ring. If the signing key is stolen or compromised, an attacker would be able to load malicious software onto any Apple phone. The attacker could then make the phone do anything they wanted: turn on the microphone and eavesdrop on conversations, track the user’s movements with its GPS, activate the camera and secretly film the user, and so on.

That’s why Apple likely has extremely robust processes to protect its signing key from theft. The few times a year that iOS software updates are released, the software is likely signed through a slow and painstaking process that involves a very small number of authorized people at Apple.

Now imagine the signing key being used multiple times a month to sign device-specific code for forensic purposes, as the FBI is asking for here. The signing process would need to be run more frequently and involve more people, making it more likely to be attacked. That is a massive attack surface around perhaps the most valuable piece of cryptographic material Apple has: its code-signing key.

Many people say it’s worth some security risks so the FBI can get information that could help catch terrorists and other criminals. What do you think about the tradeoffs?

I think that our efforts to secure the internet are nowhere near where they need to be, and this would be a step backward at a time when we need to improve cybersecurity, not weaken it. This would weaken everyone’s security in hopes of making it easier to trap a few bad actors. But if I were a bad actor, why would I use a product that I know has extra features written in to help law enforcement? I would just go buy some other product, made elsewhere in the world, that I know the FBI can’t hack into.
