
Signal’s Meredith Whittaker: Breaking encryption while preserving privacy is ‘magical thinking’

The Signal president spoke with CyberScoop about AI, encryption and the growing threat to privacy.
Signal President Meredith Whittaker poses for a photograph before an interview at the Web Summit, Europe’s largest tech conference, in Lisbon on November 4, 2022. (Patricia de Melo Moreira / AFP via Getty Images)

With tens of millions of users around the globe, Signal is one of the leading messaging apps offering end-to-end encryption to protect messages from snoops. As a result, it’s also become a target of lawmakers seeking to undermine the technology.

Tasked with overseeing a staff of a little over 40 at the small nonprofit that keeps the app running, Signal President Meredith Whittaker is the organization’s voice in fighting policies that threaten users’ privacy. And lately, there’s been no shortage of risks. Recently, Signal joined critics including Meta in calling for changes to the U.K. Online Safety Bill, which is expected to soon reach a final vote. The bill would empower regulators to mandate client-side scanning, a technology that scans people’s private communications on their devices, matching them against a database of content considered objectionable.
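To make the mechanism concrete, here is a minimal, hypothetical sketch of the hash-matching flow that client-side scanning proposals describe. It is not any vendor’s actual implementation; deployed proposals (such as Apple’s shelved 2021 CSAM-detection plan) use perceptual rather than exact hashes, but the order of operations is the point: the check runs on the user’s device, against a database the user cannot inspect, before any encryption is applied.

```python
import hashlib

# Stand-in for the opaque, centrally distributed database of disallowed
# content (real proposals use perceptual hashes, not SHA-256 digests).
BLOCKLIST = {hashlib.sha256(b"example disallowed content").hexdigest()}

def scan_before_send(plaintext: bytes) -> bool:
    """Return True if the message matches an entry in the blocklist."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def send_message(plaintext: bytes) -> None:
    # The defining property of client-side scanning: this check happens on
    # the device, before end-to-end encryption is applied to the message.
    if scan_before_send(plaintext):
        print("flagged: matched the database (a real system would report this)")
        return
    print("sent (encryption elided in this sketch)")

send_message(b"hello")                       # sent
send_message(b"example disallowed content")  # flagged
```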

A prominent artificial intelligence researcher who worked at Google before co-founding the AI Now Institute, Whittaker is also a leading voice warning about the technology’s potential harms to civil liberties. CyberScoop spoke with Whittaker about global threats to encryption and how AI hype may be fueling “magical thinking” among governments seeking to circumvent encrypted technologies.

This conversation has been edited for clarity and length.


Some people say we’re entering another stage of the encryption wars. How would you describe the moment we’re in?

The threat is very real and very immediate. In my time, I have not seen a greater threat. And I think it is necessary that we push back and clarify the terms. Now, if we’re going to frame this as a war, I don’t think it’s a war we’re ever going to definitively win, because what we’re dealing with is not a misunderstanding of how technology works; at least, it is not based on a misunderstanding. We’re not going to convince those in power that they should give up their pursuit of information asymmetry as a tool of power, which is effectively what surveillance generates for those who surveil over those who are surveilled. The will to centralize power has come with the will to manage, surveil and socially control populations, and I don’t think that nucleus is going away. But right now I think we’re in a pretty significant moment. It’s really important that we continue to clarify the terms and, ultimately, win on this, or we could be facing a scenario where the possibility of private digital communications is all but eliminated.

What do you mean when you say, “clarify the terms”?

There is a vein of magical thinking claiming that technologies like client-side scanning will somehow be able to surveil everyone’s communications on behalf of the government, have some mixture of private entities determine whether those communications are considered acceptable or not, take action on that determination, and somehow do all of this privately. I think we need to be very clear that there’s no way to implement a safe backdoor. Artificial intelligence — whatever you have been led to believe by the marketing of these companies — is not in fact capable of magic.

The claims that are being made about what is possible via this type of surveillance are in fact not grounded in reality. There is some myth-busting and some deflating of the hype that needs to happen. And then I think there’s a need to put this in a more grounded historical context that does recognize the stakes of creating a system that allows the government to effectively monitor everyone’s private communication all the time. The pretext for that monitoring may change. But that’s something we really need to emphasize — just how dangerous that kind of regime could be.  


We’re post-Dobbs now. We’ve already seen Jessica Burgess, a mother in Nebraska, charged with a felony for helping her daughter access reproductive care after the state suddenly banned it. And the evidence that led to that charge was Facebook messages. We have a sense of how this could be used in a world where people’s identities are criminalized, and where people’s access to information is criminalized. That really needs to be a much bigger part of the debate.

What do you think are the most imminent threats to encryption?

I’m certainly keeping a close eye on the encryption provisions in the U.K.’s Online Safety Bill. I should be clear that the bill itself is a kind of omnibus collection of a number of different provisions, and some of them are good. Researchers having access to tech company data is really positive, and we should not throw that out. However, there are some really troubling provisions in there that would give the U.K.’s telecom and competition regulator the ability to mandate government-approved scanning technology on everyone’s device. That would implement a regime of mass surveillance: people’s communications would be checked, before they were sent, against an opaque database of unacceptable speech, very likely using some variant of artificial intelligence or machine learning models to detect impermissible content and take action on those detections. That is absolutely unacceptable. It would be a total evisceration of the right to privacy, in addition to standing up an extraordinarily expensive, unworkable regime.
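To illustrate the variant Whittaker describes here, below is a deliberately crude sketch in which an opaque scoring function gates every message before it is sent. The keyword scorer and the threshold are hypothetical stand-ins for a machine learning model and an operator-chosen policy; the structural point is that neither is auditable by the user, and any such classifier will produce false positives that are acted on before a message ever leaves the device.

```python
# Hypothetical policy threshold, set by the operator, not the user.
THRESHOLD = 0.5

def classifier_score(text: str) -> float:
    """Toy stand-in for an opaque model's 'impermissible content' score."""
    flagged_terms = {"forbidden"}  # in reality: learned weights, not a word list
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w in flagged_terms) / len(words)

def gate_message(text: str) -> bool:
    """True means the client blocks or reports the message before sending."""
    return classifier_score(text) >= THRESHOLD

print(gate_message("forbidden"))                    # True: acted on pre-send
print(gate_message("a perfectly ordinary message")) # False: allowed through
```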

You mentioned client-side scanning, which has gained a lot of traction as a workaround for encryption. Why do you think it’s gained popularity?

The kind of moment we’re in, where there’s so much ungrounded AI hype, is contributing to this. You have tech executives coming on stage saying they believe these systems are conscious. We have many, many different companies and many, many different “fathers of AI” making claims about the capabilities of these systems that simply are not grounded in reality. And so this creates an atmosphere in which it’s not hard to understand why someone not familiar with the material details of these technologies might believe: “Oh, I guess if AI can think better than a human being, why couldn’t client-side scanning accomplish the impossible by scanning content privately?” And I think there’s a suspension of disbelief that has happened because we’re inundated with baseless claims. And so why wouldn’t this other baseless claim also be true?


In the U.S., we’ve seen anti-encryption rhetoric, especially around child sexual abuse material (CSAM). Do you see things getting worse in the U.S.?

There’s certainly always been a will from some parts of law enforcement to break or undermine encryption. This isn’t new. I am tracking the potential resurgence of the EARN IT Act and the Kids Online Safety Act. I think the age verification bills are really concerning. To verify someone’s age, you have to have information about their identity, you have to be able to verify that identity and you have to effectively create a kind of surveillance database.

The focus is on the U.K. because it is the furthest along. And what we do know from tech policy is that precedent is extremely powerful. Precedent gets copied and pasted by governments around the world pretty quickly, because regulating new technology is seen as difficult and risky.

Part of the encryption backlash is the idea that it facilitates CSAM. How do you respond to such an emotional, visceral argument against encryption?

I think you need to face it. It’s not like I can avoid it or say, “We’re not talking about that, we’re talking about math.” That’s not really addressing the issue. But it very quickly becomes a frame where it’s almost like all child abuse is caused by online [activity]. The frame of the problem is suddenly technological, so the frame of the solution is, of course, technological, right? And all of this ignores the fact that there are children suffering in the real world, and they need help.


The majority of abuse happens in families and when it doesn’t happen in families it is largely perpetrated by an adult who is an authority figure tasked with caring for a child in some form. That’s not happening online. That’s happening in the real world.

There are dynamics here that are really, really grim, that we do need to look in the face if we’re going to address this. And I think in some ways, abstracting this online and making it a problem of the technology and of the “tech boogeyman” is a way of avoiding looking those dynamics in the face.

Encrypted messaging services have banded together in opposition to the U.K. law. How do you navigate being in solidarity with a company like Meta, which has a very different business model from Signal’s?

Extraordinary threats call into being unusual coalitions. Solidarity is a very particular practice; I don’t know if this is solidarity. But in this moment, faced with these threats, we share a common interest. At the same time, this doesn’t mean I’m any less critical of Meta’s surveillance practices. It doesn’t mean that I’m not going to call out WhatsApp for advertising itself as truly private when it continues to collect metadata that could easily be joined with Facebook data. Where we are right now is that there’s a real threat to the ability to communicate privately. And one of the scant handful of good things Meta has done is that it has not ripped out the Signal protocol that WhatsApp integrated right before it was sold to Meta.

If they’re fighting to ensure that the historical norm of privacy continues to be available for human conversations, even as many of our communications have moved online, then we’re fighting that battle, too. But that doesn’t imply anything else.
