How Signal’s Meredith Whittaker Remembers SignalGate: ‘No Fucking Way’


In March of this year, Meredith Whittaker was at her kitchen table in Paris when Signal, the encrypted messaging service she runs, suddenly became an international headline. A colleague sent their group chat the story ricocheting across the globe: “The Trump Administration Accidentally Texted Me Its War Plans.”

Of course, you know the rest: In the piece, The Atlantic’s editor in chief, Jeffrey Goldberg, detailed how he’d been added to a Signal chat about an upcoming military operation in Yemen. Over the following days and weeks, the incident would become known as “SignalGate,” creating a legitimate risk that the fallout would cause people to question Signal’s security, instead of pointing their fingers at the profoundly dubious op-sec of senior-level Trump officials.

That never happened. In fact, Signal’s user numbers grew by leaps and bounds, both in the US and around the world. It’s growth that, Whittaker thinks, is coming at a time when “people are feeling in a much deeper, much more personal way why privacy might be important.”

On this week’s episode of The Big Interview, I talked to Whittaker, who also cofounded the AI Now Institute, about the aftermath of SignalGate, the trajectory of artificial intelligence, and the tech industry’s current relationship with politics.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Meredith Whittaker, welcome to The Big Interview.

MEREDITH WHITTAKER: Nice to see you, Katie.

Nice to see you, too. Brace yourself, we always start these conversations with a little warmup, so I’m going to ask you some very fast questions. Ready?

I am.

OK. Mountains or beach?

Mountains.

What’s the most over-hyped AI buzzword right now?

Agent.

I knew you were gonna say that. What’s the weirdest AI application you’ve ever seen?

A chatbot that pretends to be your friend.

That is weird.

Right?

Weirder every day. If Signal had a mascot, what would it be?

We would never tell you.

What emoji best sums up your philosophy on privacy?

The ghost emoji.

Nice. More secure: handwritten letters or encrypted texts?

Handwritten letters.

Coffee order: simple or complicated?

Simple.

She’s telling the truth. She’s drinking what looks like a very basic coffee right now. If you weren’t working in tech, what would you be doing? What’s your alternate career path?

A poet.

Love that. Someone asked me that once. I don’t wanna name-drop, but it was [New Yorker editor] David Remnick, in a job interview, and I said massage therapist. He was like, “What is wrong with you?”

He’s like, hired.

Yikes. Wow.

I can make that joke. I’m not in your industry.

It’s fine. I’m blushing. OK, so let’s talk a little bit about you so that I can stop talking about that very awkward interview I did with David Remnick.

Interestingly, we don’t know a lot about the early life of Meredith, which I realize is on purpose. You’ve talked about how you’ve decided to keep your personal life private. You decided that at a very early age. If only more people were so careful. Tell me about that decision.

I don’t think it was a conscious decision. It wasn’t a flex. It’s not like I woke up after reading a book on how to live your life and I decided this is who I am and I’m maintaining this firewall. It just seemed weird and creepy. My friends know me. My family knows me.

Yeah.

But I was a teenager when MySpace and Friendster—the proto social networks—were coming up. I came up in chat rooms and, like, Usenet groups, and it was about, like, Is there a persona you could create that’s like funny or weird or skewed? How good are your clap backs? How accurate is your information? Can you roast with the roasters? It had very little to do with an exposé of your personal life.

I think it’s strange that that has become part of normal culture, that everything about you is assumed to be fair game. It’s a question of what led you to keep your personal life and your relationships to yourself? And I’m like, what led us to assume every micro-action and relationship and social context is mineable and tractable and even understandable.

OK. Fair answer. We do know what we do know about you.

We, who’s we?

My producers and myself. We do know you went to UC Berkeley.

I did.

You studied rhetoric and English literature. What did you think you wanted to do?

I thought college was a much easier hustle than working retail and managing bands and scraping together money. I didn’t come from a place where you set down a five-year plan as a 12-year-old and get into school. It was hustling. It’s hard to work retail at first. [College] was just a much easier job, and I was good at school. I was always a big reader. Like that was never hard for me. But it wasn’t a career aspiration. I love reading books. This is the most fun I could have.

Then I graduated, and there were some professors who wanted to push me to grad school, and I was like, “That is not a good bet. I don’t wanna take out any loans.” Like I had this sort of class-based caginess around owing money, and then I started looking for jobs.

I put my résumé up on Monster, which is old LinkedIn, for the youth.

Oh yeah.

And a Google recruiter reached out.

I was about to go there. We’re not gonna spend a ton of time on Google, but you did work at Google. Was that a culture shock? I will say, knowing you now, it is hard to imagine.

It was odd, but the jump from where I came from to the lowest ranks of Google when I started is farther than any jump I could make from my sort of entry-level position at Google to anywhere. It was just a very, very different context, so I didn’t know how abnormal that place was. I was just like, I guess this is what a business job is like.

We sit a lot. You do not have a social life. You get up at 6:00, you take the shuttle from Berkeley to Mountain View, and then you get home around 9:00. But also it’s like a feral, teeming environment. At that point, when I started in 2006, it was a very different tech context. There was a huge amount of money, but Google had not entrenched itself as a dominant player.

Right, right.

These were sort of platforms trying to monetize attention in various ways, and there was a real belief at the core of that culture that we had finally found the formula for being ethical billionaires: we unlocked the good kind of capitalism and this was it. The world would be transformed. And I was like, whoa, this is interesting. The people working in tech along with me hadn’t gone into that field largely because it was a money field. They’d gone into it because they were like wooly little nerds who loved circuits.

Right.

There was a huge amount of tolerance for weirdos and nerds and strange, odd people, which I love. That’s my crew. In that way, it was a pretty fun environment because if you worked at Bell Labs, you were hired. If you were working on some core protocol, you were hired. So a lot of very smart people who were genuinely interested in what they did were roaming around with money and time and a lot of permission to do stuff.

The lucky part about coming where I came from is I didn’t realize those checks weren’t written to me. So I was walking around trying to cash them everywhere, right? I was like, “Well, I wanna do that.” And they were like, “Well, you know what? We didn’t hire you particularly to do that. You did not invent a core protocol, but you’re in the kitchen, you’re cooking. No one likes conflict in this environment. So I guess we’ll let you proceed.”

One of the interesting things about your time at Google, to me, is that you watched the evolution of AI. You have actually been warning about the implications of AI for many, many years. Can you talk a little bit about those earlier years looking at the evolution of AI and looking out into the future, and seeing things that we are now experiencing? I mean, did you feel crazy? Were you looked at as crazy to be sounding those alarms early on?

Yeah. I mean, I’ve never felt crazy. The closest I think was like, “Oh, I must not be clear if this isn’t landing. How do I say this more clearly? Because obviously if people heard it, we would act, right?” And that’s a more naive version of myself. But you know, I started at Google in 2006 and then I cofounded an effort called M-Lab that was sort of large-scale network measurement, basically measuring your internet performance. Like why isn’t Comcast working? This was an effort to create data that could live in the public domain, that could allow us to understand net neutrality, which was a big topic back then. Also just to have a public source of information around how these core infrastructures, these telecommunication networks, are working.

In that process, I got really sensitized to the process of creating data, right? Like the millions of editorial choices that go into how are we designing a methodology that will create data that we can be comfortable saying is a proxy for a very complex reality? I was already backed into just how contingent and ultimately editorialized data is. It’s not a right, flat reflection of the facts of our universe. It’s a series of choices that people have made that reflect their positions, their interests, the job they have, right? Why was Google paying me to do this? Well, it had an interest in this data set existing, right?

They were paying $40 million a year just in bandwidth to help sustain this infrastructure around the world. Why would they pay that? Because that dataset was doing something for them in the world. And so that’s happening during the 2000s into the early 2010s. Machine learning was around, machine learning was one of a number of statistical techniques that usually lived between ads and research to help optimize advertising auctions and ad placement, whatever. But it wasn’t the Godhead, right?

Yeah, yeah.

It was like a thing some people did. Then in the early- to mid-2010s I started seeing machine learning courses pop up. It became a thing that would move from a little tech talk around a machine-learning innovation to the main stage of Google’s all-company weekly meeting.

There was this meeting I had at that time. I was managing a pretty big budget, running the open research group, which was collaborating with a lot of academics and open source projects around big issues that spanned beyond Google. It was a lot of fun. I was funding a number of projects so people would kind of come in and pitch me.

There was this team from Harvard that came in wanting me to fund an effort to create a kind of machine-learning-based genocide detection algorithm.

Wow. OK.

You know, this is, I don’t know, 2015. Something around that. 2014. And I was like, “Wait, what? How do you define that?”

How do you [argue] that with data? If you look back at the petition that accompanied the UN adopting Lemkin’s definition of genocide post-World War II, there’s always been contestation around who gets to claim that term and who doesn’t get to claim that term from the very beginning.

I’m asking these questions and I’m getting no answers. It’s just sort of the math will solve it. We will build a model. We think we can get the data … And it’s like, “Well, census data across countries isn’t even fungible. It’s collected in different ways. It represents totally different methodologies. How would you do this?”

But that, to me, was this moment where I was like, OK, what are we actually doing here?

Yeah.

Because I’m seeing this sort of thing pop up around the company, claiming to be able to predict, detect, and do things, and I know the data they would be using is way shoddier than the M-Lab data. Like I know how obsessive we are about methodological rigor for measurements that are very low level, arguably some of the most “objective,” in quotes, measurements that you could do, and still they’re fuzzy and we’re constantly having to make editorial choices.

Now you’re saying you can model this with social data, with census data, with data that is in no way as rigorous or as easily objectifiable, to misuse that term? That was the catalyst for me beginning to get interested in that field, to read a lot and recognize that what was happening was some sleight-of-hand and had some pretty significant social risks.

So then fast-forward to the year 2025. I can’t get through half an hour in my job without hearing about artificial intelligence. It has taken over the world—not in practice, but optically. What is your assessment of the state of play?

One, AI: It’s not a technical term of art, and that’s very convenient for this market, right?

It was a term invented in 1956 by John McCarthy, who was a cognitive computer scientist, and he wanted grant money. He wanted to be the father of his own field, right? So that term was invented, but it was actually not referring to the neural network paradigm we’re now using. It was referring to these sort of symbolic systems, which at that point were opposed to the neural network approach.

The connectionist approach, which predated the term artificial intelligence by over a decade. So neural networks, it was like ’43. You had the term “artificial intelligence” invented around ’56. It didn’t apply at that point to neural networks. It’s kind of this blanket that has been thrown over a rotating set of technical approaches that, in this current moment, refers to these very, very, very, very, very large-scale models that require huge amounts of compute, and huge amounts of data, and certainly do really, really impressive things like trick me into thinking they love me or whatever.

Right. Right.

This blanket could arguably be thrown over any other paradigm, right? If we decide that AI is sort of describing something else. That’s a long way into saying I do think there is a threshold at which the metastatic logic of the current AI moment—bigger, bigger, bigger, bigger, bigger, all the water, all the energy, all the data, all the compute—is gonna give out.

You have climate thresholds. You have thresholds on how many data centers you can build. Does that mean there’s actually sort of a faltering in the AI market? I don’t know. I think one of the reasons we’re seeing this kind of glassy-eyed embrace of so-called AI agents is that they’re really trying to find this consumer market fit. In a sense that is just new branding. Wrapping everything we used to call “assistants” in a new brand term and kind of hoping we can make that work. But there is a point at which the trillions of dollars that are being poured into AI, you know, either will or will not find that killer application and begin to break even.

We have not reached that point. And so I think that there is a threshold coming where the market will rein in the leash, and I don’t actually know what happens then. Do we just shift what’s under the hood of the definition?

Move the goalposts?

Yeah.

Now I wanna talk about Signal, which seems like a relevant thing to discuss with you.

I love talking about Signal.

Good. I’m gonna start by bringing AI into it, because I remember we had coffee six months ago and I remember you talking to me about the risks of this moment in AI for Signal, particularly with agentic AI. What does all of this mean for Signal?

The first answer to that is we are seeing steady growth. I think we are in a moment where the stakes around privacy and private data are coming home to people on a personal, emotional, nervous-system level. It’s not an abstract concern that you gotta get your laundry done and then maybe you can think about that. It’s data breach, data breach, Salt Typhoon, geopolitical fracturing, what have you.

So I would say it’s a good moment for Signal, and we are very proud that we are there and ready, and anyone who needs us can download it and use it. This is infrastructure that is in place and ready to meet the moment. On the other hand, there are real dangers to Signal in this environment. Particularly, there is, I would say, misguided or malignant legislation that is continuously proposed that would ultimately make it impossible for Signal to provide the robust privacy guarantees that are our entire thing. Undermining encryption, giving governments access, things that just completely nullify the point of Signal’s end-to-end encryption. There are AI agents that do things for you autonomously, and in order to do that need access to all of your data, your apps, et cetera, in a way that is pretty unprecedented.

The issue for Signal is these agents are increasingly being integrated into the operating system—whether it’s iOS or Android or Windows—in ways that, in order to work, they need access to your data. So you have these systems that are offering to, say, book a restaurant for your friends, tell your friends the restaurant is booked, and then put it on all your calendars, right?

Yeah.

In order to do that, they need access to your browser, access to your calendar. They can see everything else that’s going on there, who you’re talking to. Access to your credit cards to put the down payment down for the restaurant. Access to your, let’s say, Signal in order to message your friends, access your group chats, text your friends on your behalf.

So ultimately what we’re talking about is a backdoor, and we’ve already seen this in the context of WhatsApp: There was a report from Lumia, a security research firm, that showed that, you know, WhatsApp messages were being sent to Siri and back as part of the latest Apple Intelligence rollout. To me, that is nullifying WhatsApp’s end-to-end encryption promises.

Given that environment, how does Signal ensure that what happened with WhatsApp doesn’t happen with Signal?

Signal is doing whatever we can. You know, the future of total infiltration and privacy nullification via agents on the operating system is not here yet, but that is what is being pushed by these companies without the ability for developers to opt out.

So we have spoken out very strongly about the existential threat that this provides to Signal and application layer privacy generally. What we’re calling for is very clear developer-level opt-outs to say, “Do not fucking touch us if you’re an agent.” Signal is entrusted with some of the most high-stakes communications in the world, like real life-or-death communications—militaries, governments, every boardroom, human rights workers, dissidents.

Ultimately, if you have high-stakes communication, you are almost certainly on Signal. That includes many of the executives at these companies. That includes many of the governments that they work with. So we believe there is a window of interest convergence here where this is not a gamble we should be making.

I hate to read your own writing back to you, but I will. In 2021, you coauthored a piece in The Nation where you warned at length about the risks of Big Tech being leveraged by authoritarian forces. There were a lot of lines that stood out to me, but I’m gonna read this one to you: “The neoliberal bargain is fraying, and if we don’t vie for control over the algorithms, data, and infrastructure that are shaping our lives, we face a grim future. It is time to rally behind a militant strategy that recognizes the danger of leaving US tech capitalists at the helm of systems of social control while far-right authoritarians jockey for access.”

I read that earlier this week, and in the context of what’s happening in the US right now, it was jarring to read that four years later. What’s your assessment of the state of the tech industry now vis-à-vis the administration? What stands out to you? What worries you?

This hearkens back to my longstanding commitment to open source, to transparency. I don’t think it should be an idea. I think it should be common sense that the core infrastructure that our global economies, governments, and social relationships increasingly rely on should be open for scrutiny, should have some form of democratic governance. They should not be in the hands of concentrated entities, whether governments or companies, and I think that that remains true.

That’s one of the reasons Signal is very staunchly open source. We do not want you to take my word for it.

Yep.

So I think it’s odd to me that we’re now in a situation where we’ve become numb to what should not be easily accepted, which is that government infrastructure, our communications, our intimate communications, our lives and social relationships, are observed, surveilled. And determinations about them are made by a handful of companies that shape our access to resources and opportunities. This is why I’ve spoken out about the surveillance business model, which has persisted for a long time, but is being supercharged by the desire for data that AI implies and the production of data that AI also executes.

One of the areas that WIRED has been covering a lot this year is DOGE, and how DOGE and federal agencies in the US are using AI and fast-tracking the implementation of AI technology into federal agencies. From identifying spending cuts to potentially aiding the detection of illegal immigrants. Now that that technology is so much more embedded in federal agencies, what is important for people to be aware of? What should the average American be thinking about right now when it comes to their data?

It is a bit scary. These systems are often inscrutable by design. They’re non-deterministic, so you can’t really know why a certain output was generated at that moment in relation to something. They don’t reason outside the distribution, which is a fancy little way of saying if it’s not in the data that they were trained on, they’re not gonna be able to detect it.

Beyond that, there’s an epistemic concern that we’re trusting these determinations to systems that will give a very confident, plausible-seeming answer, right? That we can’t necessarily scrutinize as true or false, but they will have the power and authority of a final truth claim.

Those systems are being developed by, again, a handful of companies behind veils of trade secrecy, not knowable by the public, you know? There’s really no way to know. Was that an AI system rigorously trained, kind of making a determination that we can’t scrutinize, or was that a guy who’s always wanted that to happen, who can now attribute the right desire to a smart machine that we are not allowed to question, and cannot scrutinize?

Exactly. Well, I thought you might make it sound even scarier and you did. So congratulations.

Um, you’re welcome.

I have to ask you about a story that came up earlier this year. The Atlantic editor in chief, Jeffrey Goldberg, was accidentally included in a Signal thread about a highly sensitive military operation along with a ton of senior administration officials. Incredible scoop, but ultimately a horrible and tragic situation. People died as a result of that military operation, which I think is something we don’t talk about enough in the context of that story. Tell me a little bit about how you all reacted to that in real time. I’m sure you remember where you were. It was that kind of story.

I was at my kitchen table in Paris, where I have a place. Someone in the group chat sent it and I opened it and I read it.

So you guys didn’t have a heads-up?

No, we didn’t get a heads-up, no.

OK.

I certainly wouldn’t have predicted that Signal would become a main character. It’s like, did the road the car drove on become the main character? So I read it, and I remember reading it, and I always have a bunch of tabs up, so there was a real quotidian kind of like, “OK, I read that article …” and I got up to get a glass of water, and then I was like, “Wait, what?” I sat back down and read it again because I was like, no fucking way.

I was waiting for the kind of “gotcha!” Or the part where I was like, “Oh, OK. It was a test, it was a simulation, it was a war game.” It just picked up. It was like a flywheel and suddenly the press inbox was filling up. We’re distributed across multiple time zones, so I have people waking up in California, I have people in other places, and we’re sort of huddling in group chats and little war rooms and we’re like, “OK, well, what do we do now?” And the determination we made was like, “Look, we’re not really part of this story. We’re the infrastructure that was used. We can’t take responsibility for some guy’s thumbs, right? Or …

Intellect.

Yeah. Our lane is narrow and deep. We build the world’s most private communication platform, and that’s what we can speak to, and we don’t fucking know about the rest of it. We don’t want to be part of the story, but what we cannot afford, and what the people who rely on us really can’t afford, is for some misunderstanding around Signal that would paint it as somehow vulnerable or somehow the problem at the center of this. We can’t afford for that to catch wind in a way that would make people believe that Signal wasn’t secure and switch to less secure alternatives and potentially be harmed.

In the meantime, we were seeing the biggest growth moment we’ve seen in the US.

You had this massive spike in downloads, right?

Massive spike, like the biggest we’ve seen in the US, and it has kicked off a global growth moment. So thankfully we have the server capacity to handle that, and we’re getting the kind of publicity that—I don’t think I’ve ever lived through that, like in any organization I’ve been a part of.

That was in March. Where has that surge of attention left you now that the dust has settled a little bit?

Our growth continues to be up. It wasn’t just the US. We’ve seen global growth in little spurts. We’ve seen growth in Europe, growth in South America, some growth in South Asia. So I think there’s sort of a nexus of issues. SignalGate brought it to public attention. So, it’s easier if I’m talking to my dad or, you know, my dad’s friends to say, “Yeah, use Signal.”

There’s name recognition.

I don’t have to go into a three-minute preamble about what it is and why it’s good. There’s a commonsense understanding, and we’re really happy about that. And then that commonsense understanding is being met with a moment where people are feeling in a much deeper, much more personal way why privacy might be important.

And I do think a number of the big companies showing themselves willing to just bow to the political winds [is a factor]. Like, here’s my makeover. Now I’m this guy.

Masculinity, love it.

Love joshing around on the judo floor with my guys.

Sorry, please don’t mistake me for one of those with that convincing bit.

[Laughs]

Five years ago, they were one thing, now they’re another thing. What are they gonna be in four more years? I think that demonstration has not imbued much trust, irrespective of where you stand politically.

Yeah. In four years, if they’re surfing with Gavin Newsom …

They’ll be like wakeboarding …

… they’re still controlling your data. Speaking of wakeboarding and judo, and this is my last question for you, but this is particularly interesting to me as a woman running WIRED: You are a woman in the tech industry. You’re a prominent critic of the tech industry, which of course is largely run by men doing judo, et cetera. We know exactly what kind of guy we’re talking about. What’s hard about that for you? What’s gotten easier, if anything? Was it ever hard for you?

I would say what I don’t like about it is anytime you have a clubby little mindset, you have a lot of bullshit. You have guys just like sniffing the hype glue, and kind of believing in it. They all wanna be liked, they all wanna be included.

Yeah.

They’re pretty emotional ultimately. You don’t wanna be left outta the bro crew. That leads to a lot of this sort of vapid seriousness, a kind of messiness. This is serious, right? If you’re providing core infrastructure that people’s lives depend on, there needs to be some dignity behind that. We need to accept that we need to be empirical, we need to be grounded, we need to be steady, we need to be clear-eyed. This sort of hype, “hope-people-believe-it-so-we-can-exit-before-shit-hits-the-fan” mentality is just a floppy and undignified look, frankly. So that’s the culture that I really dislike. It’s not cool.

Get it together, guys. It’s not cool.

I wanna play a little game to wrap up. We play this every time. It’s called Control, Alt, Delete. I want to know what piece of tech you would love to control, what piece you would alt, so alter or change, and what you would delete. What would you vanquish from the earth?

Hmm.

I know.

All right. Control. Well, I wouldn’t want to be personally in control. I would like the, you know, let’s say the core operating systems, the core infrastructures or the core libraries to be controlled and stewarded. Let’s say stewarded, by a well-resourced group of people who are working in the public benefit to make sure it is as robust and secure and fit for purpose as possible.

Beautiful idea. Alter?

Alter transportation infrastructure. Moving away from an individual-car-based system to something that is, again, more fun, more workable. So yeah, let’s say transportation infrastructure.

What are we deleting?

I really wish I didn’t have to carry smartphones everywhere.

Oh, that’s a big one.

Yeah. It doesn’t mean I don’t use the things that run on a smartphone, but the way that life has been shaped around having to always be available, the assumption that we all make about each other, that if the text doesn’t come back in three seconds, someone must be lost in the mountains or really mad at you.

Right. Whatever you could do to strip that away and make life a little more spacious, I think would be lovely.


How to Listen

You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “Uncanny Valley.” We’re on Spotify too.
