WhatsApp Is Gambling That It Can Add AI Features Without Compromising Privacy


The end-to-end encrypted communication app WhatsApp, used by roughly 3 billion people around the world, will roll out cloud-based AI capabilities in the coming weeks that are designed to preserve WhatsApp’s defining security and privacy guarantees while offering users access to message summarization and composition tools.

Meta has been incorporating generative AI features across its services that are built on its open source large language model, Llama. And WhatsApp already incorporates a light blue circle that gives users access to the Meta AI assistant. But many users have balked at this addition, given that interactions with the AI assistant aren’t shielded from Meta the way end-to-end encrypted WhatsApp chats are. The new feature, dubbed Private Processing, is meant to address these concerns with what the company says is a carefully architected and purpose-built platform devoted to processing data for AI tasks without the information being accessible to Meta, WhatsApp, or any other party. While initial reviews by researchers of the scheme’s integrity have been positive, some note that the move toward AI features could ultimately put WhatsApp on a slippery slope.

“WhatsApp is targeted and looked at by lots of different researchers and threat actors. That means internally it has a well understood threat model,” says Meta security engineering director Chris Rohlf. “There’s also an existing set of privacy expectations from users, so this wasn’t just about managing the expansion of that threat model and making sure the expectations for privacy and security were met—it was about careful consideration of the user experience and making this opt-in.”

End-to-end encrypted communications are only accessible to the sender and receiver, or the people in a group chat. The service provider, in this case WhatsApp and its parent company Meta, is boxed out by design and can’t access users’ messages or calls. This setup is incompatible with typical generative AI platforms that run large language models on cloud servers and need access to users’ requests and data for processing. The goal of Private Processing is to create an alternate framework through which the privacy and security guarantees of end-to-end encrypted communication can be upheld while incorporating AI.

Users opt into using WhatsApp’s AI features, and they can also prevent people they’re chatting with from using the AI features in shared communications by turning on a new WhatsApp control known as “Advanced Chat Privacy.”

“When the setting is on, you can block others from exporting chats, auto-downloading media to their phone, and using messages for AI features,” WhatsApp wrote in a blog post last week. Like disappearing messages, anyone in a chat can turn Advanced Chat Privacy on and off—which is recorded for all to see—so participants just need to be mindful of any adjustments.

Private Processing is built with special hardware that isolates sensitive data in a “Trusted Execution Environment,” a siloed, locked-down region of a processor. The system is built to process and retain data for the minimum amount of time possible and is designed to grind to a halt and send alerts if it detects any tampering or adjustments. WhatsApp is already inviting third-party audits of different components of the system and will make it part of the Meta bug bounty program to encourage the security community to submit information about flaws and potential vulnerabilities. Meta also says that, ultimately, it plans to make the components of Private Processing open source, both for expanded verification of its security and privacy guarantees and to make it easier for others to build similar services.

Last year, Apple debuted a similar scheme, known as Private Cloud Compute, for its Apple Intelligence AI platform. And users can turn the service on in Apple’s end-to-end encrypted communication app, Messages, to generate message summaries and compose “Smart Reply” messages on both iPhones and Macs.

Looking at Private Cloud Compute and Private Processing side by side is like comparing, well, Apple(s) and oranges, though. Apple’s Private Cloud Compute underpins all of Apple Intelligence everywhere it can be applied. Private Processing, on the other hand, was purpose-built for WhatsApp and doesn’t underpin Meta’s AI features more broadly. Apple Intelligence is also designed to do as much AI processing as possible on-device and only send requests to the Private Cloud Compute infrastructure when necessary. Since such “on device” or “local” processing requires powerful hardware, Apple only designed Apple Intelligence to run at all on its recent generations of mobile hardware. Old iPhones and iPads will never support Apple Intelligence.

Apple is a manufacturer of high-end smartphones and other hardware, while Meta is a software company whose roughly 3 billion users carry all types of smartphones, including old and low-end devices. Rohlf and Colin Clemmons, one of the Private Processing lead engineers, say that it wasn’t feasible to design AI features for WhatsApp that could run locally on the spectrum of devices WhatsApp serves. Instead, WhatsApp focused on designing Private Processing to be as unhelpful as possible to attackers if it were to be breached.

“The design is one of risk minimization,” Clemmons says. “We want to minimize the value of compromising the system.”

The whole effort raises a more basic question, though, about why a secure communication platform like WhatsApp needs to offer AI features at all. Meta is adamant that users expect the features at this point and will go wherever they must to get them.

“Many people want to use AI tools to help them when they are messaging,” WhatsApp head Will Cathcart told WIRED in an email. “We think building a private way to do that is important, because people shouldn’t have to switch to a less-private platform to have the functionality they need.”

“Any end-to-end encrypted system that uses off-device AI inference is going to be riskier than a pure end-to-end system. You’re sending data to a computer in a data center, and that machine sees your private texts,” says Matt Green, a Johns Hopkins cryptographer who previewed some of the privacy guarantees of Private Processing, but hasn’t audited the complete system. “I believe WhatsApp when they say that they’ve designed this to be as secure as possible, and I believe them when they say that they can’t read your texts. But I also think there are risks here. More private data will go off device, and the machines that process this data will be a target for hackers and nation-state adversaries.”

WhatsApp also says that beyond basic AI features like text summarization and writing suggestions, it hopes Private Processing will lay a foundation for more complicated and involved AI features in the future that involve processing, and potentially storing, more data.

As Green puts it, “Given all the crazy things people use secure messengers for, any and all of this will make the Private Processing computers into a very big target.”
