Google is bringing Gemini, its generative AI, to all cars that support Android Auto in the next few months, the company announced at its Android Show ahead of its 2025 I/O developer conference.
In a blog post, the company said that adding Gemini functionality to Android Auto, and later this year to cars that run Google's built-in operating system, will make driving "more productive — and fun."
“This is really going to be, we think, one of the largest transformations in the in-vehicle experience that we’ve seen in a very, very long time,” Patrick Brady, the VP of Android for Cars, said during a virtual briefing with members of the media ahead of the conference.
Gemini will surface in the Android Auto experience in two main ways.
First, Gemini will act as a much more powerful smart voice assistant. Drivers (or passengers; Brady said Gemini is not voice-matched to whoever owns the phone running Android Auto) will be able to ask Gemini to send texts, play music, and do essentially everything Google Assistant could already do. The difference is that users won't have to be so robotic with their commands, thanks to Gemini's natural language capabilities.
Gemini can also "remember" things, like whether a contact prefers to receive text messages in a particular language, and handle that translation for the user. Google also claims Gemini will be capable of one of the most commonly paraded in-car tech demos: finding good restaurants along a planned route. Brady said Gemini will be able to mine Google listings and reviews to respond to more specific requests, like "taco places with vegan options."
The other main way Gemini will surface is through what Google is calling "Gemini Live," an option in which the AI is essentially always listening and ready to engage in full conversations about … whatever. Brady said those conversations could range from travel ideas for spring break, to brainstorming recipes a 10-year-old would like, to "Roman history."
If that all sounds a bit distracting, Brady said Google believes it won’t be. He claimed the natural language capabilities will make it easier to ask Android Auto to do specific tasks with less fuss, and therefore Gemini will “reduce cognitive load.”
It’s a bold claim to make at a time when people are clamoring for car companies to move away from touchscreens and bring back physical knobs and buttons — a request many of those companies are starting to grant.
There’s a lot still being sorted out. For now, Gemini will leverage Google’s cloud processing to operate in both Android Auto and on cars with Google Built-In. But Brady said Google is working with automakers “to build in more compute so that [Gemini] can run at the edge,” which would help not only with performance but with reliability, a challenging factor in a moving vehicle that may be latching onto new cell towers every few minutes.
Modern cars also generate a lot of data from onboard sensors, and on some models, even interior and exterior cameras. Brady said Google has “nothing to announce” about whether Gemini could leverage that multi-modal data, though he added that “we’ve been talking about that a lot.”
“We definitely think as cars have more and more cameras, there’s some really, really interesting use cases in the future here,” he said.
Gemini on Android Auto and Google Built-In will come to all countries that already have access to the company’s generative AI model, and will support more than 40 languages.
Sean O’Kane is a reporter who has spent a decade covering the rapidly-evolving business and technology of the transportation industry, including Tesla and the many startups chasing Elon Musk. Most recently, he was a reporter at Bloomberg News where he helped break stories about some of the most notorious EV SPAC flops. He previously worked at The Verge, where he also covered consumer technology, hosted many short- and long-form videos, performed product and editorial photography, and once nearly passed out in a Red Bull Air Race plane.