Google just held its big I/O developer shindig and product announcement event today in California. As it has for the past couple of years, Google continued to go all in on AI. In the I/O keynote address, Google executives announced new AI features coming to absolutely everything. It’s in Android, Search, Gemini, and—sooner or later—a pair of Google’s smart glasses.
We’ve already blogged all about the event as it happened live. Here’s everything Google announced at I/O.
Gemini Juices Up
Google Assistant, the occasionally useful digital servant at the bottom of your phone, is being replaced, more or less everywhere, by Google’s AI usurper, Gemini.
Google announced a slew of updates to Gemini, the most impressive of which is Gemini Live. This new feature combines input from your phone’s camera, voice commands, and an agent-like ability to search the web, make phone calls, and collate information for you. It’s an extension of the experiments we saw last year that were code-named Project Astra, where Google’s machine intelligence engine can describe what it sees through a connected camera, remember key facts about your environment, and do hands-free tasks that you ask it to do by chatting with it in a natural way.
Gemini is making its way into Google’s suite of productivity apps too. The most notable addition is a new Gmail feature called Personalized Smart Replies, which uses AI to absorb your personal writing style and preferred syntax, culled from all of your notes, emails, docs, and spreadsheets, and then generates long replies to emails that match your personal voice.
“With personal smart replies, I can be a better friend,” Alphabet CEO Sundar Pichai said onstage during his keynote, explaining how he uses the feature to draft replies to questions from friends that he’d otherwise skip because he’s too busy being a CEO.
For even more about Google’s new Gemini features, read Will Knight’s WIRED interview with Google DeepMind CEO Demis Hassabis.
There’s a new subscription tier for Gemini power users.
Courtesy of Google
Some of these Gemini features will be coming to users of Android and Google’s web apps for free, but others (including the more powerful feature sets) will require a paid subscription. Google’s $20-per-month AI Premium service has been renamed Google AI Pro; the cost stays the same, though its feature set is now more limited than the top tier’s. Google AI Ultra, the company’s full suite of AI services, costs $250 per month. That’s $50 per month more expensive than OpenAI’s similar full-suite plan, ChatGPT Pro.
Gemini Is an Artist, Actually
Creative professionals and programmers, take note: Google’s enhancements to its creative tools will either make your job easier and more productive or render you obsolete.
Jules is an “asynchronous coding agent” that aims to let you take a rough design scribbled on a napkin and turn it into a full-fledged illustration or working code, showing you the work it did along the way.
There’s also a new version of Google’s AI image generator, called Imagen 4, that Google claims can render much more detail in images, like textures in AI-generated paintings or custom text on music posters.
Google also has some new generative AI video tools, like Flow, a tool made specifically for AI movie creation. It lets you upload photos or illustrations of characters, props, or scenery, then animate it all into a short movie using text prompts. If you don’t have photos, you can type a generative prompt to create the visuals right inside Flow, then build a narrative video scene by scene by describing the action in a text box. The company demonstrated Flow with a generated video of an old man whose car is made to fly by a giant chicken in the backseat. The video didn’t look great and felt weirdly plastic, but it got the point across.
Also included in the update is an enhanced video generator called Veo 3 that Google says has a better understanding of material physics for smoother and more lifelike animations.
Search Goes Full AI Mode
Last year at I/O, Google unleashed its AI Overviews enhancement to search results, a feature that summarizes results from across the web at the top of the screen for some queries. The results were famously uneven, ranging from plain busted answers to hilarious hallucinations to outright plagiarism. Nevertheless, Google is now giving its search experience an even shinier AI sheen.
To that end, Google is making search much more chatbot-oriented with its new AI Mode. This search feature was first announced in March 2025 as an experiment, and it’s now part of the default Google search experience for everyone in the US. AI Mode appears as a tab in your search results, so you can switch over to it with a click if it’s available.
Google says AI Mode is designed to handle more complicated searches, the kind that fold several questions into a single query. It won’t give better results for everything, but trickier queries should return more satisfying answers.
The new AI Mode will also serve as a shopping assistant: the search tool can help you shop for clothes and, once you upload a photo of yourself, show you how a garment would look on you in a “virtual try-on” image. This experience is rolling out as a Labs feature, so it’s still an experiment, but Google says it will continue to develop the tech. It works for more than clothes, too. If you’re shopping for a rug, for example, you can ask for one that’s good for kids or pets, then use augmented reality to see how it might look in your room.
Read more about all the new stuff coming to AI Mode in Reece Rogers’ WIRED story.
Android XR Looks Ahead
Like nearly every tech company out there these days, Google is investing heavily in its glasses game. Under the purview of its Android XR efforts, Google is simultaneously working on full-size mixed-reality headsets and on slimmer eyewear that looks just like regular glasses.
The company held a live demo showing off the capabilities of its Android XR glasses. A pair of Googlers wore prototypes of the glasses around the event, streaming an augmented-reality display that showed texts, maps, and pictures, all of which appeared smack dab in the center of the wearers’ vision. Aside from some hiccups in the streaming and an awkwardly stilted demo of the platform’s live language translation abilities, the rest of the features seemed to work fairly well. It’s not yet clear how bulky the final glasses will be, but the prototypes were largely indistinguishable from regular glasses, if a little chunky.
The live Android XR prototype demo in action.
Photograph: Julian Chokkattu
As it makes the move to (hopefully) more ergonomically pleasing glasses, Google announced partnerships with Gentle Monster and Warby Parker to help design its upcoming spectacles. The company also says that it now has hundreds of software developers building for the Android XR platform.
Google also says that Project Moohan, the mixed-reality headset from its partner Samsung that was announced last year, will be available later this year. There’s no word yet on pricing.
Disaster Averted
At the very end of the presentation, Google’s CEO announced a couple of efforts to battle climate-fueled disasters. (Maybe it’s a way of not-quite atoning for all the energy its AI efforts gobble up.)
FireSat is a satellite constellation that Google plans to launch over the next several years, with the aim of using AI to spot wildfires in their earliest stages. Google says the system can detect fires as small as 270 square feet, though for now it has only one satellite in orbit.
Pichai also touted Wing, a drone delivery service that was used to deliver medicine and supplies in the aftermath of Hurricane Helene. He said he hopes to scale up these efforts in the future.