The smartphone has become the playground for new AI and generative AI features.
Apple made a significant push last year with Apple Intelligence, featuring tools like Image Playground, which allows you to create images from scratch, and Writing Tools that can rewrite and summarize text. On the latest iPhone 17 running iOS 26, machine intelligence powers the new live translation features in calls and messages. Google has many of the same features on Android; the latest Pixel 10 phones can generate a version of your voice for use in real-time language translations on calls.
As WIRED’s resident smartphone reviewer, I’ve tested all of these phones and their hyped-up features. Very few have felt like practical, useful tools designed to make everyday life easier, something I could even see my parents using. That’s what AI is supposed to do, right?
That changed when I tried Ask Photos, Google’s new conversational editing feature in Google Photos, which debuted on the Pixel 10 phones and is now available on Android devices that support it. The feature lets you type or speak the visual edits you want to see in your photos without fumbling with menus and sliders. Most people have no idea how powerful the software on their phones already is. By surfacing all the available editing tools and using them to execute your request, this feature not only gives you the results you want in a nearly frictionless way, but it also helps you understand what your smartphone is capable of.
Speak Your Mind
The idea of talking to a computer and having it complete tasks for you has been around for decades. Hollywood has its own idea of what this looks like (HAL 9000 in 2001: A Space Odyssey is perhaps the most iconic—and dark—depiction), but researchers have another.
A prototype app called PixelTone, developed by Adobe Research and the University of Michigan, showed the possibility of combining voice control and touch for photo editing. The top comment on the YouTube video demonstrating the capability, left by a viewer 12 years ago, reads: “Why so much hate? It isn’t for the ‘real’ photographer, but for my dad, that sometimes uses Photoshop; this is great.”
The democratization of powerful photo editing tools has clear dangers, like the ease with which bad actors can use them to spread disinformation and manipulate the truth. But most of today’s editing tools require users to actively seek them out, and demand skill to use effectively. Google’s conversational editor is different. It’s powerful, simple, and controlled by plain English. And it’s one tap away in your Google Photos library.
“For many people, ChatGPT is a fun novelty,” says Chris Harrison, director of the Future Interfaces Group at Carnegie Mellon University. “Some people have adopted it into their workflows, but for the vast majority of people, it’s a novelty.” Harrison believes Google’s new editing tool will be used far more widely—at least by anyone savvy enough to use an Instagram filter. “AI should be making things easier to use, and this is a great example consumers will have a genuine interest in.”
Clear signposting makes Google’s photo editor more accessible. Many AI chatbot interfaces start with a blank textbox that offers little insight into their capabilities, and that’s no help to people who are unsure where to start. But having the conversational editor pop up as soon as you tap “edit” on Google Photos makes it immensely easier to use because it’s right there after you’ve already established context that you’re editing a photo. “Human laziness always wins,” Harrison says.
Google via Julian Chokkattu
You’ve always been able to go into Adobe Photoshop and paint out a street lamp from a photo, but Photoshop subscriptions are pricey, and the tools require a base-level understanding of photo editing, not to mention familiarity with Photoshop’s capabilities. “People probably wanted this feature beforehand, but didn’t want to have the cost of going into Photoshop and blowing half an hour to modify one photo,” Harrison says.
Google’s conversational editor goes beyond the usual edits like fixing the lighting, erasing plastic trash bags from the background, and cropping. You can ask it to “Add King Kong climbing the Empire State Building,” and voila. It can erase people from photos.
That brings us back to the threats of manipulation that these generative AI features present. Harrison acknowledges the pushback, but believes it will largely blow over.
“That’s what people have been doing with their smartphone-captured photographs since the beginning of time,” he says. “If anyone thinks Instagram is real life, they’re in for a rude awakening. This is just a new tool; it’s not a new concept, it’s just a more powerful version of what has existed.”
To address these concerns, images edited with Google’s new tool have C2PA content credentials, IPTC metadata, and SynthID to watermark and log the use of AI in media and trace the file’s origin. These steps make it clear to other image editing software and diagnostic tools that the photos have been edited.
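Those provenance labels are machine-readable, which is what lets other software flag an AI edit. As a rough illustration (not Google’s actual pipeline, and no substitute for a real C2PA verifier or SynthID detector), here’s a minimal Python sketch that scans a file’s bytes for an embedded XMP packet and checks IPTC’s DigitalSourceType vocabulary, whose “compositeWithTrainedAlgorithmicMedia” term is the standard label for AI-assisted composites:

```python
import re

# IPTC's controlled vocabulary for how an image was produced; these two
# terms are the ones used to label AI-generated or AI-edited media.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def xmp_digital_source_types(data: bytes) -> set[str]:
    """Collect IPTC DigitalSourceType URIs from any embedded XMP packets.

    XMP ships as plain XML between <?xpacket ... ?> markers, so a byte
    scan is enough for a demo; cryptographic C2PA checks need an SDK.
    """
    found: set[str] = set()
    for packet in re.findall(
        rb"<\?xpacket begin=.*?<\?xpacket end.*?\?>", data, flags=re.DOTALL
    ):
        text = packet.decode("utf-8", errors="replace")
        found.update(
            re.findall(r"http://cv\.iptc\.org/newscodes/digitalsourcetype/\w+", text)
        )
    return found

def looks_ai_edited(data: bytes) -> bool:
    """True if the file declares an AI-related digital source type."""
    return bool(xmp_digital_source_types(data) & AI_SOURCE_TYPES)
```

A stripped photo would sail past a check like this, which is why Google layers the invisible SynthID watermark on top of the metadata: the label survives even when the XMP packet is deleted.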
Conversational Editing
Photograph: Julian Chokkattu
Editing pictures on a smartphone isn’t very fun. There are multiple tabs you have to swipe through, and sliders can be hard to precisely move with your finger. Google has experimented with AI-powered edits before—a single tap to have the algorithm edit the photo to what it thinks you want—but the results can be hit or miss.
With conversational editing, you’re in control. Just tell the text box, either by voice or typing, what you want to see in the image. And if you don’t know what words to use, simple prompts work: I’ve been experimenting with “Make it look better” and getting pretty good results. I’ve seen the tool adjust crops, improve lighting, and even add a portrait blur effect. “Fix the lighting” or “Remove the reflections” also work really well.
Google via Julian Chokkattu
The tool isn’t perfect. It can’t perform some actions, like moving subjects around the frame, and edits are applied uniformly to the whole image. For example, when editing a portrait of my wife, I wanted to retain the stark shadows on her body but bring down the highlights on her face. Google Photos just reduced the highlights across the board, ruining the shadows in the bottom half of the frame (though it did improve the lighting on her face). Unlike Lightroom or Photoshop, where you can control exactly where you want to adjust these parameters, you’re limited by the editing capabilities of Google Photos.
Got an unsightly plastic bag in the photo? Ask to remove it. Is the photo too cropped in? You can ask to expand it a bit more, and Google will use generative AI to fill the new extra space with what it thinks should be there (with varying degrees of success). If you don’t want to use these generative AI editing features, you don’t have to.
Perhaps most impressive was when I asked it to “restore” a photo from when I was a baby. It cleaned up the image, improved the colors, and boosted the contrast. Could I have done it myself? Sure, but it would have taken me several minutes, and this was done in seconds.
Google via Julian Chokkattu
All of these capabilities point to the next leap in how we interact with computers. “Photoshop is a tool,” Harrison says. “I’m using it as a very powerful tool with maybe a sprinkling of AI features. But computer scientists have been really thinking about this for the past half-century: When is this change going to happen, from computers as tools to computers as partners? It’s a really seminal shift in how we think about computing.”