In the past six months, OpenAI, Anthropic, Google, and others have released web-browsing agents that are designed to complete tasks independently, with only minimal input from humans. OpenAI CEO Sam Altman has even called AI agents “the next giant breakthrough.” On today’s episode, we’ll dive into what makes these agents different from other forms of machine intelligence and whether their capabilities can live up to the hype.
You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Will Knight on Bluesky at @willknight. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Michael Calore: Hi, this is Mike. Before we start, I want to take the chance to remind you that we want to hear from you. Do you have any questions around AI, politics, or privacy that have been on your mind? Or just a topic that you wish we talked about on the show? If so, you can write to us at uncannyvalley@WIRED.com. That’s uncannyvalley@WIRED.com. If you listen to and enjoy our episodes, please rate the show and leave a review on your podcast app of choice. It really helps other people find us. Hey, everybody, how you doing?
Lauren Goode: I’m good. I’m hanging in. How are you, Mike?
Michael Calore: I’m great. I feel good. We have Will Knight on the show this week.
Lauren Goode: Yeah, I feel great about that. I do miss Katie, but we know that Katie has gone off in search of more French butter. Literally, she has gone to France.
Michael Calore: I think, Will, you brought us croissants, did you not?
Will Knight: I did. I sure did, yes. Freshly baked.
Lauren Goode: Thank you for that. You can stay on the show now.
Michael Calore: This is WIRED’s Uncanny Valley, a show about the people, power, and influence of Silicon Valley. Today, we’re talking about AI agents and why so many tech companies have been betting big on them. In the past six months, OpenAI, Anthropic, Google, and others have released web browsing agents that are designed to complete tasks independently with only minimal input from humans. OpenAI CEO Sam Altman has called AI agents “the next giant breakthrough.” By one estimate, nearly half of technology companies are already adopting or fully deploying artificial intelligence agents. This week, we’ll dive into what makes these agents different from other forms of machine intelligence, and whether their capabilities can live up to the hype. I’m Michael Calore, director of consumer tech and culture here at WIRED.
Lauren Goode: I’m Lauren Goode, I’m a senior correspondent at WIRED.
Will Knight: And I’m Will Knight, a senior writer at WIRED.
Michael Calore: I want to start us off by asking if either of you have had agentic experiences in your life lately?
Lauren Goode: I did have an experience recently with, do you remember Google Duplex?
Michael Calore: Sure.
Lauren Goode: OK. That was an early version of this. It came out around 2018; we wrote about it at the time at WIRED. But I tried to make a dinner reservation recently at that place that you and I went to for your birthday. Vegan sushi. Already, this is the most San Francisco podcast ever. We’re talking about vegan sushi and how I dispatched, unknowingly, an AI agent to try to make the reservation for me.
Michael Calore: How was it unknowingly?
Lauren Goode: Because I just did the very human thing where I went to a website and typed in “would like to make a reservation for this number of people at this time.” Unbeknownst to me, it funneled it through the Google Assistant. Then it said, “We are trying the restaurant on your behalf.”
Michael Calore: Oh.
Lauren Goode: And it kept trying, and then it would send me a notification. Then it kept trying, and it would send me a notification. It was working behind the scenes to try to make a reservation. It did not successfully complete the reservation. Does that count?
Michael Calore: Yeah, that fully counts.
Lauren Goode: That’s an AI agent? It was working on my behalf.
Michael Calore: Yeah.
Lauren Goode: Will, what about you?
Will Knight: I’ve been playing with AI coding tools, primarily Claude Code, which is, I think, a good example of an agent. Because it does all these fun things: it doesn’t just write code, it edits files, moves them around, uses your terminal. Deletes stuff if you’re unlucky. That’s the agentic thing I’ve been playing with.
Lauren Goode: You’ve been vibe coding is what you’re saying.
Will Knight: I have, yeah. It’s been fascinating. Vibe coding is this term that a very well-known AI researcher, Andrej Karpathy, came up with to describe basically conjuring up whole programs, finished software, just by prompting a model. These models have always been able to autocomplete code, but they’ve gotten so good at coding in the last year that they can create whole projects involving lots of files and lots of folders and everything. Actually, I started doing it with my kid, because I thought, “How do you teach a kid to make sense of AI?” And also, you’re always told it’s a good idea to get them to code. He loves games, so I thought, “Let’s code some games.”
Michael Calore: Well, my experience is far less technical, because I recently had to make a change to a plane reservation through the Southwest app.
Lauren Goode: I’m sorry.
Michael Calore: Yeah. Well, it was actually fine.
Lauren Goode: My condolences.
Michael Calore: It pushed me to the chatbot. It was like, “Talk to our chatbot if you need help.” I was like, “Oh, this is going to be infuriating.” Actually, it was fine. It took five minutes. I gave it my confirmation number, my flight information. Then it said, “What change would you like to make?” I said, “I would like to do this.” It said, “We can do this for you with no fee.” I was like, “Great.” “Do you want to book it?” Yes. “OK, it’s booked. Here’s your new boarding pass. You need to check in, but here’s the link to check in and get your new boarding pass.” I was like, “Wow.”
Lauren Goode: Is that the agent we’re talking about? That’s a customer service agent.
Michael Calore: Right.
Lauren Goode: And it’s a bot replicating that. But this new era of agentic AI, tech companies raising millions of dollars and having valuations in the billions because they’re pivoting to agents, that’s not exactly what that is though, right?
Michael Calore: Right. That’s a good question for Will. Can we throw this one to you, Will? What’s the difference between an AI agent, the thing we’re talking about, and a traditional chatbot, which most people have experienced?
Will Knight: Right. A chatbot generates text, and you communicate with it through that very, very narrow domain of text. That’s been extraordinarily powerful. The revolution we’ve seen is built around these language models, which can produce extraordinarily coherent and good answers to questions, but that isn’t quite the same as an AI system that is inherently an agent. If you go back in the history of AI, a lot of researchers have always worked on this idea that intelligence inherently involves agency. Humans are not things that just speak; they try to perform actions, they try to solve problems. That’s been a longstanding goal. It’s a weird quirk that we got these language models first, which are so limited in that way, and kind of old-fashioned. A lot of what we’re seeing when it comes to agents out there is basically people taking language models and using the fact that they’re very general and very flexible to take a complex command and trigger some actions. The real leap that companies want to make is that the models themselves will decide and be able to formulate their actions themselves, more like an intelligent entity.
Lauren Goode: The human is still in the loop, but the human puts in a command or a prompt and then steps back. The agent is doing a bunch of AI stuff, basically while you go do other things. That’s the promise, that they’re going to be able to make us all hyper-productive.
Will Knight: Yeah. Ideally, you would have an agent that would call up the airline for you. You tell it what you want to do, and it goes and does the negotiation for you. Or you can give it much more open-ended commands about what you’re trying to achieve, and it figures out how to do it for you. The language models are so general, so capable, that in theory they ought to be able to do that. But there are some really key catches when it comes to how you try to replicate human agency, because there’s a lot more going on in the world than just language. Agency actually involves more than just trying to solve an individual problem, because it involves other people, which makes it complicated.
Michael Calore: To say the least.
Will Knight: Yes.
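For readers who want to see what Will is describing in practice, here is a minimal sketch of that pattern in Python: a language model proposes the next action, and a surrounding program executes it against a fixed set of tools. Everything in it is hypothetical; the `propose_next_step` function stands in for a real model call, and the flight-booking tools are invented for illustration.

```python
# A minimal sketch of the loop Will describes: a language model proposes
# the next action, and a surrounding program executes it with real tools.
# propose_next_step stands in for a model call; the tools are invented.

def propose_next_step(goal: str, history: list[dict]) -> dict:
    """Stand-in for a language-model call that returns a tool request.

    A real agent would send the goal and history to a model and parse its
    reply; here one plausible trajectory is hard-coded for illustration.
    """
    if not history:
        return {"tool": "search_flights", "args": {"route": "SFO->JFK"}}
    if history[-1]["tool"] == "search_flights":
        return {"tool": "book_flight", "args": {"flight_id": "UV-101"}}
    return {"tool": "done", "args": {}}

# Hypothetical tools the agent is allowed to call.
TOOLS = {
    "search_flights": lambda args: f"found flight UV-101 on {args['route']}",
    "book_flight": lambda args: f"booked {args['flight_id']}",
}

def run_agent(goal: str, max_steps: int = 5) -> list[dict]:
    history: list[dict] = []
    for _ in range(max_steps):  # cap steps so a confused model can't loop forever
        step = propose_next_step(goal, history)
        if step["tool"] == "done":
            break
        result = TOOLS[step["tool"]](step["args"])
        history.append({"tool": step["tool"], "result": result})
    return history

for step in run_agent("move my flight to Thursday"):
    print(step)
```

The `max_steps` cap is the sort of guardrail real agent frameworks use so a confused model can’t loop forever.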
Michael Calore: Speaking of the models, it seems as though all of Silicon Valley has invested in developing their own AI agents, but who are the main players right now?
Will Knight: The main players are the obvious ones: OpenAI, Anthropic, Google. I think Amazon is a good dark horse in this race. They hired some execs from OpenAI who had made an agentic startup, and they have a lab where they’re trying to build a lot of agents. One of the key things for these companies is where you get the data from. They’re all desperately trying to figure out how to get data to train these agents. Instead of having just text, you want examples of things that people do. Amazon actually has quite a lot of good examples of that, of people doing stuff on its sites. They’re a good player.
Lauren Goode: Well, then in the coding world too, there are companies like Cursor. Tell us about that whole world, since you’ve been vibe coding with your son.
Will Knight: Yeah. Yeah, there are a bunch of startups who’ve really ridden on the backs of the other AI companies but focused on coding. I think just because that’s moved so quickly, they’ve been able to carve out these niches. OpenAI is widely rumored to be looking to acquire one of these companies, Windsurf, for about $3 billion, which gives you an indication of how they’ve been able to make this really great business for themselves.
Lauren Goode: It seems like every briefing we are on these days, a company is talking about agents.
Michael Calore: Yeah.
Lauren Goode: It ranges, too. It’s every company from Nvidia, to Microsoft, to well-funded startups. Even just earlier this week, a company called Glean, I’m pretty sure you guys have heard of it. It’s a company that has built a search engine for enterprise apps. They just raised $150 million in a Series F round of funding. They are now valued at $7.2 billion. Their whole thing now is agents. It’s all about agents.
Will Knight: I think the way to think about it is: Who is really building models that are trying to learn general agency the way a language model learned general language? And that is OpenAI and these foundation model companies. You can go back to a previous era in AI, when we had some of these game-playing AI systems, like AlphaZero. That used reinforcement learning, this process of experimenting, but it was fundamentally agentic. It only operated within the very limited world of a board game, but it was taking actions, and it was figuring out how to take them. What we’re seeing now with these AI companies mirrors that, or is going back to that in a way, because they’re trying to get models to reason toward a particular answer, toward a particular end goal.
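The board-game agency Will mentions boils down to a loop: try an action, observe the payoff, update your estimates. Here is a toy sketch of that loop on a made-up guessing game. It is nothing like AlphaZero’s actual machinery, which pairs this loop with deep networks and tree search; it only shows the agentic skeleton underneath.

```python
# A toy version of the trial-and-error loop behind game-playing agents:
# observe, act, and update an estimate of which actions pay off. Real
# systems like AlphaZero add deep networks and tree search on top of this.
import random

TARGET = 7                     # the "game": find the winning move
ACTIONS = list(range(10))

def reward(action: int) -> float:
    return 1.0 if action == TARGET else 0.0

values = {a: 0.0 for a in ACTIONS}   # the agent's estimate of each action
counts = {a: 0 for a in ACTIONS}

for episode in range(1000):
    # Explore occasionally; otherwise exploit the best-looking action.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])
    r = reward(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running average

print("agent's favorite action:", max(ACTIONS, key=lambda a: values[a]))
```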
Michael Calore: If we think about what role AI agents could play in the long term, a lot of the companies that we’re talking about are starting with customer service. They’re starting with: OK, you have a simple question, you need something simple done. I can go figure that out for you, I can go accomplish that for you. I think that’s where most of the early strides are being made. Gartner, the tech market researcher, has put out an estimate that AI agents will resolve 80 percent of common customer service queries by the year 2029. That’s four years from now, and AI agents will be doing all of our customer service stuff for us. Based on some of these examples that we’ve been talking about, it does seem like the automation of tasks currently performed by human employees is a common thread here.
Will Knight: Yeah. I think customer service is an obvious place to start, because it involves taking what language models are brilliant at, which is talking to people, and then trying to do a fairly limited number of things. A lot of these companies, the big AI players, are looking at going from coding to how you automate office work. There are a ton of repetitive tasks in office work. In theory, you ought to be able to use these models as the glue to figure out what someone’s trying to do and then replicate that. There’s already a huge amount of worry in the world of coding that this is going to eliminate lots of jobs. Then I think there’s a good chance it’s going to come for much more routine white-collar work as well.
Lauren Goode: Well, it’s been fun doing this podcast with you all.
Michael Calore: We’ve mostly been focusing on how AI agents go out and do things, like surf the web, or call a restaurant, or fulfill some sort of task in a Microsoft Office product for you. But I’m curious about what the world of agentic AI looks like in our homes, on our physical devices. This has been at the top of a lot of people’s minds recently because OpenAI just announced that it made a deal with Jony Ive, the former and longtime head of design at Apple, to acquire a new startup that he launched to build a new class of AI devices for the home. How do we feel about this development?
Lauren Goode: It’s hard to say, because we don’t know exactly what form it’s going to take yet. This is, I think, representative of a new generation of hardware startups that are going to exist to serve these AIs. They’re built from the ground up with the future of artificial intelligence and ambient computing in mind. We don’t know what form factor it’s going to take, whether it’s going to be on your desktop, or on your body, or maybe both. One thing I do appreciate: I believe it was Jony Ive who said some version of, “We can all agree we’re a little too attached to our iPhones.” They invented the thing, and now it’s, “We have to solve for the thing that we invented.” But the idea is, can AI actually do enough to take us away from having to be heads-down, faces in screens, all of the time, actually performing the actions? Can technology do some of that for us and free us up for other things? That’s always been one of the promises of AI in Silicon Valley. I remember talking to Astro Teller from Google X years ago and asking him, “Well, what does this mean for jobs?” And him basically saying, “Well, with AI, people are just going to level up.” That’s always been the really sunny, optimistic Silicon Valley view of AI. I think there will be new categories of jobs and tasks created that we maybe haven’t even thought of yet, but we still have no idea how this is actually going to work day-to-day in our lives.
Michael Calore: Right. Because I feel like people have been trying this for a long time. Whenever we saw the demo for the next version of Siri, or the next version of Alexa, or the next version of Google Assistant, there was always a demo where they would ask the speaker, the voice-controlled smart speaker to order a pizza.
Lauren Goode: Yes, it was the pizza and the Uber, those were always the examples.
Michael Calore: These are the deals that the device makers made with the businesses, and they coded that in. When you asked Alexa, “Can you order a pizza?” it wasn’t calling Domino’s and ordering your pizza in Alexa’s voice. It was just doing it on the web, in an app, through an API basically. I think when you talk about what the next generation of that is, it’s that you can ask it to do anything. What is that thing you’re asking? Is it some device that sits next to your computer and your phone? Why does it have to be a separate device? Why can’t it just be your computer and your phone? That’s the big question I have. Do we need something that’s going to probably cost several hundred dollars, that’s probably going to be very beautiful, and that does a lot of what the other two devices we already own do? I don’t really know. Everybody’s excited about it, though, because it’s Jony Ive and Sam Altman teamed up.
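To make Michael’s contrast concrete, here is roughly what one of those hard-coded assistant integrations looks like: a fixed function that posts a fixed payload to one partner’s endpoint. The URL and payload are invented, which is why the network call is left disabled.

```python
# A sketch of the old "skill" model: the assistant isn't reasoning about
# pizza, it's calling one pre-arranged partner endpoint. The URL and
# payload are made up, so the actual network call is left disabled.
import json
import urllib.request

def order_pizza_skill(size: str, toppings: list[str]) -> None:
    """Hard-coded integration: one partner, one endpoint, one task."""
    payload = json.dumps({"size": size, "toppings": toppings}).encode()
    request = urllib.request.Request(
        "https://partner.example.com/v1/orders",  # hypothetical partner API
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(request)  # disabled: the endpoint is fictional
    print("would POST:", payload.decode())

order_pizza_skill("large", ["mushroom", "olive"])
```

An agent, by contrast, would have to figure out and sequence steps like these on its own, which is the leap Will described earlier.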
Lauren Goode: Right. It just becomes the delivery mechanism for whatever this new era of computing is. I do think, too, the examples we’re talking about are very much a WALL-E kind of future: How lazy can we possibly be while we dispatch AI and machine learning to do things for us? Whereas there’s also going to be space for: How much more can I learn? What kind of knowledge can I obtain? What can I do differently because AI is helping me? There’s been this promise on the consumer end for a long time that this is going to change your day-to-day life by tackling these small inconveniences that we have now. But I think the real gains, and I don’t mean to sound too optimistic about this because I’m not entirely, stand to be: Can we close gaps of knowledge because AI is actually going to help us do that in some way?
Will Knight: Yeah. I think when it comes to the OpenAI and Jony Ive thing, the models are getting really good at interpreting the visual world and interacting through speech. I think there’s this idea in the ethos that you can reimagine the interface entirely, that you don’t have to have this graphical user interface. God knows what they’re going to come up with. Google has demoed this with Project Astra, where you wear these glasses and you can talk about what’s around you. I’ve tried that. It’s kind of impressive. Yeah, maybe there are new ways you can order pizzas with it. But also, I can’t help thinking about the fact that they increasingly need data on what people are doing to train these models. I feel like the hardware makers have so much interesting data. They don’t really tend to tap into it because of privacy issues, but maybe OpenAI will find a way. Then, on the issue of the knowledge gap, I think that’s a really important point, because automation often causes people’s skills to atrophy. You see it with autopilot in aircraft. And you’re definitely seeing coders saying they’re losing skills because they rely too much on these tools. What does that mean for the workforce?
Lauren Goode: Right. If all of these AI agents free us up to do other things, what exactly are they going to free us up to do? How are we going to fill that time? In our ultra-capitalist, some people may call it end-stage capitalist, society, the answer is probably, “Well, you can just be more productive. You can just do more, more, more.” I think it’s going to be up to us, the humans, to push back against that and say, “Here’s how we actually want to fill our time.” There are also going to be these little bottlenecks for a while that I think are going to make these tools more complicated than anything else. When I was at Google I/O a few weeks ago, I got a demo, Will, of their Project Mariner. Have you seen this yet?
Will Knight: Yeah, I’ve tried that. Yeah.
Lauren Goode: You have tried it? I don’t know if this was the demo they gave you, but in the experience I saw, it scanned a cocktail recipe, then automatically grabbed the ingredients from the recipe and put them in an Instacart shopping cart. But then it hit a bottleneck, because there was no authentication for the Instacart account. It was like, “Oh, hold on, we have to log in.” Then some demoer was trying to log into Instacart. “But look, it’s so seamless.” I’m like, “No, I literally could have just added that to Instacart.”
Will Knight: Yeah. Their examples always seem to involve buying something for some reason.
Michael Calore: Why do we think that there is such a laser focus on AI agents at the moment in Silicon Valley? Is it the Valley’s tendency to always just need a new goal to race towards? Is it indicative of something larger? Is it what VCs are most interested in funding? What’s the reason here?
Will Knight: From my perspective, one of the main reasons is that, going back to the history of AI, this is really the next step you would take in building something that gets closer to human-like AI, or AGI, or whatever you want to call it. A lot of researchers are very interested in doing that. I think there is also a ton of potential. Chatting with a chatbot is useful if you want to answer lots of questions, but the idea that you could automate a lot of work or mundane chores is potentially valuable.
Lauren Goode: Yeah. I think that if you’re on the startup side of things, this is where the money is flowing right now. We’ve been in this era for a while where interest rates are really high, and for some startups it hasn’t been easy to raise capital. The IPO market has cooled a bit, although now there are signs that it’s picking up again. And the threat of tariffs is still looming over people’s heads in Silicon Valley. If you’ve been iterating on your software and suddenly you can say to your would-be investors, “But wait. We’re pivoting to agents. We’re doing this agentic AI. We’re going to be able to sell this software into other businesses, and also we’re going to streamline our own processes and reduce our own overhead costs,” that’s probably a path to raising right now. Is some of it hype? Sure, it is. But as Will is saying, because, Will, you’re so steeped in the research and development side of this, there are actually some pretty notable advancements being made, too.
Michael Calore: Let’s take a break right now, and we’ll come right back. The development of agents is going forward full throttle, but as we were just talking about, the current agents that we’re able to use have some significant limitations. But quickly, what are some of the other concerns about the growth of AI agents? Not just about usability, but about the other things that they do.
Lauren Goode: Well, I think reliability is certainly one issue, because they’re still subject to the same kinds of hallucinations, errors, and even sycophancy that the LLMs are. You’re also introducing much more complex queries, because you’re not just asking it to run a search for you; you’re asking it to perform multiple steps under certain conditions. There are also concerns around the high cost of compute power. Then there’s selling the product itself. The Information recently reported that OpenAI might charge up to $20,000 per month for some of its more specialized AI agents. Supposedly, these are agents that would be able to perform at the level of expertise of a PhD graduate. On the one hand, that’s really, really expensive. On the other hand, our colleague Paresh Dave has been reporting that some people in the industry believe that’s actually not that much money, because ultimately, if these agents replace jobs, which is what they might do, then you’re saving on all of those salary costs. It might actually be something of a bargain. Right now, though, there’s a bit of sticker shock around that.
Will Knight: Yeah, I think that’s very true. There are a bunch of other issues, things that could go wrong and probably will go wrong. A bunch of researchers I spoke to … I spoke to Zico Kolter at CMU, who is also on the board of OpenAI and has been looking at the security of agents. Once you introduce agents that are, ideally in the future, going to interact with other agents, you have this amplifying effect that could be very difficult to predict. As Lauren says, the idea of these things buying things inherently makes them a much juicier target for hackers. If you have something like Mariner using the web, it’s really interesting to think about the idea that people will maybe try to reinvent SEO to work on agents. One of the more fundamental problems, I think, is that these aren’t really replicating human agency. There’s a really interesting book on this, to go into the weeds, called The Evolution of Agency. If you look at animals and how their agency evolved, when you get to humans there’s a very fundamental difference in the way human agency works, and it involves working with other humans. It involves collaborating and knowing that you often have to sacrifice things to work together. You have things like guilt and empathy that evolved. With these models, we’re seeing examples of them doing really crazy things in contrived scenarios, where they’re told they’re going to be switched off and then they’ll blackmail someone. That is not necessarily an example of an evil model; it could just be an example of a model that hasn’t learned how we’d expect a person to behave all the time. Yeah, that could have a lot of unforeseen consequences.
Michael Calore: I think my biggest concern is something that you touched on, Lauren, which is the fact that, as the hype and the excitement around AI agents ramps up, a lot of companies are going to be looking at them as a way to reduce labor costs. There’s nothing inherently wrong about reducing labor costs, but I feel like there is something inherently wrong about replacing an entire class of workers at a company with machines. It is quite literally dehumanizing. And I mean that not only in the jokey sense that you’re getting rid of humans, but also that the people working there and the customers interacting with your company are now going to feel alienated from you, because they no longer have that human interaction that they’re used to. We’re already seeing that happening. That’s the kind of thing that excites people who are very concerned about the bottom line, and worries people who are concerned about things like income inequality, and all the problems that come along with this rampant push towards mechanization. We’ve been dealing with this as a society for 150 years, and I feel like as the technology gets more powerful, those worries just accelerate.
Lauren Goode: In the past week alone, I’ve heard a venture capitalist and a CEO say, in separate interviews, this quote about how “six out of 10 jobs, 60 percent of jobs that exist now, didn’t exist back in 1940.” Things change, things shift, technology enables new kinds of jobs. Futurists, CEOs, venture capitalists, they love this, because it creates this blank space for what we haven’t been able to imagine yet, in terms of what jobs will exist. Do I believe that there are some jobs that Will’s son, who he’s vibe coding with right now, could potentially have that we haven’t even thought of yet?
Michael Calore: Is everyone in Silicon Valley thinking about AI agents in the same way?
Will Knight: I don’t think they are. I think there’s a lot of enthusiasm, but there’s also a ton of different definitions. Recently, I spoke to Sonya Huang, a partner at Sequoia who tracks AI. She was saying that most agent startups, most of the companies offering agents, are really just rebranding existing AI tools as agents.
Lauren Goode: Yeah. To Will’s earlier point, there’s no real clear definition of what an AI agent is right now, or what it’s supposed to be. There’s a really wide spectrum, so companies can move the needle and define it however they want to, or say they’re offering it in a certain way. I think we’re also going to see something of a cottage industry pop up around the spaces in between agentic AI. Let’s say, for example, there’s an authentication bottleneck when you dispatch an agent to complete a bunch of different tasks online through a bunch of different apps. There needs to be some kind of connection between those apps, and it needs to be safe and secure. I think you’re going to see solutions emerge for that. I think you’re going to see some version of last-mile shipping, but for coding. Because right now there’s a lot of vibe coding happening, and people talking about vibe coding, and “look what this bot can do for me.” But not all of that code is actually shippable. You have to be able to debug it, and you have to be able to ship it. It has to run, it has to work. I think you’ll start to see more and more startups and companies trying to offer solutions to all these little gaps and bottlenecks that exist between the AI agents. Then our jobs will be taken fully. Good times.
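One hedged sketch of how the authentication bottleneck Lauren describes might be bridged: rather than handing an agent your passwords, an identity service mints it a short-lived token scoped to a single action, and each app checks the scope before acting. The token format, the scope names, and the Instacart example are all illustrative; real solutions would build on standards such as OAuth.

```python
# One way the authentication gap might be bridged: instead of giving an
# agent your password, mint it a short-lived token scoped to one action.
# The token format and scope names are invented; real systems would build
# on a standard like OAuth.
import base64, hashlib, hmac, json, time

SECRET = b"user-held-secret"  # in practice, held by an identity provider

def mint_token(agent_id: str, scope: str, ttl_seconds: int = 300) -> str:
    claims = {"agent": agent_id, "scope": scope,
              "expires": time.time() + ttl_seconds}
    body = json.dumps(claims).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def verify_token(token: str, required_scope: str) -> bool:
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False          # token was tampered with
    claims = json.loads(body)
    return claims["scope"] == required_scope and claims["expires"] > time.time()

token = mint_token("shopping-agent", scope="instacart:add_to_cart")
print(verify_token(token, "instacart:add_to_cart"))  # True
print(verify_token(token, "instacart:checkout"))     # False: never granted
```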
Michael Calore: Will, for people like us who are consumers of this technology … We’re not vibe coding like Will.
Lauren Goode: Not yet.
Michael Calore: By and large, we’re encountering agentic AI in some form in our daily lives. As it becomes more prevalent, how can we assess its potential and its risks in a levelheaded way?
Will Knight: First of all, I think you should all be vibe coding. I’m making all manner of garbage. When I’m vibe coding, I very consciously do it on a machine where I don’t mind if it deletes a bunch of files, because it is really unpredictable and will sometimes do that. I don’t really know how you do that for something that you’re going to let loose on your shopping. Do you have a budget that you just let the agent spend? I think that’s a great question. The risk is obviously all going to be put entirely on us, I should imagine.
Michael Calore: Yeah.
Lauren Goode: Yeah. But I think as these agents get better and better at doing tasks for us, we have at least a few responsibilities. One is that we do have to disrupt ourselves and our habits and learn how to use new tools, because they’re going to take over our lives. Two, we have to figure out where we fit in as humans and retain some of that human agency: stay in the loop, still participate in our own lives, not just let tech take them over. Three, we have to think really hard about the ethics, the privacy, and the security of what these AI agents are doing as they take over people’s lives. The promise from all the optimists is that this is going to close the equality gap, that once again it’s going to be great for everybody. We know almost for certain this is not going to be great for everybody.
Michael Calore: Yeah. That’s why I’m the opposite of the optimists at this point. I think that great caution and great skepticism are a given here. We need to make sure that when we adopt these technologies, we do so in a way that is very, very thoughtful and cautious. This is something that feels familiar. Probably the most recent example is the smartphone: something that causes friction in relationships, something you can pay attention to while you ignore the humans around you. You can use it to interact with other humans, or you can use it to not interact with other humans. It’s a force that has really eroded a small part of our humanity over the 15 or 20 years it has been part of our lives. I can see that happening with AI agents, as the things we used to rely on people to do, and the things that used to force us to interact with each other, go away. I have the same experience whenever I ride in a Waymo instead of riding in an Uber or taking a taxi. Now there’s no human in the equation, no person to interact with. By doing that over and over again, I am slowly eroding my own sense of what it means to be a human being. I know it sounds really esoteric, but chart it over time and you’ll probably find you’re feeling the same way in your own life. I would encourage people to make their own decision about how they want to use AI agents, and if they don’t feel like this is something that belongs in their lives, to vote with their dollars.
Will Knight: Maybe we also need a renewed open source movement, so we’re not just using agents that belong to and funnel data to these giant companies. Use open source code and models, and that sort of thing.
Michael Calore: That’s a very good idea. We should get Signal on that. They should start doing that. They should make their own model.
Lauren Goode: I think that sounds great.
Michael Calore: OK, let’s take another break, and we’ll come right back. Will and Lauren, thank you for an invigorating conversation. We’re going to put AI agents to the side for a minute and get human once again. Let’s talk through some recommendations. Will, as our guest, please recommend something to our listeners.
Will Knight: OK, I talked about this earlier, but I want to make it my recommendation. I’m going to hold it up, which is great radio. This book, The Evolution of Agency by Michael Tomasello. I found it just fascinating and really revealing about what’s missing from AI when it comes to agency, and about the importance of understanding human social interaction and human culture when it comes to the big picture of intelligence and AGI, which everybody talks about, though they don’t really talk about that part so much.
Michael Calore: Nice. Lauren, what’s your recommendation?
Lauren Goode: My recommendation is local news, and in particular Mission Local.
Michael Calore: Yay!
Lauren Goode: Mission Local is a local nonprofit news organization here in San Francisco that covers the Mission neighborhood, but also San Francisco broadly. They do a fantastic job. There’s a lot going on in US cities right now, in particular Los Angeles, and also in San Francisco. There are ICE raids happening around the country and people are taking to the streets to protest them. Mission Local has been doing a great job covering what’s been going on in San Francisco so far. I recommend that you support them and support your local news if you can. Mike, what’s your recommendation?
Michael Calore: I want to recommend an essay in the current issue of Harper’s. It is by the Norwegian writer Karl Ove Knausgaard, who, Lauren, I’m sure you’re sick of me talking about. But it’s a fantastic essay. It’s called “The Reenchanted World.” It is Karl Ove reckoning with technology. He tells you about the first time he encountered a computer, which was 40 years ago, and how he just has not paid attention to computers since, and it starts to bother him. There’s a great quote near the top: “To keep somewhat informed about the political situation in the world is a duty, something one has no right to turn away from. Shouldn’t something similar apply to technology, given the immensity of its influence?” I love that quote, because it really gets to the center of what we’re talking about today: In order to engage with the world, you need to understand how these systems work. He goes to a Greek island to visit the writer James Bridle, because their book Ways of Being is a really good introduction to intelligence, both artificial and natural. They talk a lot about artificial intelligence, how it developed, and the various ways machine intelligence has shown up in our lives. It’s a really wonderful reported piece about the current state of technology in our lives.
Lauren Goode: Great.
Michael Calore: Yeah.
Lauren Goode: It sounds a lot shorter than his books.
Michael Calore: Yeah. You can read it in under an hour.
Lauren Goode: All right, sounds good to me.
Michael Calore: Thanks for listening to Uncanny Valley. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you’d like to get in touch with us with any questions, comments, or show suggestions, write to us at uncannyvalley@WIRED.com. Today’s show was produced by Adriana Tapia. Amar Lal at Macro Sound mixed this episode. Jake Loomis was our New York studio engineer. Meghan Herbst fact-checked this episode. Jordan Bell is our executive producer. Katie Drummond is WIRED’s global editorial director. Chris Bannon is the head of global audio.