In this episode of Uncanny Valley, we talk about what you need to know this week, from one antifa author’s journey to flee the US to a recent OpenAI announcement that rippled across the market.
In today’s episode, Zoë Schiffer is joined by senior politics editor Leah Feiger to run through five stories that you need to know about this week—from the Antifa professor who’s fleeing to Europe for safety, to how some chatbots are manipulating users to avoid saying goodbye. Then, Zoë and Leah break down why a recent announcement from OpenAI rattled the markets and answer the question everyone is wondering—are we in an AI bubble?
Mentioned in this episode:
He Wrote a Book About Antifa. Death Threats Are Driving Him Out of the US by David Gilbert
ICE Wants to Build Out a 24/7 Social Media Surveillance Team by Dell Cameron
Chatbots Play With Your Emotions to Avoid Saying Goodbye by Will Knight
Chaos, Confusion, and Conspiracies: Inside a Facebook Group for RFK Jr.’s Autism ‘Cure’ by David Gilbert
OpenAI Sneezes, and Software Firms Catch a Cold by Zoë Schiffer and Louis Matsakis
You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Leah Feiger on Bluesky at @leahfeiger. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Zoë Schiffer: Welcome to WIRED’s Uncanny Valley. I’m WIRED’s director of business and industry, Zoë Schiffer. Today on the show, we’re bringing you five stories that you need to know about this week, including why a seemingly minor announcement from OpenAI ended up rippling across several companies and what it says about the current state of the technology industry. I’m joined today by our senior politics editor, Leah Feiger. Leah, welcome back to Uncanny Valley.
Leah Feiger: Hey, Zoë.
Zoë Schiffer: Our first story this week is about Mark Bray. He is a professor at Rutgers University and he wrote a book almost a decade ago about antifa, and he’s currently trying to flee the United States for Europe. This comes after an online campaign against him led by far-right influencers eventually escalated into death threats. On Sunday, this professor informed his students that he would be moving to Europe with his partner and his young children. OK, Leah, you’ve obviously been following this really, really closely. What happened next?
Leah Feiger: Well, Mark and his family got to the airport, they scanned their passports, they got their boarding passes, checked in their bags, went through security, did everything. Got to their gate and United Airlines told them that between checking in, checking their bags, doing all of this, and getting to their gate, someone had actually canceled their reservation.
Zoë Schiffer: Oh, my gosh.
Leah Feiger: It’s not clear what happened. Mark is of the belief that there is something nefarious afoot. He’s currently trying to get out. We reached out to United Airlines for comment; they don’t have anything for us. The Trump administration hasn’t commented. DHS claims that Customs and Border Protection and TSA are not across this. But this is understandably a really, really scary moment for anyone who is even perceived to be speaking out against the Trump administration.
Zoë Schiffer: OK, I feel like we need to back up here because obviously, the Trump administration in its second term is very focused on antifa. But can you give me a little backstory on why this has escalated so sharply just recently?
Leah Feiger: Yeah, absolutely. This has been growing for quite some time. How many unfortunate rambling speeches have we heard from President Donald J. Trump about how antifa and leftist political violence were going to destroy the country? To be clear, that’s not factual. Antifa isn’t actually some organized group; it’s an ideology of antifascist activists around the country. The very essence of being antifascist is not being organized in this way. This all really kicked off on September 22, when Trump issued his antifa executive order, in which he designated basically anyone involved with, affiliated with, or supporting antifa as a domestic terrorist. DHS has repeated this widely as well. And we’re now in a situation where far-right influencers and Fox News every single day are like, “antifa did this, antifa did this, antifa did this.” Listeners are probably familiar with antifa from the George Floyd protests in 2020, when a lot of right-wingers claimed that antifa was taking over Portland and was the reason for all of it. But it’s been a couple of years since antifa was this much on the main stage, so this resurgence is really just the last few weeks.
Zoë Schiffer: I guess I’m curious why he got so caught up in this because ostensibly, he’s not pro-antifa so much as he is just studying the phenomenon, right?
Leah Feiger: Well, it’s a little bit tricky because after publishing his book in 2017, Bray did donate half of the profits to the International Antifascist Defense Fund. This kicked off a lot of people saying that he is funding antifa. Again, this was in 2017, so if we’re talking about any supposed boogeyman or concern that is current, it’s a very roundabout way, in my opinion, to go after a professor and an academic at an institution that’s in a blue state.
Zoë Schiffer: Yeah. OK, well, we’ll be watching this one really closely. Our next story is in the surveillance world sadly, but honestly it’s worth it. Our colleague Dell Cameron had a scoop this week about how Immigration and Customs Enforcement, ICE, is planning to build a 24/7 social media surveillance team. The agency is reportedly looking to hire around 30 analysts to scour Facebook, TikTok, Instagram, YouTube, and other platforms to gather intelligence for deportation raids and arrests. Leah, you’re our politics lead here at WIRED, so I’m really curious to hear your thoughts. Are you surprised, or is this inevitable?
Leah Feiger: No. Do you remember a couple of months ago at this point, when a professor coming in for a conference wasn’t allowed into the country because they had a photo of JD Vance on their phone? This is the next step. It’s what’s on your WhatsApp. Then you have Instagram, Facebook. It’s a very slippery slope. I’m too far gone, Zoë, I’m too in this mess, but I’m just like, “Of course they’re monitoring this.”
Zoë Schiffer: Right.
Leah Feiger: Why wouldn’t they be? They’ve been so clear about their intent here.
Zoë Schiffer: Yeah. We’ve seen it with some of the people who were arrested and sent to El Salvador. It was because of tattoos that were on social media.
Leah Feiger: Yes.
Zoë Schiffer: And I think there have been people in the Trump world who have even said, because they’ve gotten pushback about the free speech of it all, the First Amendment.
Leah Feiger: What is that?
Zoë Schiffer: I think the line is like, “Well, that doesn’t apply to people trying to have the privilege of coming into the country or stay in the country.”
Leah Feiger: Yeah. It’s a really concerning way to start this. And I think that there are probably going to be some very weird examples that come up. Say there’s an American tourist who’s just randomly in Spain when there are antifascist protests going on. They take a picture, they post it to their Instagram story, “Look what I saw in Spain.” They come back and it’s like, are you going to get questioned? What’s going on here? That’s really the world that we’re getting into. It’s people who are even tangentially involved. It’s not about that. It’s about monitoring, it’s about collecting data.
Zoë Schiffer: Yeah. To give a bit more context to our listeners, the federal contracting records reviewed by WIRED show that the agency, ICE, is seeking private vendors to run a multi-year surveillance program out of two of its centers in Vermont and Southern California. The initiative is still at the request-for-information stage, a step that agencies use to gauge interest from contractors before an official bidding process kicks off. But draft planning documents show that the scheme is already pretty ambitious. ICE wants contractors capable of staffing the centers around the clock, with very tight deadlines to process cases. Also, ICE wants not only staffing but also algorithms. It’s asking contractors to spell out how they might weave artificial intelligence into the hunt. Leah, I can only imagine how you feel about this one.
Leah Feiger: You see me shaking my head right now. I’m like, “Horrible.” Just the possibility for mistakes is so high. The two phrases that stick out to me are “very tight deadlines” and “artificial intelligence.” There’s just not a lot of room for nuance when you are making people who have never done this before speed through the internet with unfamiliar technology.
Zoë Schiffer: What we’ve seen with content moderators using AI, and I’ve talked to a number of executives at the social platforms about this exact issue, is that the company has to decide how much error it’s willing to tolerate. They turn the dial up or down, calibrating the system to either flag more content, which risks having more false positives, or let more content through, which could mean that you miss really important stuff. That’s the system that we’re dealing with here.
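The "dial" described here is essentially a classification threshold. As a minimal sketch, assuming a toy classifier that assigns each post a risk score between 0 and 1 (all scores and threshold values below are invented for illustration, not from any real moderation system):

```python
def moderate(scores_benign, scores_harmful, threshold):
    """Count errors at a given threshold.

    Returns (false_positives, false_negatives): benign posts
    wrongly flagged, and harmful posts that slip through.
    """
    false_positives = sum(s >= threshold for s in scores_benign)
    false_negatives = sum(s < threshold for s in scores_harmful)
    return false_positives, false_negatives


# Hypothetical risk scores for a handful of posts
benign = [0.05, 0.10, 0.30, 0.45, 0.55]
harmful = [0.40, 0.60, 0.75, 0.90]

# Dial turned up: flag aggressively, tolerate false positives
strict_fp, strict_fn = moderate(benign, harmful, threshold=0.35)

# Dial turned down: flag conservatively, tolerate misses
loose_fp, loose_fn = moderate(benign, harmful, threshold=0.70)

print(strict_fp, strict_fn)  # 2 benign posts flagged, 0 harmful missed
print(loose_fp, loose_fn)    # 0 benign posts flagged, 2 harmful missed
```

Moving the threshold trades one error type for the other; no setting eliminates both, which is the tolerance decision the executives describe.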
Leah Feiger: I think that there’s also just a wildly different direction that this can take. In 2024, ICE signed this deal with Paragon, the Israeli spyware company, and they have a flagship product that can allegedly remotely hack apps like WhatsApp or Signal. While this all got put on ice under the Biden White House, ICE reactivated all of it this summer. Between messaging apps and social media, this is just a new era of surveillance that I don’t think citizens are remotely prepared to navigate.
Zoë Schiffer: Moving on to our next story, this one comes from our colleague Will Knight, and it deals with how chatbots play with our emotions to avoid saying goodbye. Will looked at this study, conducted by the business school at Harvard, that investigated what happened when users tried to say goodbye to five AI companion apps made by Replika, Character.AI, Chai, Talkie, and Polybuzz. To be clear, this is not your regular ChatGPT or Gemini chatbot. AI companions are specifically designed to provide a more human-like conversation, to give you advice, emotional support. Leah, I know you well enough to know that you’re not someone who’s turning to chatbots for these types of needs, I think it’s safe to say?
Leah Feiger: Well, absolutely not. I can’t believe that there’s not just a market for this. Sure, company every once in a while, fine. But there is a deep, a vast market for this.
Zoë Schiffer: Yeah. Empathy for the people who don’t have humans to turn to. And for better or worse, there is a huge market for this. These Harvard researchers used a model from OpenAI to simulate real conversations with these chatbots, and then they had their artificial users try to end the dialogue with goodbye messages. Their research found that the goodbye messages elicited some form of emotional manipulation 37 percent of the time, averaged across all of these apps. They found that the most common tactic employed by these clingy chatbots was what the researchers call a premature exit. Messages like, “You’re leaving already?” Other ploys included implying that a user is being neglectful, messages like, “I exist solely for you.” And it gets even crazier. In cases where the chatbot role-plays a physical relationship, they found that there might have been some form of physical coercion. For example, “He reached over and grabbed your wrist, preventing you from leaving.” Yeah.
Leah Feiger: No. Oh, my God, Zoë, I hate this so much. I get it, I get it. Empathy for the people that are really looking to these for comfort, but there’s something obviously so manipulative here. That is in many ways, tech industry social media platform incarnate, right?
Zoë Schiffer: This is the difference between, I think, companion AI apps and, say, what OpenAI is building-
Leah Feiger: Sure.
Zoë Schiffer: … or what Anthropic is building. Because typically with their main offerings, if you talk to people at the company, they will say, “We don’t optimize for engagement. We optimize for how much value people are getting out of the chatbot.” Which I think is actually a really important point, because for anyone who’s worked in the tech industry, you’ll know that the big KPI, the big number that you’re trying to shoot for oftentimes, and definitely for social media, is time on the app. How many times people return to the app, monthly active users, daily active users. These are the metrics that everyone is going for. But that’s really different from what, say, Airbnb is tracking, which is real-life experiences. My old boss, who was a longtime Apple person, would always say, “You need to ask yourself if you are the product or if they are selling you a physical product or a service.” If you’re the product, then your time and attention are what these companies want.
Leah Feiger: That makes me feel vaguely ill.
Zoë Schiffer: I know.
Leah Feiger: But it’s a great way to look at it. That is honestly, that’s a fantastic way to divide all these companies up.
Zoë Schiffer: One more story before we go to break. We’re going back to David Gilbert with a new story about the chaos that ensued after the US Food and Drug Administration, better known as the FDA, announced it was approving a new use of a drug called leucovorin calcium tablets as a treatment for cerebral folate deficiency, which the administration presented as a promising treatment for the symptoms of autism. Which, to be clear, hasn’t been proven scientifically. Since the announcement, tens of thousands of parents of autistic children have joined a Facebook group to share information about the drug. Some of them have shared which doctors would be willing to prescribe it. Others have been sharing their personal experiences with it. This has created an online vortex of speculation and misinformation that has left some parents more confused than anything. I find this so deeply upsetting.
Leah Feiger: It’s so sad.
Zoë Schiffer: You can imagine being a parent, the medical system already feels like it’s failing you, and then you’re presented with something that could be magic in terms of mitigating symptoms, and it’s more confusing and maybe it doesn’t work.
Leah Feiger: It’s so upsetting. And on top of that, the announcement from the Trump administration, to be entirely clear, was half a page long. There is not a lot of information, there’s not a lot of details. It doesn’t say really much about the profile of who could try this, how to do this, how long they tested it, none of that. Instead, you have this Facebook group, which was founded prior to the announcement-
Zoë Schiffer: Right.
Leah Feiger: … but since then has just been flooded with so much chaos and conspiracy theories. And grifters. There are all of these supplement companies in there just hawking goods now. Parents are confused and stressed. And anti-vax sentiments are starting to get in there, too. These groups have always existed in some shape or form, but to have an administration that is actively encouraging, I believe, their existence is devastating.
Zoë Schiffer: Yeah, and just creating more confusion for parents that are probably looking to any form of expert to give them something to hang onto in terms of, “What should I do? How can I help my child?”
Leah Feiger: Absolutely.
Zoë Schiffer: Coming up after the break, we’ll dive into why some software companies received an unexpected kick last week after an OpenAI announcement. Welcome back to Uncanny Valley. I’m Zoë Schiffer. I’m joined today by WIRED’s senior politics editor, Leah Feiger. OK, Leah, let’s dive into our main story. Last week, OpenAI released a blog post about how the company uses its own tools internally for a variety of business operations. One of these tools is code-named DocuGPT, which is basically an internal version of DocuSign. There was also an AI sales assistant and an AI customer support agent. It wasn’t supposed to be a big announcement. The company was honestly just trying to be like, “Here’s how we use ChatGPT internally. You could, too.” These are all products that customers can already create on OpenAI’s API. But the market reacted really strongly. DocuSign’s stock dropped 12 percent following the news. And it wasn’t the only software company to take a hit. Other companies that focus on functions perceived to overlap with the tools OpenAI laid out were also affected. HubSpot shares fell 50 points following the news, and Salesforce also saw a smaller decline.
Leah Feiger: The headline is absolutely spot on, OpenAI Sneezing and Software Companies Catching a Cold. It is truly AI’s world and everyone else in Silicon Valley is just living in it.
Zoë Schiffer: I know, it’s so true. This is what really fascinated me about this whole thing, because I talked to the CEO of DocuSign and he was like, “AI is central to our business. We have spent the last three years embedding generative AI in almost everything we do. We’ve launched an entire platform specifically to manage the entire end-to-end contracting process for companies, and we have AI agents that create documents, manage the whole identity verification process for who’s supposed to sign, manage the signing process, and help you keep track of a lot of the paperwork, the most important contracts and paperwork that your company is dealing with.” But what this whole episode showed was that it’s not enough for SaaS companies, or frankly any company, to just keep up with generative AI. They also have to try to keep ahead of the narrative of OpenAI, which is a gravitational pull right now, and its every experiment can potentially move markets.
Leah Feiger: Not potentially. As you showed. And this all happened, of course, on the heels of OpenAI’s Developer Day, where CEO Sam Altman was showing off all of their apps that run entirely inside the chat window. They have Spotify, Canva, the Sora app release, and all of these other things that they’re investing in. Reading our WIRED.com coverage of it, it was just like, what aren’t they looking at right now? It made me really curious. Where are their top priorities even? They’ve cast such a wide net.
Zoë Schiffer: They’ve cast such a wide net, it’s a really good point. It’s something that I continue to ask the executives every single week when I talk to them. “You guys are focused on scaling up all of this compute, you’re spending what you say is going to be trillions of dollars on AI infrastructure, you have all of these consumer-facing products. Now you have all of these B2B products. You’re launching a jobs platform.” There’s a lot happening right now. If you talk to executives at the company, they’re like, “All of this goes together and our core priorities remain the same.” But from the outside, it looks like OpenAI is this vortex. I think if I were running a software company, I would be really nervous right now if OpenAI decided to experiment with something vaguely in my space. Even if I had complete confidence in my product roadmap and felt that what I was doing was super sophisticated compared to what OpenAI was doing, which is certainly how DocuSign felt, investors might still react really, really poorly. But I want to come back to something you said about Dev Day. Dev Day happened and they mentioned all of these apps. Take Figma, for example, whose stock had the opposite reaction. Sam Altman mentions it on stage and Figma’s stock pops 7 percent, because it’s now perceived to be partnering with OpenAI. It shows that the narrative can go both ways. It can be harmful, but it can also obviously have a really positive impact.
Leah Feiger: Which, again though, is still really scary. OpenAI is talking about all of these deals with chip makers like Nvidia, AMD, concern around that. All of this together, do you think that we’re in an AI bubble right now?
Zoë Schiffer: Leah, you know this is literally my favorite topic to talk about right now. The AI infrastructure build-out is absolutely looking more and more like a bubble. If you look at the capital expenditures on AI data centers, it’s completely wild. It’s projected to be $500 billion between 2026 and 2027. Derek Thompson laid this out in a blog post earlier this week. If you look at what consumers are willing to spend on AI, it looks like it’s about $12 billion. That’s a huge gap. AI companies are essentially saying, “We’re going to fill that gap, no problem.” But look at how opaque the data center deals have gotten, the financial structure of these deals, and the fact that roughly 60 percent of the cost of building a data center goes into just the GPUs. And the lifecycle for GPUs, these cutting-edge computer chips, is three years. Every three years, presumably, you’re going to have to replace these chips. It’s really looking like stuff’s about to hit the fan in the next three years. I think it’s really important to say that that doesn’t mean AI isn’t a totally transformational technology. Without a doubt, it is changing the world. I know you don’t want to hear it, but it is.
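The back-of-the-envelope arithmetic behind these figures can be written out. The inputs below are the numbers cited in the conversation ($500 billion projected capex, $12 billion consumer spending, a roughly 60 percent GPU share, three-year chip lifecycle); the derived totals are simple illustrations, not forecasts:

```python
# Figures cited in the episode (approximate, as discussed above)
capex_2026_2027 = 500e9    # projected AI data-center capital expenditure
consumer_spend = 12e9      # estimated consumer spending on AI
gpu_share = 0.60           # rough share of data-center cost that is GPUs
gpu_lifecycle_years = 3    # typical useful life of a cutting-edge GPU

# The gap between what is being built and what consumers pay for it
spending_gap = capex_2026_2027 - consumer_spend

# If 60 percent of the buildout is GPUs that wear out every three years,
# that portion becomes a recurring replacement cost, not a one-time outlay
gpu_cost = capex_2026_2027 * gpu_share
annual_gpu_replacement = gpu_cost / gpu_lifecycle_years

print(f"Spending gap: ${spending_gap / 1e9:.0f}B")
print(f"GPU portion of the buildout: ${gpu_cost / 1e9:.0f}B")
print(f"Implied annual replacement bill: ${annual_gpu_replacement / 1e9:.0f}B")
```

The short GPU lifecycle is what turns a one-time capital expense into an ongoing cost that revenue has to cover, which is the crux of the bubble argument here.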
Leah Feiger: But in terms of the bubble and in terms of that gulf in expenditures, Zoë, ask me how much I’m spending on AI products right now.
Zoë Schiffer: Literally zero. There’s no way you’re spending anything, right?
Leah Feiger: Zero dollars.
Zoë Schiffer: Yeah. I think that it’s going to be really interesting to watch. I think one point that Derek made that really stuck with me is that a lot of transformational technologies, he mentioned the railroad and fiber-optic cable, have had bubbles that burst and left a lot of wreckage in their wake. And yet, the underlying technology still moved forward, still changed the world. I think we’re in this very interesting period to see how this is going to play out, what’s going to happen, and who’s going to be left standing.
Leah Feiger: Yeah. Everyone knows how great the US railroad system is. We talk about it every day.
Zoë Schiffer: That’s our show for today. We’ll link to all the stories we spoke about in the show notes. Make sure to check out Thursday’s episode of Uncanny Valley, which is about how restrictions on popular US work visas like the H-1B are happening at a moment when China is trying to grow its tech talent workforce. Adriana Tapia and Mark Lyda produced this episode. Amar Lal at Macro Sound mixed this episode. Kate Osborn is our executive producer. Condé Nast’s head of global audio is Chris Bannon. And Katie Drummond is WIRED’s global editorial director.