WIRED Roundup: The New Fake World of OpenAI’s Social Video App


On this episode of Uncanny Valley, we break down some of the week’s best stories, covering everything from Peter Thiel’s obsession with the Antichrist to the launch of OpenAI’s new Sora 2 video app.

Sam Altman, chief executive officer of OpenAI, during a media tour of the Stargate AI data center in Abilene, Texas. Photo-Illustration: WIRED Staff; Kyle Grillot; Getty Images

All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. Learn more.

In today’s episode, Zoë Schiffer is joined by WIRED’s senior culture editor Manisha Krishnan to run through five of the best stories we published this week—from how federal workers are being told to blame Democrats for the government shutdown to Peter Thiel’s ongoing obsession with the Antichrist. Then, Zoë and Manisha break down the news of OpenAI launching a new social app for AI-generated videos.

Mentioned in this episode:
OpenAI Is Preparing to Launch a Social App for AI-Generated Videos by Zoë Schiffer and Louise Matsakis
Federal Workers Are Being Told to Blame Democrats for the Shutdown by Vittoria Elliott
The Real Stakes, and Real Story, of Peter Thiel’s Antichrist Obsession by Laura Bullard
Tesla Is Urging Drowsy Drivers to Use ‘Full Self-Driving.’ That Could Go Very Wrong by Aarian Marshall
Scientists Made Human Eggs From Skin Cells and Used Them to Form Embryos by Emily Mullin

You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Manisha Krishnan on Bluesky at @manishakrishnan. Write to us at uncannyvalley@wired.com.

How to Listen

You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Zoë Schiffer: Welcome to WIRED’s Uncanny Valley. I’m WIRED’s Director of Business and Industry, Zoë Schiffer. Today on the show, we’re bringing you five stories that you need to know about this week. Including our scoop of how OpenAI just launched a social app dedicated completely to AI-generated videos. I’m joined today by our Senior Culture Editor, Manisha Krishnan. Manisha, welcome to Uncanny Valley.

Manisha Krishnan: Hi Zoë.

Zoë Schiffer: Our first story is about the thing that I feel like our whole newsroom is talking about, possibly the whole country is talking about. It’s the government shutdown. As of Wednesday this week, the US government has officially shut down, at least for now, and many federal workers are being furloughed until the government reopens. Our colleague, Vittoria Elliott learned this week that employees at the Small Business Administration or SBA received a template from HR with language for their out-of-office email, and they were advised to blame Democrats.

Manisha Krishnan: Of course they were. How explicit was this template?

Zoë Schiffer: Well, let me just read a little bit of it to you. So it says, “I am out of office for the foreseeable future because Senate Democrats voted to block a clean federal spending bill, leading to our government shutdown that is preventing the US Small Business Administration from serving America’s 36 million small businesses.” So pretty explicit, I guess is what I would say.

Manisha Krishnan: Yeah, if I got that as an out-of-office, I’d be like, I’m not reading all that. That’s very wordy.

Zoë Schiffer: It’s petty, it’s kind of funny, but it’s also dark. It just feels like yet another example of how explicit the government is. There’s no semblance of even trying to unite the country or the two parties anymore.

Manisha Krishnan: I know federal workers are supposed to be boring. We’re not supposed to be hearing these takes from them. And one thing that Tori pointed out in the story was that this actually could violate the Hatch Act. And that is a law that sets limits on the kinds of political activities that government employees can engage in. And so by forcing them to do this out-of-office, it could actually be forcing them to break the law.

Zoë Schiffer: Yeah, and I just think for a few years now, we’ve heard a lot about the “deep state,” this idea that these federal employees are really political and they’re trying to push their political agenda. Obviously when you actually report on this stuff, you talk to people, they’re like, “We’ve worked here throughout many different administrations. Our job is not to weigh in on political issues. Our job is to do this stack of boring paperwork or whatever.” And that seems like it’s starting to change. They’re being asked to almost take a side. So employees at other agencies have received more standard out-of-office directives. But also a lot of these agencies are still using this kind of spicy political language themselves. There are pop-up banners on their websites blaming the Democrats. The Department of Housing and Urban Development has one that says, “The radical left in Congress shut down the government. The agency will be using available resources to help Americans in need.” And the DOJ website has a pretty similar banner.

Manisha Krishnan: Yeah, to see that from the Department of Justice, I think it just kind of shows you where we are right now. Because these messages, they’re insane, but they’re becoming so normalized. And I feel bad for the federal workers. I mean, you would know better than me because you guys did that brilliant feature speaking to federal workers who are impacted by DOGE. They’re losing their jobs, they’re being forced to put out these super politicized messages. It’s just so … I mean, uncomfortable doesn’t even begin to kind of describe it.

Zoë Schiffer: I think one thing that they talk about a lot is just how quickly the norms of government have changed. There was a standard way of operating previously. It didn’t always happen. Like there were people here and there who flouted the norms. But for the most part, it was like if you were a federal worker, you weren’t supposed to be super outspoken. You weren’t supposed to be wildly political in public, necessarily, as part of your job. And those things have changed really fast. And I think even the people who’ve kept their jobs feel like something real has been lost in the process.

Manisha Krishnan: I’m not normally one to cape for decorum and etiquette. But I mean in this situation, I really wish that some sense of normalcy would return.

Zoë Schiffer: Yeah. Our next story is kind of a doozy, and I am specifically very curious to get your take on this. WIRED contributor, Laura Bullard wrote a deep dive on Peter Thiel’s obsession with the Antichrist. So for a few years now, Thiel has been going around giving lectures about his doomsday theory of how we’re all way too concerned with technology. And as a result, we’ve failed to pay attention to the far greater threat, which is the coming of the Antichrist. This figure will allegedly unify humanity under one rule before delivering us to the apocalypse. So I am very curious to hear what you think.

Manisha Krishnan: I mean, I certainly think it’s convenient that he’s trying to be like, oh yeah, don’t worry about technology. The apocalypse is coming, that’s what you should be worried about. It reminds me of that I Think You Should Leave meme, where he’s in the hot dog suit trying to pretend he didn’t cause the car crash. I mean, I guess each of us is free to have our own apocalypse theories. I kind of feel like we might be in it right now. But the difference is that Thiel is one of the most powerful men in the world as the co-founder of PayPal and Palantir and an active Republican Party supporter.

Zoë Schiffer: Yeah. Just to give you a little bit more backstory, so Laura traced Thiel’s belief to his relationship with an Austrian theologian, who in turn was influenced by the writings of the legal theorist Carl Schmitt, who himself was tapped by the Nazis back in the forties to justify Germany’s slip from democracy to dictatorship. Schmitt believed, as Thiel seems to, that the Antichrist’s evil is an attempt to unify the world. And this, although it sounds positive, is actually quite negative in their view. This unity would lead us to be brainwashed and eventually it would be our downfall. Now, the irony of the co-founder of a data and surveillance technology company having these societal control fears has not gone unnoticed. Thiel recently had a conversation with New York Times journalist Ross Douthat, and there was a very awkward moment where this was brought up.

Ross Douthat [Archival audio]: My very specific question for you is that you’re an investor in AI, you’re deeply invested in Palantir, in military technology, in technologies of surveillance, in technologies of warfare and so on. And it just seems to me that when you tell me a story about the Antichrist coming to power and using the fear of technological change to sort of impose order on the world, I feel like that Antichrist would maybe be using the tools that you were building, right? Wouldn’t the Antichrist be like, “Great, we’re not going to have any more technological progress. But I really like what Palantir has done so far.” Right? I mean, isn’t that a concern? Wouldn’t that be the irony of history would be that the man publicly worrying about the Antichrist accidentally hastens his or her arrival?

Peter Thiel [Archival audio]: Look, there are all these different scenario … I obviously don’t think that that’s what I’m doing.

Ross Douthat [Archival audio]: I mean, to be clear, I don’t think that’s what you’re doing either.

Manisha Krishnan: Oh.

Zoë Schiffer: Oh.

Manisha Krishnan: I wish that Ross didn’t chime in at the end there where he was like, “I don’t think that’s what you’re doing.” Just let him be awkward because he was stumbling. He obviously felt very uncomfortable with that. But you have to wonder, who is he, this extremely rich, powerful figure, worried is going to be doing the uniting? Is he just worried that normal people might try to take back some power? And I think that this does raise a really interesting point about our level of media literacy right now, and all the slop that we’re being fed, and how … I mean, I’ve seen it in my own family. My dad, just to be honest with you, the things that he believes are real now. And so, yeah, I think there is absolutely a danger to all of the slop that we’re putting out there, combined with a lack of education.

Zoë Schiffer: Right. Yeah, no, I completely agree. I mean, I had two thoughts. I listened to this podcast when it came out because I have a brain disease that makes me really like this type of content. And one of my takeaways was just, having reported on Elon Musk for years and years and years, I was really struck by how … when you listen to Elon, you can tell that even if you disagree with every single thing that comes out of that man’s mouth, he does seem to believe what he is saying, no matter how nuts it sounds. I did not have that feeling with Peter Thiel. I was like, “I can’t tell if this is a genuine belief.” I also, just on a literal note, was like, “Is this whole thing a metaphor, or are you saying a literal person is going to come?” I just was so confused. And Ross was taking it very literally, very seriously.

Manisha Krishnan: I know I haven’t heard that much Antichrist talk since Catholic school. And even then I was mostly dissociating. Something else that Laura’s reporting brought up was this idea of the counter-figure to the Antichrist, the katechon. It’s supposedly a figure that can hold back the Antichrist and the end times. And when Thiel was asked if Trump was the katechon, he refused to answer. Who would be your personal katechon, Zoë?

Zoë Schiffer: I really struggled with this because I don’t know, that’s a lot to put on one person right now. It’s hard to think about someone who could unite the country, much less the world at this point. I did think of two different things. One was when the Astronomer CEO was caught on a kiss cam at a Coldplay concert, snuggling with his head of HR. I felt like that moment really did bring the country together. Everyone was kind of united and laughing and interested, and there weren’t a lot of different takes. It was just kind of like everyone was on the internet watching this thing play out. The other thing, I don’t actually watch South Park, but I was just listening to a podcast with Wesley Morris last night about it. And it does sound like the South Park audience seems to kind of cross political lines, although the show itself is extremely political.

Manisha Krishnan: Yeah, this season has … I mean, this is the first time I’ve watched in 15 years. But this season has been pretty spot on. It’s almost like they’re reading WIRED and satirizing everything. I feel like if Trump and Eric Adams started a talk show, maybe that could be the katechon.

Zoë Schiffer: I like that.

Manisha Krishnan: Because everyone would find it funny across political lines.

Zoë Schiffer: So maybe South Park will save us after all. Switching gears literally for our next story. Our colleague Aarian Marshall reported that Tesla has been encouraging drowsy drivers to use the Full Self-Driving, or FSD, mode on their cars. Contrary to its name, this feature does not actually allow cars to drive themselves; it just assists drivers with a variety of basic tasks. The manual for the car says that the driver needs to be ready to take over at all times. But drivers report that in-car messages appear to be telling them to do just the opposite. The messages say things like “Drowsiness detected, stay focused with FSD.” Or, “Lane drift detected, let FSD assist you so that you can stay focused.”

Manisha Krishnan: Yeah, that sounds dangerous. It sounds like they’re kind of like, “Hey, you want to take a nap right now? Let FSD kick in.” No, they should be blasting music, blasting the AC, make it like a spin class in there to wake you up. Tesla has made changes to its technology to make it more difficult for inattentive drivers to use FSD. Back in 2021, the company started using in-car driver-monitoring cameras to determine whether drivers were sufficiently paying attention while using FSD.

Zoë Schiffer: It seems at odds with their past efforts to build more safety around their self-driving features. This is like a pretty delicate time for Tesla. For years, the company has faced accusations that its products are allegedly defective in certain ways. This past August, a Florida jury found that the company was partly liable for a 2019 crash that killed a 22-year-old woman. The crash occurred when a Tesla Model S driver was using an older version of the company’s driver-assistance software called Autopilot. At the same time, Elon Musk and the company’s board of directors have put FSD at the center of the automaker’s strategy. So Musk has promised that the feature will transform into a truly autonomous driving system by the end of the year, although that’s looking unlikely. And Elon Musk is generally known for promising pretty aggressive timelines that he then blows past again and again. One more before we go to break. WIRED science reporter Emily Mullin reported this week that scientists made human eggs from skin cells and used them to form embryos. This is a huge deal because it could mean a new way to treat infertility for people who want kids. But to be clear, none of the embryos were actually used to try and establish pregnancy. And it’s unlikely that they would’ve developed much further in the womb. But it’s still a really big deal because it could one day be used as an alternative to IVF.

Manisha Krishnan: I mean, that’s pretty incredible, but how would it work exactly?

Zoë Schiffer: Well, one of the reasons IVF can fail is because of poor egg quality. I feel like you and I probably both know people who’ve been in this scenario, who’ve tried to go through egg retrieval and just couldn’t get viable eggs. Because the quality of eggs declines with age, and it’s obviously a major factor in infertility. But if IVF patients could have a ready supply of eggs generated in a lab from a sample of their skin, it could vastly improve the success of IVF and allow more people to have babies. But this potential new method also raises significant ethical questions about how the technology should be used. In a 2017 editorial in the journal Science Translational Medicine, bioethicists warned that it might raise embryo farming to a scale that is currently unimaginable.

Manisha Krishnan: I mean, I think on one hand, you’re absolutely right, I do have friends who’ve struggled with infertility, and so it is really fascinating to see what new options could be on the table. But you also do have to wonder, we’re in this time right now where, especially among the ultra-wealthy, there seems to be a lot going on in terms of fertility and trying to have the best genes in your children. And I just kind of wonder how this could play into that, how this could be potentially used unethically, and who might be hurt by that or even exploited by that.

Zoë Schiffer: Yeah, I think it’s a really good question. I mean, this is a big thing. Like you mentioned, the Silicon Valley elite, in addition to being kind of obsessed with longevity, are pretty into some of these new methods for selecting embryos or testing them in different ways. I think Sam Altman has actually invested in a startup that is doing something similar. In certain ways, I think all of the embryo selection stuff, there’s a certain degree of it that I totally get. And then when you take it too far, I personally feel like it’s just a misunderstanding of what creates a good life. There’s no guarantee that if you select the embryo that is the chart-topper on all of these specific tests, it’ll necessarily be happier or better off than one that ranks a little lower.

Manisha Krishnan: Wow. That was very deep and philosophical, and I agree with you, though.

Zoë Schiffer: Coming up after the break, OpenAI just launched a social app for AI-generated videos. We’ll talk more about that scoop in a minute. Stay with us. Welcome back to Uncanny Valley. I’m Zoë Schiffer. I’m joined today by Senior Culture Editor, Manisha Krishnan. Let’s dive into our main story. So earlier this week, my colleague Louise Matsakis and I learned that OpenAI was preparing to launch a social app for AI-generated videos. The app has now come out; it actually launched a day after we broke the story. It’s a standalone app that uses the video-generation model Sora 2. I’ve been using the app a little bit since it came out, and it has some pretty familiar features, like a vertical video feed with swipe-to-scroll navigation, and a tab that has a For You recommendation page. So it’s basically TikTok, except everything you’re looking at is generated by AI.

Manisha Krishnan: Okay. First of all, congrats on this scoop. This story is so crazy. I just kind of want to know how you felt using it. Because I saw some of the videos that you were posting in Slack, and honestly I was like, is that Zoë?

Zoë Schiffer: So a couple things. Basically, you open the app and it really encourages you to create content because it doesn’t want people just mindlessly scrolling. So if you press this plus button, it asks you to verify your identity. And you do that by filming a video of yourself, essentially, reading out these numbers on the screen, which allows the app to generate your likeness, like a digital avatar of your face and body, and also your voice. And then it’s pretty easy. You just put in a prompt, though it kind of has the ChatGPT issue, where creating a good prompt is actually harder than you would think.

Manisha Krishnan: Okay. So obviously there’s a novelty to being able to sort of create your own likeness and make it do weird things. But do you think it could sustain your interest in the same way that TikTok does? Because I don’t know. One of the things with social media, which our culture writer Jason Parham was talking about, is that people kind of built their followings by leaning into being authentic and having a personality. And this just seems like such a radical departure from that because you’re not even using your own content.

Zoë Schiffer: Yeah, it’s weird. I mean, it unlocked something, because Meta tried to roll out an AI-generated video app recently, and it’s a complete ghost town. It was a total flop. I actually think that there’s reporting that the company might have rushed the release because they heard that OpenAI was working on the Sora 2 app. But regardless, people weren’t really using it. I think one thing that OpenAI did really well is that it put you and your friends at kind of the center. And there is a real novelty to watching you and your friends in AI-generated, invented scenarios. And you can make it look realistic, you can make you and all your friends like mermaids swimming in the sea. And there’s something funny and interesting about it. That said, when you look at the actual feed right now, at least my feed is mostly OpenAI employees. And the main character of the app is completely OpenAI CEO Sam Altman. He’s in like every other video.

Manisha Krishnan: Yeah, I don’t think anyone really wants a CEO to be a main character. Are there guardrails in place for … I mean, obviously I’m thinking of deepfake porn and just things like that. What’s the deal with that?

Zoë Schiffer: Yeah, so it does have guardrails in place for that. You can’t generate a nude image or video of someone. I think if you even try and prompt explicitly romantic scenarios, it could block that. You also can’t add someone else to a video. I couldn’t create a video with Sam Altman unless he had set his settings to say anyone can generate my likeness. My personal settings are set so that only I can generate my likeness, because I just feel like that’s a really scary genie to let out of the bottle. But you do get a bit of control. Interestingly, copyright seems like it’s a bit of an open question for the app. Reese tried to generate a video of Taylor Swift or Darth Vader, and the app blocks you, saying that it violates the platform’s rules. But then people were able to generate videos with Pikachu in them or other kind of famous copyrighted characters.

Manisha Krishnan: I wonder if they base it on who they think is going to be most litigious.

Zoë Schiffer: I don’t know. I’m so curious how they came up with these rules. But I think we’ll keep reporting on it because it does seem like the copyright concerns are obviously going to be front and center. And also people are just going to be trying to like … for example, I saw some people trying to create videos with different people promoting crypto coins and stuff. That seems like it’s currently … you’re able to at least get around certain filters. So I feel like OpenAI is going to be playing Whack-a-Mole a little bit for the next few weeks as it sees different ways that people are trying to get around the current rules and generate videos that could be problematic.

Manisha Krishnan: I’m curious how it’s going to end up competing with TikTok and also will it change behavior? TikTok has literally changed how people act in public. And so I’m kind of curious your take on what do you think they’re trying to do with this, essentially?

Zoë Schiffer: That’s a really good question. I mean, I feel like a bunch of AI firms are looking at AI entertainment as kind of a future of social media. Interestingly, TikTok has really leaned away from AI generated content. It doesn’t seem like it’s something the platform is trying to invest in significantly at all. And so I think, again, OpenAI took a step forward by saying, oh, yeah, allowing people to put themselves and their friends in these scenarios, in these videos, could be somewhat of an unlock. But it’s very much TBD whether this will actually take off.

Manisha Krishnan: Well, I will be following your reporting. And I want to try it myself, I’m not going to lie. I do.

Zoë Schiffer: No, you should. You should. I know. Okay, Manisha, thank you so much for coming on today.

Manisha Krishnan: Thanks for having me.

Zoë Schiffer: That’s our show for today. We’ll link to all the stories we spoke about in the show notes. Make sure to check out Thursday’s episode of Uncanny Valley, which is about how delivery apps like DoorDash are betting on robots being the future of delivery. Adriana Tapia and Mark Lyda produced this episode, Amar Lal at Macro Sound mixed this episode, Pran Bandi is our New York Studio Engineer, Kate Osborn is our Executive Producer, Condé Nast Head of Global Audio is Chris Bannon and Katie Drummond is WIRED’s Global Editorial Director.
