Why California’s new AI safety law succeeded where SB 1047 failed

California just made history as the first state to require AI safety transparency from the biggest labs in the industry. Governor Newsom signed SB 53 into law this week, mandating that AI giants like OpenAI and Anthropic disclose, and stick to, their safety protocols. The decision is already sparking debate about whether other states will follow suit. 

Adam Billen, vice president of public policy at Encode AI, joined Equity to break down what California’s new AI transparency law actually means — from whistleblower protections to safety incident reporting requirements. He also explains why SB 53 succeeded where SB 1047 failed, what “transparency without liability” looks like in practice, and what’s still on Governor Newsom’s desk, including rules for AI companion chatbots.

Equity is TechCrunch’s flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday.   

Subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod. 

Theresa Loconsolo is an audio producer at TechCrunch focusing on Equity, the network’s flagship podcast. Before joining TechCrunch in 2022, she was one of two producers at a four-station conglomerate, where she wrote, recorded, voiced, and edited content, and engineered live performances and interviews with guests like lovelytheband. Theresa is based in New Jersey and holds a bachelor’s degree in Communication from Monmouth University.

You can contact or verify outreach from Theresa by emailing theresa.loconsolo@techcrunch.com.
