How to Build an AI Startup: Go Big, Be Strange, Embrace Probable Doom

Earth, it’s said, is home to more than 10,000 AI startups. They’re more abundant than cheetahs. They outnumber dawn redwoods. The figure is a guess, of course—startups come, startups go. But last year, more than 2,000 of them got their first round of funding. As investors shovel their billions into AI, it’s worth asking: What are all these creatures of the boom doing?

I decided to approach as many recent AI founders as I could. The goal was not to try to pick winners but to see what it’s like, on the ground, to build AI products—how AI tools have changed the nature of their work; how terrifying it is to compete in a crowded field. It all sounded a bit like trying to tap-dance on the roiling surface of the sun. OpenAI rolls out an update, and a flurry of posts on X forecast the slaughter of a hundred startups. Brutal!

Is this a revolution that ends with so many engineers’ singed feet? For sure—they can’t all survive. A startup is an experiment, and most experiments fail. But run thousands of them across the economic landscape and you might just learn what the near future holds.

Navvye Anand is the cofounder of a company called Bindwell. When we got on a video call, he spoke with a half-smile and vaguely posh manner as he told me how he’s developing pesticides using custom AI models. Bindwell’s website once described these models as “insanely fast” and claimed that they could predict, in “mere seconds,” the results of experiments that would have taken days. As Anand explained how he’s bringing the principles of AI drug discovery to crops, it was easy to forget that he’s 19.

Anand grew up in India reading Hacker News with his dad and was building his own large language models halfway through high school. Before he graduated, he, his cofounder (now 18), and two other friends from summer camp published a paper on bioRxiv, about an LLM they’d built to predict a facet of protein behavior. It got scientists buzzing on X. The paper was cited in a well-respected journal. They decided they should try to build a startup, brainstormed, and settled on protein-based pesticides. Then, the fairy tale continues, a wood sprite (sorry, venture capitalist) got in touch on LinkedIn and offered them $750,000 to drop out of high school and college and work on the company full-time. They accepted and got started. The teens knew next to nothing about agribusiness. That was last December.

Five months later, Anand and his cofounder opened their first biological testing lab in the San Francisco Bay Area, then moved to another, where they personally squeeze drops of promising molecules into tiny vials. (A protein-based compound can more precisely target a locust or aphid, goes the theory, and not also take out the humans, earthworms, bees.) I asked him how he’d picked up the skills to work in a wet lab. “I hired a friend,” he said cheerfully. The friend coached him over the summer before heading back to college in the fall. “Now I can do some biochemical assays,” Anand says. “Not like a whole range of assays, but basic, wet-lab validation of our models.”

Huh, I thought. That a few teens had in a handful of months built their own LLMs, learned the biochemistry of pest control, used their models to identify potential molecules, and were now pipetting away in their own lab seemed not shabby. In truth, once I’d tallied up all that they’d done, it struck me as completely absurd. I had expected to hear that AI tools are speeding up parts of building a company, but I had only a vague sense of the scale of their impact. So in my next interview, with the cofounders of a 14-month-old startup called Roundabout Technologies, I got straight to that: Break down what’s changed and by how much.

“We’ve shipped an insane amount of stuff,” Collin Barnwell tells me. His company of four (two of them started last April) is making a real-time vision system for traffic lights, to improve how reds and greens are timed. Considering what must be billions of human hours spent rotting at reds on empty roadways, the case for an AI revolution at intersections seems clear. Barnwell rattles off a list of what they’ve accomplished since April, while jumping from one AI assistant to the next—training vision networks on their own data, using LLMs to deeply research cities, writing the software for their GPU, cranking out various dashboards, engineering the hardware components. “You really feel like these tools are taking you to the bleeding edge,” he says. He considers himself a middling coder and seems delighted by what he’s been able to build. “I’d say we’re at Collin AGI. We’re not quite yet at Sabeek AGI.”

Sabeek Pradhan is his cofounder. “What would have taken a few weeks to build turns into five minutes of waiting for a model to run,” Pradhan tells me. The most time-consuming part of their work, by far, was finding their first human customer. In July, as the company approached its first birthday, the pair worked with the city of San Anselmo, north of San Francisco, to install their system at its busiest intersection. In October they went live at a crossing one street over, with 11 more traffic lights planned.

Almost every founder I spoke with said something similar: A week or two of coding work now fits in a day. One guy told me that building software has become so simple that it’s “not even fun.” Another told me that when the internet goes down, it’s “excruciating” and “really painful” to have to write his own code. The LLM dependence is real. But no one is more yoked to the big models than the people building products fully on top of them: the AI agents crew.

Justin Lee and Linus Talacko were working at a medical startup in Australia when they heard the AI summons. It was time to jump. “We just knew this was the thing that happens once every 20 years,” Lee, who is 21 years old, tells me. The software engineers looked at their lives, saw the monotony in their work routines, and wondered why their repetitive tasks weren’t getting automated. They decided to build an agent, basically an intern bot that they could boss around on Slack. They called their company Den.

Lee and Talacko, who is 22, quickly hit a wall. They’d built their agent to be chatty, but people seemed impatient with the dialogue. Instead, Lee says, users seemed to want to treat it “like Python scripts or workflows in Zapier”—less like a back-and-forth conversation with a coworker and more like a button to click. They ditched what they’d built and tried again. The challenge, Lee explains, is that no one knows how people want to interact with AI, so engineers are basically guessing and crossing their fingers. “This is one of the most fast-moving and dynamic times in terms of rate of change in, like, history,” he continues. “Things are being overwritten every month.” As Talacko puts it, the only way to survive is to be eternally flexible. Be ready to toss out your entire code base and start over. “I’m just constantly overriding my assumptions about how the world’s going to look in two weeks,” Lee says. “Because most of the time I’m undershooting.”

One quick thing Talacko built was an agent that monitored when someone registered interest on their website and then googled to see if they were notable. “That’s how we noticed that Ivan Zhao of Notion had signed up to use our product,” Lee says, name-dropping the CEO of a popular productivity tool. “The agent was kind of acting like a sentry, patrolling our database. That’s obviously something you’d never hire a human to do.”
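The sentry Talacko describes can be sketched in a few lines. This is a hypothetical reconstruction, not Den's actual code: the "google them" step is mocked with a static set here, where a real agent would call a search API or an LLM.

```python
# Hypothetical sketch of a signup "sentry" agent: watch new signups and
# flag any that look notable. The notability lookup is mocked with a
# static set; a real agent would query a search API or an LLM instead.

NOTABLE_PEOPLE = {"ivan zhao"}  # stand-in for a real web lookup


def is_notable(name: str) -> bool:
    """Mocked 'google them' step; replace with a real search or LLM call."""
    return name.strip().lower() in NOTABLE_PEOPLE


def patrol(signups: list[dict]) -> list[str]:
    """Scan new signups and return the names worth alerting a human about."""
    return [s["name"] for s in signups if is_notable(s["name"])]


if __name__ == "__main__":
    new_signups = [
        {"name": "Ivan Zhao", "email": "ivan@example.com"},
        {"name": "Jane Doe", "email": "jane@example.com"},
    ]
    print(patrol(new_signups))  # → ['Ivan Zhao']
```

The point of the design is the last line of Lee's quote: the loop runs constantly and cheaply, which is exactly why you'd never staff it with a person.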

Right, the humans. In this case, it’s not that a job is getting eliminated or not created. It’s more that an AI-powered person can stretch. As Barnwell from the traffic lights company put it, he’s writing code he wouldn’t have otherwise bothered to write. Two years ago, back when coding was arduous, he’d spent most of his time in the weeds with functions and algorithms. Now he can devote more of himself to research, exploration, and the task generally known as thinking. “You’re building disposable software all the time to help you accomplish stuff,” he says.

Code isn’t precious anymore, and that disposability can feel destabilizing. Lee told me how a year ago they spent three months learning and suffering to meticulously code up an agent, and that now all their work “could probably be written in three days, without our touch.” Lee confessed to feeling a bit bleak and found himself wondering what’s even worth learning these days.

“What is insane about today,” Talacko adds, “is if a customer emails you with a bug or a feature request, you can literally copy and paste what they wrote into Claude Code, and you’ll have a feature in 15 minutes.”

“So,” I asked, “is your brain, just, not involved?”

He let out a huff and paused to think. “I guess, like, the judgment of whether you should build that feature is what’s most important. It’s far less about being able to build stuff and put slop out into the world and more about being tactical and strategic with what you are choosing to build.”

When building systems becomes this easy, you can run countless experiments. Wander down dark alleys. The penalty of trying a different approach has dropped so low that a single startup can parallel-universe itself into any number of versions before collapsing into one smart, sensible form. When, as some like to say, you can write a startup’s worth of code in a weekend, the work of building a tech company is no longer coding, exactly. So what, then, is the work?

“The consequence,” Lee told me, “is that taste becomes the most important thing. Everything else is an expression of or an implementation of that taste.”

I hesitated, feeling an impossible question take over my mind. Then I blurted it out: How do they define taste? I anticipated an uncomfortable silence.

Lee cracked a smile and said, “Actually we have a whole group chat, to define what taste is.”

We spent a few minutes discussing, but a crisp answer eluded us. After the call ended, an email from Lee appeared in my inbox. It contained a set of links to his favorite readings on taste. Some were posts on X. There was a smart write-up on LinkedIn. But his favorite—and mine, easily—was an essay by the godfather of startups, Paul Graham.

Chances are you know who Graham is. Cofounder of Y Combinator. Prolific poster on X and writer of blogs. He also made a cameo in my interview with the pesticide kid, Navvye Anand. (In February, Graham posted on X that he didn’t approve of VCs enticing teens to drop out of school. As Anand recalls it, Graham had met him and his cofounder—by then both dropouts—over tea that day. Graham had relayed his disfavor and then, plot twist, also decided to fund them.)

A casual reader of the Graham oeuvre may have come across “Founder Mode” (2024) or “Do Things That Don’t Scale” (2013). You have to go pretty deep into the canon before you reach “Taste for Makers” (2002).

Taste, Graham writes, is the ability to make beautiful things. Simple as that. A person with taste has both the experience to recognize what’s good and the technical skill to design such things. So what is good design? It is, among other things, simple, timeless, and daring. Then Graham himself gets more daring: Good design is also slightly funny. Good design is often strange.

I thought about those words as I spoke to founder after founder. Several teams of engineers were building home robots, which are having a moment—again because they can now make headway with a lot less money and fewer people than before (plus cheap parts from China). One of those robots, Weave Robotics’ Isaac, is something of a floor lamp on wheels, with chonky arms that end in crab claws. With a dainty basket in one claw, it can wheel from room to room collecting cups and toys, every bit the 18th-century villager picking vegetables at the market. In San Francisco, an Isaac is stationed in a room full of washing machines, where it folds clothes for a laundry startup.

A more imposing home-based helper, K-bot, has a familiar humanoid form but is all black, sleek, masculine. When it drops a slice of bread into a toaster, there’s a whiff of menace to the action—a big hunk of metal has no business making breakfast. Perhaps you’re jaded from a decade of demos that went nowhere, and this sounds normal. But clear the crust from your eyes, because it’s still weird to see a robot soldier serving toast. So too the tireless villager with its quaint basket. This, perhaps, is where “funny” and “strange” come in. They add a zing of shock to the everyday. They catapult you to a new place.

Benjamin Bolte is the guy behind the K-bot; his company is called K-Scale Labs. Back when he was casting about for a startup idea, he concluded that the only sensible plan was to go after the hardest problem he could find. “All of the ideas that made sense a couple of years ago now don’t,” Bolte tells me, referring to the business-y software that dominates tech. “You need to push out your own horizons to even have a chance to not get totally swamped by all the other companies doing these things.” That’s how he landed on cheapish, open source humanoids; imagine marshaling six of them and starting a construction company.

Bolte’s no stranger to an out-there horizon. The true nerds will have figured this out, but for the rest of us: “K-Scale” refers to the Kardashev scale, named after Russian astrophysicist Nikolai Kardashev. In a 1964 paper titled “Transmission of Information by Extraterrestrial Civilizations,” Kardashev proposed a measure of technological advancement, which he reasoned might be helpful in his quest to find aliens. At the bottom of his scale was Type 1, a civilization that can harness all the energy available on its planet. (Some estimates put humans at 0.7.) On X, Bolte’s bio reads, “Moving humanity up the Kardashev scale.” His thinking is that a society-wide network of affordable robots can hoist us to Type 1.

It’s all a bit heady. Or not. Because mere days later I landed in a second conversation about Kardashev. I’d reached out to a company called Starcloud that’s working to put data centers in space (to help run all that AI, naturally). Starcloud started in the summer of 2024 and plans to launch a first GPU into low Earth orbit in November. If data centers are running rivers dry, and annoyed townsfolk already wish to launch them to kingdom come, why not take them up on it? The guy in charge has set his sights on getting humanity to Type 2, defined as harnessing all the energy of a star. No shortage of ambition here.

So, yes, things are moving fast. Precisely how fast? Hard to say. The mind might feel like it’s zooming along, while the clock says something else entirely. As one recent paper out of Cornell found, developers who relied on AI actually ended up being slower than people who wrote code by hand. Time once spent on active coding might now get burned on other tasks. And—it must be said—the odds are long that any of these startups survive to see the light of 2027, no matter how much AI might be helping them through their tasks.

Still, there’s something exponential-feeling about the world these days. Even the more earthbound ambitions of some startups can sound intense in the aggregate—sending AI into Rust Belt factories, into independent grocery stores, into county governments. AI is often called Promethean for its mix of danger and power. So it makes sense that a few driven optimists would aim for the great ball of fire itself, just to see what happens next.
