Vibe Coding Is the New Open Source—in the Worst Way Possible

Just like you probably don’t grow and grind wheat to make flour for your bread, most software developers don’t write every line of code in a new project from scratch. Doing so would be extremely slow and could create more security issues than it solves. So developers draw on existing libraries—often open source projects—to get various basic software components in place.

While this approach is efficient, it can create exposure and reduce visibility into software. Increasingly, vibe coding is being used in a similar way, allowing developers to quickly spin up AI-generated code that they can adapt rather than write from scratch. Security researchers warn, though, that this new genre of plug-and-play code is making software-supply-chain security even more complicated, and more dangerous.

“We’re hitting the point right now where AI is about to lose its grace period on security,” says Alex Zenla, chief technology officer of the cloud security firm Edera. “And AI is its own worst enemy in terms of generating code that’s insecure. If AI is being trained in part on old, vulnerable, or low-quality software that’s available out there, then all the vulnerabilities that have existed can reoccur and be introduced again, not to mention new issues.”

In addition to sucking up potentially insecure training data, the reality of vibe coding is that it produces a rough draft of code that may not fully account for the specific context and considerations around a given product or service. In other words, even if a company trains a local model on a project’s source code and a natural language description of its goals, the production process still relies on human reviewers to spot any and every flaw or incongruity in code originally generated by AI.

“Engineering groups need to think about the development lifecycle in the era of vibe coding,” says Eran Kinsbruner, a researcher at the application security firm Checkmarx. “If you ask the exact same LLM model to write for your specific source code, every single time it will have a slightly different output. One developer within the team will generate one output and the other developer is going to get a different output. So that introduces an additional complication beyond open source.”
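Kinsbruner’s point is easy to demonstrate: with a nonzero sampling temperature, the same prompt can produce different code on every call. Below is a minimal sketch assuming the OpenAI Python SDK; the model name and the prompt are placeholders, not details from the article.

```python
# Sketch: the same prompt, sent twice, will usually return different code.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
import hashlib
from openai import OpenAI

client = OpenAI()
PROMPT = "Write a Python function that validates a user-supplied file path."

def generate() -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",   # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # nonzero temperature means sampled, nondeterministic output
    )
    return resp.choices[0].message.content

a, b = generate(), generate()
# Identical prompts, yet the digests will usually differ:
print(hashlib.sha256(a.encode()).hexdigest())
print(hashlib.sha256(b.encode()).hexdigest())
```

Even at a temperature of 0, most hosted models don’t guarantee bit-identical output across calls or model versions, which is why two developers on the same team can end up checking in materially different solutions to the same ticket.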

In a Checkmarx survey of thousands of chief information security officers, application security managers, and heads of development, published in August, a third of respondents said that more than 60 percent of their organization’s code was generated by AI in 2024. Yet only 18 percent said that their organization has a list of approved tools for vibe coding. The firm also emphasized that AI development is making it harder to trace “ownership” of code.

Open source projects can be inherently insecure, outdated, or at risk of malicious takeover. And they can be incorporated into codebases without adequate transparency or documentation. But researchers point out that some of the fundamental backstops and accountability mechanisms that have long existed in open source are missing from, or severely fragmented by, AI-driven development.

“AI code is not very transparent,” says Dan Fernandez, Edera’s head of AI products. “In repositories like GitHub you can at least see things like pull requests and commit messages to understand who did what to the code, and there’s a way to trace back who contributed. But with AI code, there isn’t that same accountability of what went into it and whether it’s been audited by a human. And lines of code coming from a human could be part of the problem as well.”
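The audit trail Fernandez describes is straightforward to query in a Git repository. Here is a minimal sketch, assuming git is installed and the script runs inside a local checkout; the file path is a placeholder.

```python
# Sketch: reconstructing who changed a file, and when, from Git history.
# Assumes git is installed and this runs inside a repository; the file
# path below is a placeholder. Code pasted in from an AI chat window
# carries no equivalent provenance record.
import subprocess

def provenance(path: str) -> str:
    # %h = short commit hash, %an = author, %ad = date, %s = commit subject
    result = subprocess.run(
        ["git", "log", "--follow", "--format=%h  %an  %ad  %s", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(provenance("src/auth/session.py"))  # placeholder path
```

Per-line attribution is available the same way through git blame, but as Fernandez notes, that metadata only records who committed the code, not whether a model wrote it or whether a human ever audited it.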

Edera’s Zenla also points out that while vibe coding may seem like a low-cost way for low-resource groups, like small businesses or vulnerable populations, to create bare-bones applications and tools that might not otherwise exist, that ease of use carries the danger of creating security exposure in exactly these at-risk and sensitive situations.

“There’s a whole lot of talk about using AI to help vulnerable populations, because it uses less effort to get to something usable,” Zenla says. “And I think these tools can help people in need, but I also think that the security implications of vibe coding will disproportionately impact people who can least afford it.”

Even in enterprise settings, where financial risk largely falls to the company, the personal fallout of a widespread vulnerability introduced through vibe coding should weigh heavily.

“The fact is that AI-generated material is already starting to exist in code bases,” says Jake Williams, a former NSA hacker and current vice president of research and development at Hunter Strategy. “We can learn from advances in open source software-supply-chain security—or we just won’t, and it will suck.”
