In the summer of 1956, a group of academics—now we’d call them computer scientists, but there was no such thing then—met on the Dartmouth College campus in New Hampshire to discuss how to make machines think like humans. One of them, John McCarthy, coined the term “artificial intelligence.” This legendary meeting and the naming of a new field are well known.
In this century, a variation of the term has stepped to the forefront: artificial general intelligence, or AGI—the stage at which computers can match or surpass human intelligence. AGI was the driver of this week’s headlines: a deal between OpenAI and Microsoft that hinged on what happens if OpenAI achieves it; massive capital expenditures from Meta, Google, and Microsoft to pursue it; the thirst to achieve it helping Nvidia become a $5 trillion company. US politicians have said that if we don’t get it before China does, we’re cooked. Prognosticators say we might get it before the decade is out, and that it will change everything.

The origin of the term, however, and how it was originally defined, are not nearly as well known. But there is a clear answer. The person who first came up with the most important acronym of the 21st century so far—as well as a definition that is still pretty much the way we think of it today—is unfamiliar to just about everybody. This is his story.
Nano Nerd
In 1997, Mark Gubrud was obsessed with nanotechnology and its perils. He was a fanboy of Eric Drexler, who popularized the science of the very, very small. Gubrud began attending nanotech conferences. His particular concern was how that technology, and other cutting-edge science, could be developed into dangerous weapons of war. “I was a grad student sitting in the sub-sub basement at the University of Maryland, listening to a huge sump pump come on and off very loudly, right behind my desk, and reading everything that I could,” he tells me on a Zoom call from the porch of a cabin in Colorado.
That same year, Gubrud submitted and presented a paper, “Nanotechnology and International Security,” at the Fifth Foresight Conference on Molecular Nanotechnology. He argued that breakthrough technologies would redefine international conflicts, making them potentially more catastrophic than nuclear war, and he urged nations to “give up the warrior tradition.” The new sciences he discussed included nanotechnology, of course, but also advanced AI—which he referred to as, yep, “artificial general intelligence.” It seems that no one had previously employed that phrase. Later in the paper he defined it:
“By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.”
Drop the last clause and you have the definition of AGI that most people use today.
“I needed a word to distinguish the AI that I was talking about from the AI that people knew at the time, which was expert systems, and it was pretty clear that was not going to be the kind of general intelligence they were,” he explains. The paper wasn’t circulated widely, and its impact was minimal.
Real AI
Fast forward to the early 2000s, a time when AI Winter still chilled the field. Some perceptive researchers sensed a thaw. In 1999, Ray Kurzweil predicted in his book The Age of Spiritual Machines that AI would be able to match human cognition by around 2030. This struck a chord with computer scientist Ben Goertzel, who began working with like-minded collaborator Cassio Pennachin to edit a book on approaches to AI that could be deployed for wide use, as opposed to using machine learning to address specific and bounded domains, like playing chess or coming up with medical diagnoses.
Kurzweil had referred to this more sweeping technology as “strong AI,” but that seemed fuzzy. Goertzel toyed with calling it “real AI,” or maybe “synthetic intelligence.” Neither alternative enchanted the book’s contributors, so he invited them to bat around other ideas. The thread included future AI influencers like Shane Legg, Pei Wang, and Eliezer Yudkowsky (yep, the guy who would become the doomer-in-chief).
Legg, who then had a master’s degree and had worked with Goertzel, came up with the idea of adding the word “general” to AI. As he puts it now, “I said in an email, ‘Ben, don’t call it real AI—that’s a big screw you to the whole field. If you want to write about machines that have general intelligence, rather than specific things, maybe we should call it artificial general intelligence, or AGI. It kind of rolls off the tongue.’” Goertzel recalls that Wang proposed a different word order: general artificial intelligence. Goertzel noted that, pronounced out loud, the acronym GAI might carry an unintended connotation. “Not that there’s anything wrong with that,” he quickly adds. They stuck with Legg’s AGI.
Wang, who now teaches at Temple University, says he only vaguely remembers the discussion, though he allows that he might have suggested some alternatives. More importantly, he tells me that what those contributors dubbed AGI circa 2002 is “basically the original AI.” The Dartmouth founders envisioned machines that would express intelligence with the same breadth as humans. “We needed a new label because the only one had changed its common usage,” he says.
The die was cast. “We all started using it in some online forums, this phrase AGI,” says Legg. (He didn’t always use it: “I never actually mentioned AGI in my PhD thesis, because I thought it would be too controversial,” he says.) Goertzel’s book, Artificial General Intelligence, didn’t come out until mid-decade, but by then the term was taking off, with a journal and a conference bearing its name.
Gubrud did eventually get credit for naming AGI. In the mid-2000s, he called his earlier coinage to the attention of those popularizing the term. As Legg puts it, “Somebody pops up out of the woodwork and says, ‘Oh, I came up with the term in ‘97,’ and we’re like, ‘Who the hell are you?’ And then sure enough, we looked it up, and he had a paper that had it. So [instead of inventing it] I kind of reinvented the term.” (Legg, of course, is the cofounder and chief AGI scientist at Google’s DeepMind.)
Gubrud attended the second AGI conference in 2006 and met Goertzel briefly. He never met Legg, though over the years he occasionally interacted with him online, always in a friendly manner. Gubrud understands that his own lack of follow-up edged him out of the picture.
“I will accept the credit for the first citation and give them credit for a lot of other work that I didn’t do, and maybe should have—but that wasn’t my focus,” he says. “My concern was the arms race. The whole point of writing that paper was to warn about that.” Gubrud hasn’t been prolific since then—his career has been peripatetic, and he now spends a lot of time caring for his mother—but he has authored a number of papers arguing for a ban on autonomous killer robots and the like.
Gubrud can’t ignore the dissonance between his status and that of the lords of AGI. “It’s taking over the world, worth literally trillions of dollars,” he says. “And I am a 66-year-old with a worthless PhD and no name and no money and no job.” But Gubrud does have a legacy. He gave a name to AGI. His definition still stands. And his warnings about its dangers are still worth listening to.
This is an edition of Steven Levy’s Backchannel newsletter.