What is the future of the like button in the age of artificial intelligence? Max Levchin, the PayPal cofounder and Affirm CEO, sees a new and hugely valuable role for liking data: training AI to arrive at conclusions more in line with those a human decision-maker would reach.
It’s a well-known quandary in machine learning that a computer presented with a clear reward function will engage in relentless reinforcement learning to improve its performance and maximize that reward, but that this optimization path often leads AI systems to outcomes very different from those humans would reach by exercising their own judgment.
To introduce a corrective force, AI developers frequently use what is called reinforcement learning from human feedback (RLHF). Essentially, they put a human thumb on the scale by training the model on data reflecting real people’s actual preferences. But where does that human preference data come from, and how much of it is needed for the input to be valid? That, so far, has been the problem with RLHF: it’s a costly method when it requires hiring human supervisors and annotators to enter feedback.
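To make the mechanism concrete, here is a minimal sketch, in PyTorch, of the first step of RLHF: fitting a reward model to pairs of items where a human preferred one over the other. Everything below (the network, the dimensions, the data) is an illustrative stand-in, not any lab’s actual pipeline.

```python
# Minimal sketch of RLHF step 1: training a reward model on human
# preference pairs. All data and dimensions here are hypothetical.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a piece of content; higher = more human-preferred."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one scalar reward per item

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each training pair: (embedding of the content a human preferred,
# embedding of the content they passed over). Random stand-ins here.
preferred = torch.randn(256, 128)
rejected = torch.randn(256, 128)

for _ in range(100):
    # Bradley-Terry loss: push the preferred item's reward above
    # the rejected item's reward.
    loss = -torch.nn.functional.logsigmoid(
        model(preferred) - model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A like given to one post but withheld from a comparable one is, in effect, exactly such a preference pair, which is why a vast archive of likes looks so valuable as training input.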
And this is the problem that Levchin thinks could be solved by the like button. He views the accumulated resource that today sits in Facebook’s hands as a godsend to any developer wanting to train an intelligent agent on human preference data. And how big a deal is that? “I would argue that one of the most valuable things Facebook owns is that mountain of liking data,” Levchin told us. Indeed, at this inflection point in the development of artificial intelligence, having access to “what content is liked by humans, to use for training of AI models, is probably one of the singularly most valuable things on the internet.”
While Levchin envisions AI learning from human preferences through the like button, AI is already changing the way these preferences are shaped in the first place. In fact, social media platforms are actively using AI not just to analyze likes, but to predict them—potentially rendering the button itself obsolete.
This was a striking observation for us because most of the people we talked to made predictions from the other direction, describing not how the like button would affect the performance of AI but how AI would change the world of the like button. Already, we heard, AI is being applied to improve social media algorithms. Early in 2024, for example, Facebook experimented with using AI to redesign the algorithm that recommends Reels videos to users. Could it come up with a better weighting of variables to predict which video a user would most like to watch next? This early test showed that it could: applying AI to the task paid off in longer watch times, the performance metric Facebook was hoping to boost.
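The underlying idea, stripped down to a toy example, is to learn the weighting of engagement signals from data rather than hand-tune it. The features and numbers below are invented for illustration; Facebook’s actual system is far more complex and not public.

```python
# Toy sketch of re-weighting engagement signals to predict watch time.
# Feature names and data are invented; real recommender systems are
# vastly more complex and proprietary.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-video signals: [past likes, shares, creator follows,
# similarity to recently watched videos]
X = rng.random((1000, 4))
true_weights = np.array([0.5, 0.2, 0.1, 0.9])  # unknown in practice
watch_time = X @ true_weights + rng.normal(0, 0.05, 1000)

# Learn the weighting from logged behavior (least squares)
# instead of hand-tuning it.
learned, *_ = np.linalg.lstsq(X, watch_time, rcond=None)
print(learned)  # ~[0.5, 0.2, 0.1, 0.9]: recovered weight of each signal
```

The point of the sketch is only that, given enough logged behavior, the relative importance of each signal can be recovered automatically, which is precisely the kind of tuning Facebook was testing.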
When we asked YouTube cofounder Steve Chen what the future holds for the like button, he said, “I sometimes wonder whether the like button will be needed when AI is sophisticated enough to tell the algorithm with 100 percent accuracy what you want to watch next based on the viewing and sharing patterns themselves. Up until now, the like button has been the simplest way for content platforms to do that, but the end goal is to make it as easy and accurate as possible with whatever data is available.”
He went on to point out, however, that one reason the like button may always be needed is to handle sharp or temporary changes in viewing needs because of life events or situations. “There are days when I wanna be watching content that’s a little bit more relevant to, say, my kids,” he said. Chen also explained that the like button may have longevity because of its role in attracting advertisers—the other key group alongside the viewers and creators—because the like acts as the simplest possible hinge to connect those three groups. With one tap, a viewer simultaneously conveys appreciation and feedback directly to the content provider and evidence of engagement and preference to the advertiser.
Another major impact of AI will be its increasing use to generate the very content that people respond to emotionally. Already, a growing share of the content being liked by social media users, both text and images, is AI-generated. One wonders whether the original purpose of the like button, to motivate more users to generate content, will even remain relevant. Would the platforms be just as successful on their own terms if their human users ceased to make posts at all?
This question, of course, raises the problem of authenticity. During the 2024 Super Bowl halftime show, singer Alicia Keys hit a sour note that was noticed by every attentive listener tuned in to the live event. Yet when the recording of her performance was uploaded to YouTube shortly afterward, that flub had been seamlessly corrected, with no notification that the video had been altered. It’s a minor thing (and good for Keys for doing the performance live in the first place), but the sneaky correction raised eyebrows nonetheless. Ironically, she was singing “If I Ain’t Got You”—and her fans ended up getting something slightly different from her.
If AI can subtly refine entertainment content, it can also be weaponized for more deceptive purposes. The same technology that can fix a musical note can just as easily clone a voice, leading to far more serious consequences.
More chilling is the trend that the US Federal Communications Commission (FCC) and its equivalents elsewhere have recently cracked down on: uses of AI to “clone” an individual’s voice and effectively put words in their mouth. It sounds like that person speaking, but it may not be: it could be an impostor trying to trick the person’s grandfather into paying a ransom or trying to conduct a financial transaction in their name. After robocalls spoofing President Joe Biden’s voice circulated in January 2024, the FCC issued clear guidance that such impersonation is illegal under the provisions of the Telephone Consumer Protection Act, and it warned consumers to be careful.
“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” said FCC chair Jessica Rosenworcel. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”
Short of fraudulent pretense like this, an AI-filled future of social media might well be populated by seemingly real people who are purely computer-generated. Such virtual concoctions are infiltrating the community of online influencers and gaining legions of fans on social media platforms. “Aitana Lopez,” for example, regularly posts glimpses of her enviable life as a beautiful Spanish musician and fashionista. When we last checked, her Instagram account was up to 310,000 followers, and she was shilling for hair-care and clothing brands, including Victoria’s Secret, at a cost of some $1,000 per post. But someone else must be spending her hard-earned money, because Aitana doesn’t really need clothes or food or a place to live. She is the programmed creation of an ad agency—one that started out connecting brands with real human influencers but found that the humans were not always so easy to manage.
With AI-driven influencers and bots engaging with each other at unprecedented speed, the very fabric of online engagement may be shifting. If likes are no longer coming from real people, and content is no longer created by them, what does that mean for the future of the like economy?
In a scenario that not only echoes but goes beyond the premise of the 2013 film Her, you can also now buy a subscription that enables you to chat to your heart’s content with an on-screen “girlfriend.” CarynAI is an AI clone of a real-life online influencer, Caryn Marjorie, who had already gained over a million followers on Snapchat when she decided to team up with an AI company and develop a chatbot. Those who would like to engage in one-to-one conversation with the virtual Caryn pay a dollar per minute, and the chatbot’s conversation is generated by OpenAI’s GPT-4 software, trained on an archive of content Marjorie had previously published on YouTube.
We can imagine a scenario in which a large proportion of likes are not awarded to human-created content, and not granted by actual people, either. We could have a digital world overrun by synthesized creators and consumers interacting with each other at lightning speed. Surely if this comes to pass, even in part, there will be new problems to solve, relating to our need to know who really is who (or what) and whether a seemingly popular post is really worth checking out.
Do we want a future in which our true likes (and everyone else’s) are more transparent and unconcealable? Or do we want to retain (for ourselves but also for others) the ability to dissemble? It seems plausible that we will see new tools developed to provide more transparency and assurance as to whether a like is attached to a real person or just a realistic bot. Different platforms might apply such tools to different degrees.
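What might such a tool look like? Purely as a hypothetical sketch (the signals, thresholds, and names below are invented for illustration, not any platform’s actual method), a platform could score each like using simple account-level signals and surface the result alongside the raw count:

```python
# Hypothetical sketch of a "like authenticity" score. The signals and
# thresholds are illustrative inventions, not any platform's method.
from dataclasses import dataclass

@dataclass
class LikerProfile:
    account_age_days: int
    likes_per_day: float
    is_verified_human: bool  # e.g., passed a proof-of-personhood check

def authenticity_score(p: LikerProfile) -> float:
    """Return 0.0 (likely bot) to 1.0 (likely human). Illustrative only."""
    score = 0.5
    if p.is_verified_human:
        score += 0.4
    if p.account_age_days > 365:
        score += 0.1
    if p.likes_per_day > 500:  # inhumanly fast liking
        score -= 0.5
    return max(0.0, min(1.0, score))

likes = [LikerProfile(800, 12.0, True), LikerProfile(3, 2000.0, False)]
human_likes = sum(authenticity_score(p) > 0.5 for p in likes)
print(f"{human_likes} of {len(likes)} likes appear human")
```

However such scoring were implemented, the harder questions would be social ones: who runs the check, who sees the result, and how much transparency users actually want.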
Excerpt adapted from Like: The Button That Changed the World by Martin Reeves and Bob Goodson. Published by arrangement with HBR Press. Copyright © 2025 by Martin Reeves and Bob Goodson.