More than 200 contractors who worked on evaluating and improving Google’s AI products were laid off without warning last month, in at least two rounds of cuts. The move comes amid an ongoing fight over pay and working conditions, according to workers who spoke to WIRED.
In the past few years, Google has outsourced its AI rating work—which includes evaluating, editing, or rewriting the Gemini chatbot’s responses to make them sound more human and “intelligent”—to thousands of contractors employed by Hitachi-owned GlobalLogic and other outsourcing companies. Most raters working at GlobalLogic are based in the US and deal with English-language content. Just as content moderators help purge and classify content on social media, these workers use their expertise, skill, and judgment to teach chatbots and other AI products—including AI Overviews, Google’s search summaries feature—the right responses on a wide range of subjects. Workers allege that the latest cuts come amid attempts to quash their protests over issues including pay and job insecurity.
These workers are often hired for their specialist knowledge: joining the super rater program required either a master’s degree or a PhD, and the ranks typically include writers, teachers, and people from creative fields.
“I was just cut off,” says Andrew Lauzon, who received an email with the news of his termination on August 15. “I asked for a reason, and they said ramp-down on the project—whatever that means.” He had joined GlobalLogic in March 2024, and his work ranged from rating AI outputs to devising a variety of prompts to feed into the model.
Lauzon says the move shows the precarity of such AI rating jobs. He alleges that GlobalLogic began regularly laying off its workers this year. “How are we supposed to feel secure in this employment when we know that we could go at any moment?” he says.
Workers still at the company claim they are increasingly concerned that they are being set up to replace themselves. According to internal documents viewed by WIRED, GlobalLogic appears to be using these human raters to train a Google AI system that could rate responses automatically, with the apparent aim of eventually replacing them.
At the same time, the company is finding ways to shed current employees even as it continues to hire new workers. In July, GlobalLogic made it mandatory for its workers in Austin, Texas, to return to the office, according to a notice seen by WIRED. The policy has directly impacted several workers who either cannot afford the commute or cannot work on-site because of disabilities or caregiving responsibilities.
Despite handling work they describe as skilled and high-stakes, eight workers who spoke to WIRED say they are underpaid and suffer from a lack of job security and unfavorable working conditions. These alleged conditions have hurt worker morale and made it harder for people to do their jobs well, sources say. Some contractors attempted to unionize earlier this year but claim those efforts were quashed. Now they allege that the company has retaliated against them. Two workers have filed a complaint with the National Labor Relations Board alleging they were unfairly fired: one for raising wage transparency issues, the other for advocating for himself and his coworkers.
“These individuals are employees of GlobalLogic or their subcontractors, not Alphabet,” Courtenay Mencini, a Google spokesperson, said in a statement. “As the employers, GlobalLogic and their subcontractors are responsible for the employment and working conditions of their employees. We take our supplier relations seriously and audit the companies we work with against our Supplier Code of Conduct.” GlobalLogic declined to comment.
For a decade, software company GlobalLogic had a team of “generalist raters” who would help rate Google’s search results. In the spring of 2023, Google asked GlobalLogic to assemble a team of “super raters” to evaluate its AI products, starting with AI Overviews.
Ricardo Levario, a teacher from Texas, was hired in the first batch of super raters. Back then, he worked on Google’s “search generative experience,” in which search results would display an AI-generated summary—since renamed AI Overviews. His job was to determine how well the model performed and to rewrite its responses to make sure they were grounded and made better use of sources.
“After the success [of this pilot], we learned that Google was interested in growing the program, and they were going to bring on cohorts of 20 people every week,” says Levario, adding that the company eventually hired as many as 2,000 super raters to work on Google’s AI. But problems began when GlobalLogic started using third-party contractors to ramp up hiring, Levario claims: while GlobalLogic’s super raters were paid $28 to $32 an hour, super raters brought in via third-party contractors were paid $18 to $22 an hour for the same work.

The company also has a few hundred “generalist raters” for its AI products, who don’t necessarily hold an advanced degree like the super raters. One such generalist rater, Alex, was hired in 2023 to rate the bot’s output against the guidelines provided; the prompts ranged from “fluffier” questions, like asking about the closest restaurant, to ones that were “not as savory.” She says that she hasn’t received a “notable pay increase” despite being pulled into “more demanding” projects. (Alex requested that WIRED identify her by first name only due to privacy concerns.)
“We as raters play an incredibly vital role, because the engineers, between messing with the code and everything, they’re not going to have the time to fine-tune and get the feedback they need for the bot,” says Alex. “We’re like the lifeguards on the beach—we’re there to make sure nothing bad happens.” Alex eventually secured a full-time position with GlobalLogic, but she alleges that roughly 80 percent of the people on her project remain on contract, without any benefits or paid time off.
At the end of 2023, workers created a WhatsApp group, named Super Secret Secondary Location, with around 80 members, where some of them began discussing ways to organize. In the spring of 2024, some of these workers got together with the Alphabet Workers Union to discuss creating a GlobalLogic chapter so the AI raters could demand better pay and working conditions. “We started building the movement underground,” says Levario. “We were essentially laying down the foundation for our union, developing our systems.” By December 2024, their chapter had 18 members.
Around that time, workplace frustrations were only growing. A few months earlier, Alex, along with several other workers, had been pulled into a project she initially thought would lead to promotions. Instead, it intensified workplace stress. Alex says the project’s task timers were set at five minutes, raising concerns among her and her coworkers that they were “sacrificing quality at this point.” “I don’t even keep count of how many I do in a day,” says Alex. “I just focus more on the timer than anything else—it’s gone from mentally stimulating work to mind-numbing.” She adds that she often misses the five-minute target and that the company has been “threatening many of us with losing our job or the project in general if we don’t get these numbers down.”
In January, when a worker quit and left messages on the company’s social channels and via email urging workers to organize, things started to spiral. It “opened the floodgates,” and workers began having conversations about working conditions and wage parity on these channels. “GlobalLogic’s reaction was to suppress the conversation, so they began deleting threads,” claims Levario. “One team lead even told us that we were violating company policies, which wasn’t true—there was no company policy around that.” Later that month, to channel the agitation into action, Levario—one of the more vocal organizers—shared a pay and conditions survey in the social channels. It worked: union membership grew from 18 to 60 by February.
After this incident, however, things unraveled quickly. In the first week of February, many workers received an email saying that the company’s social channels—a way for remote workers to connect and forge friendships—were banned from use during work hours. These were Google chat spaces for all sorts of groups and interests, ranging from queer and gay people to video gamers and writers. “The social spaces helped us to feel less robotic and more human,” says Faith Frontera of the ban. “It’s important especially in a remote environment where you don’t get to see your coworkers face-to-face.” Frontera joined GlobalLogic as a generalist rater to annotate, proofread, and write responses for Gemini and Magi, Google’s new project to integrate AI into search.
Many workers believe the ban on social spaces was a direct result of workers discussing pay parity. “I believe that [because] the union was happening, people were discussing their pay and stuff, painting a bad picture” of GlobalLogic, claims a super rater who joined the company two years ago and requested anonymity to speak freely. “And so they did it as a means to stop us from communicating with one another, and that’s what made the environment hostile.”
Even as the company restricted the use of the social spaces, Levario continued to engage on them, after which he was called into a meeting and warned against using the spaces. Levario then filed a whistleblower complaint with Hitachi. Four days later, he received a response to his complaint and a calendar invite. During the five-minute call that followed, Levario was fired; he was told his contract was being terminated “for violating the social spaces policy.”
Labor researchers say this is how it typically plays out with contracting agencies around the world. “This is the playbook,” says Mila Miceli, a research lead at the DAIR Institute, an organization that works with AI data workers around the globe. “We have seen this in other places, almost every outsourcing company doing data work where workers have tried to collectivize and organize—this has been difficult. They have suffered retaliation.”
Globally, other AI contract workers are fighting back and organizing for better treatment. Earlier this year, a group of Kenyan AI data labelers formed the Data Labelers Association in a bid to fight for better pay, working conditions, and mental health support. And in April, content moderators from around the world, who face similar issues, formed the Global Trade Union Alliance of Content Moderators, which includes workers from Kenya, Turkey, and Colombia.
Those who remain at GlobalLogic say they are afraid to speak up for fear of losing their jobs. “It’s just been kind of [an] oppressive atmosphere,” says Alex. “We can’t really organize—we’re afraid that if we talk we’re going to get fired or laid off.”