Meta’s ‘Free Expression’ Push Results In Far Fewer Content Takedowns

Meta announced in January it would end some content moderation efforts, loosen its rules, and put more emphasis on supporting “free expression.” The shifts resulted in fewer posts being removed from Facebook and Instagram, the company disclosed Thursday in its quarterly Community Standards Enforcement Report. Meta said that its new policies had helped reduce erroneous content removals in the US by half without broadly exposing users to more offensive content than before the changes.

The new report, which was referenced in an update to a January blog post by Meta global affairs chief Joel Kaplan, shows that Meta removed nearly one-third less content on Facebook and Instagram globally for violating its rules from January to March of this year than it did in the previous quarter: about 1.6 billion items, compared to just under 2.4 billion, according to an analysis by WIRED. In the preceding quarters, the tech giant’s total removals had risen or stayed flat.

Across Instagram and Facebook, Meta reported removing about 50 percent fewer posts for violating its spam rules, nearly 36 percent fewer for child endangerment, and almost 29 percent fewer for hateful conduct. Of the 11 major rules categories Meta lists, removals increased in only one: suicide and self-harm content.

The amount of content Meta removes fluctuates regularly from quarter to quarter, and a number of factors could have contributed to the dip in takedowns. But the company itself acknowledged that “changes made to reduce enforcement mistakes” was one reason for the large drop.

“Across a range of policy areas we saw a decrease in the amount of content actioned and a decrease in the percent of content we took action on before a user reported it,” the company wrote. “This was in part because of the changes we made to ensure we are making fewer mistakes. We also saw a corresponding decrease in the amount of content appealed and eventually restored.”

At the start of the year, Meta relaxed some of its content rules, which CEO Mark Zuckerberg described as “just out of touch with mainstream discourse.” The changes allowed Instagram and Facebook users to employ some language that human rights activists view as hateful toward immigrants or individuals who identify as transgender. For example, Meta now permits “allegations of mental illness or abnormality when based on gender or sexual orientation.”

As part of the sweeping changes, which were announced just as Donald Trump was set to begin his second term as US president, Meta also scaled back its reliance on automated tools to identify and remove posts suspected of less severe violations of its rules, saying the tools had high error rates that prompted frustration from users.

During the first quarter of this year, Meta’s automated systems accounted for 97.4 percent of content removed from Instagram under the company’s hate speech policies, down just one percentage point from the end of last year. (User reports to Meta triggered the remaining percentage.) But automated removals for bullying and harassment on Facebook dropped nearly 12 percentage points. In some categories, such as nudity, Meta’s systems were slightly more proactive than in the previous quarter.

Users can appeal content takedowns, and Meta sometimes restores posts that it determines have been wrongfully removed. In the update to Kaplan’s blog post, Meta highlighted the large decrease in erroneous takedowns. “This improvement follows the commitment we made in January to change our focus to proactively enforcing high-severity violations and enhancing our accuracy through system audits and additional signals,” the company wrote.

Some Meta employees told WIRED in January that they were concerned the policy changes could lead to a dangerous free-for-all on Facebook and Instagram, turning the platforms into increasingly inhospitable places for users to converse and spend time.

But according to its own sampling, Meta estimates that users were exposed to about one to two pieces of hateful content on average for every 10,000 posts viewed in the first quarter, down from about two to three at the end of last year. And Meta’s platforms have continued to grow: in March, about 3.43 billion people used at least one of its apps, which include WhatsApp and Messenger, up from 3.35 billion in December.
