Meta, the owner of Facebook and other social media platforms, will implement major changes to its content moderation policies, founder Mark Zuckerberg announced this week in a video titled “More speech and fewer mistakes”.
Among the changes, Meta’s use of fact-checking organisations will end and the company will switch to a system of community notes – similar to those used by the X platform – instead.
The move, revealed on Tuesday, comes as tech executives brace for the arrival of incoming US President Donald Trump, whose right-wing supporters have long decried online content moderation as a tool of censorship.
So why is this happening now and will it lead to more misinformation?
In a video posted to his social media platforms, Zuckerberg explained that Meta plans to scrap fact-checking in favour of a new system of community notes, which users can use to flag posts by others that may contain misleading or false information. Meta plans to roll out this community notes system in the coming months.
Zuckerberg said fact-checking organisations had proved to be “biased” when it came to selecting content to moderate and added that he wanted to ensure free speech on all its platforms. “It’s time to get back to our roots around free expression,” he wrote in the post accompanying the five-minute video.
“Our system attached real consequences in the form of intrusive labels and reduced distribution. A programme intended to inform too often became a tool to censor.”
While this policy will extend to all subject matters, Zuckerberg singled out the issues of “gender and immigration” in particular.
Meta’s upcoming changes will take effect across its three main social media platforms – Facebook, Instagram and Threads – which are used by more than 3 billion people worldwide.
Is Meta also moving operations to Texas? Why?
Meta plans to relocate its content moderation teams from California to Texas, hoping the move will “help us build trust” while there is “less concern about the bias of our teams”. Some experts see the move as politically motivated and say it could have negative implications for how political content is handled on Meta’s platforms.
“This decision to move to Texas is born out of both some practicality and also some political motivation,” said Samuel Woolley, the founder and former director of propaganda research at the University of Texas at Austin’s Center for Media Engagement, speaking to the digital news outlet The Texas Tribune.
“The perception of California in the United States and among those in the incoming [presidential] administration is very different than the perception of Texas,” he added.
Zuckerberg appears to be following in the footsteps of Elon Musk, who shifted Tesla’s headquarters to Austin, Texas, in 2021. In an X post in July, Musk also expressed interest in moving his other ventures, X and SpaceX, from California to Texas, citing California Governor Gavin Newsom’s recently enacted SAFETY Act, which prevents schools from requiring teachers to tell parents when a student asks to be recognised by a “gender identity” that differs from their sex.
And 𝕏 HQ will move to Austin https://t.co/LUDfLEsztj
— Elon Musk (@elonmusk) July 16, 2024
How has content moderation on Meta platforms worked until now?
Currently, social media platforms like Facebook and Threads use third-party fact-checking organisations to verify the authenticity and accuracy of content posted to each platform.
These organisations evaluate content and flag misinformation for further scrutiny. When a fact-checker determines a piece of content to be false, Meta takes action to significantly limit its reach, ensuring that it is seen by a much smaller audience. However, third-party fact-checkers do not have the authority to delete content, suspend accounts or remove pages from the platform. Only Meta can remove content from its platforms that violates its Community Standards and Ads policies. This includes, but is not limited to, hate speech, fraudulent accounts and terrorism-related material.
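For readers who want the division of authority spelled out concretely, the sketch below models that flow in Python. It is purely illustrative – the class, the function names and the demotion factor are hypothetical stand-ins, not Meta’s actual systems – but it captures the split described above: fact-checkers flag, Meta demotes, and only Meta removes policy-violating content.

```python
# Illustrative sketch only: a simplified model of the moderation flow described
# in the article, not Meta's actual systems. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    flagged_false_by_fact_checker: bool = False
    violates_community_standards: bool = False  # e.g. hate speech, fraud, terrorism-related material
    distribution_weight: float = 1.0            # 1.0 = normal reach
    removed: bool = False

def apply_fact_check(post: Post) -> None:
    """A third-party fact-checker can only flag content; Meta then reduces its reach."""
    if post.flagged_false_by_fact_checker:
        post.distribution_weight = 0.1  # hypothetical demotion factor: reach shrinks, content stays up

def apply_meta_enforcement(post: Post) -> None:
    """Only Meta itself removes content, and only for policy violations."""
    if post.violates_community_standards:
        post.removed = True

post = Post(id=1, flagged_false_by_fact_checker=True)
apply_fact_check(post)
apply_meta_enforcement(post)
print(post.removed, post.distribution_weight)  # False 0.1 -> demoted but still on the platform
```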
Since 2016, Meta has worked with more than 90 fact-checking organisations in more than 60 languages around the world. Among the leading fact-checking organisations it works with are PolitiFact, Check Your Fact, FactCheck.org and AFP Fact Check. Some partnerships with fact-checking organisations go back nearly 10 years, with PolitiFact being one of the earliest to join forces with Meta in 2016.
How will the new moderation work?
Similar to X, formerly known as Twitter before it was bought for $44bn by billionaire Elon Musk in 2022, Meta will use Community Notes to moderate content instead of fact-checkers.
X’s Community Notes, previously known as Birdwatch, was piloted in 2021 and gained significant traction in 2023 as a feature designed to identify and highlight potentially misleading information on the platform.
Community Notes appear in boxes labelled “Readers added context” below posts on X which have been identified as potentially misleading or inaccurate. A Community Note typically provides a correction or clarification, frequently supported by a hyperlink to a reputable online source which can verify the information provided.
These annotations are crafted by eligible platform users who have opted into the programme. As long as a user has no X violations on their account since January 2023, has a verified phone number provided by a legitimate mobile carrier and their platform account is at least six months old, they are eligible to participate.
Once approved by X as a Notes contributor, participants may rate other Community Notes as “Helpful” or “Not Helpful”. Contributors receive a “Rating Impact” score that reflects how often their ratings help a note reach “Helpful” or “Not Helpful” status. A Rating Impact score of 5 allows a contributor to progress to the next level and write Community Notes on X posts as well as rate them.
Community Notes which receive five or more ratings undergo algorithmic evaluation. The algorithm categorises each note as either “Helpful”, “Not Helpful” or “Needs more ratings”. At this point, the Notes are not yet visible to X users, only contributors.
Only those Notes which receive a final “Helpful” status from the algorithm are shown to all X users beneath the corresponding post.
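To make that workflow more concrete, here is a minimal Python sketch of the pipeline as described above: notes collect ratings, a note with five or more ratings is classified, and only notes classified as “Helpful” become visible to everyone. The 70 percent cut-off and all of the names here are hypothetical simplifications; X’s real ranking algorithm is considerably more sophisticated and also weighs who the raters are.

```python
# Illustrative sketch only: a toy version of the Community Notes workflow the
# article describes. The statuses mirror the article; the scoring rule is invented.
from dataclasses import dataclass, field

MIN_RATINGS_FOR_EVALUATION = 5   # notes need five or more ratings before evaluation

@dataclass
class CommunityNote:
    text: str
    ratings: list = field(default_factory=list)  # each rating is "Helpful" or "Not Helpful"
    status: str = "Needs more ratings"

def evaluate_note(note: CommunityNote) -> str:
    """Toy stand-in for X's ranking step: classify a note once enough ratings exist."""
    if len(note.ratings) < MIN_RATINGS_FOR_EVALUATION:
        note.status = "Needs more ratings"
    else:
        helpful_share = note.ratings.count("Helpful") / len(note.ratings)
        # Hypothetical cut-off; the real algorithm also considers rater diversity.
        note.status = "Helpful" if helpful_share >= 0.7 else "Not Helpful"
    return note.status

def visible_to_all_users(note: CommunityNote) -> bool:
    """Only notes that reach 'Helpful' status appear beneath the post for everyone."""
    return note.status == "Helpful"

note = CommunityNote("This claim is missing context.", ["Helpful"] * 4 + ["Not Helpful"])
evaluate_note(note)
print(note.status, visible_to_all_users(note))  # "Helpful" True under these toy rules
```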
Although Meta has not outlined exactly how its community notes would work, Zuckerberg stated in his video that they would be similar to X’s community note system.
There is debate about how well Community Notes work on X, which has 600 million users.
Yoel Roth, the former head of Twitter’s trust and safety department, said in a Bluesky post: “Genuinely baffled by the unempirical assertion that Community Notes ‘works.’ Does it? How do Meta know? The best available research is pretty mixed on this point. And as they go all-in on an unproven concept, will Meta commit to publicly releasing data so people can actually study this?”
However, some research into the effectiveness of Community Notes has been carried out.
In October 2024, researchers at the University of Illinois published a working paper about X’s Community Notes feature, led by assistant professor of business administration Yang Gao. Overall, the study’s results were positive.
“We find that receiving a displayed community note increases the likelihood of tweet retraction, thus underscoring the promise of crowdchecking. Our mechanism tests reveal that this positive effect mainly stems from consideration of users who had actively interacted with the misinformation (ie, observed influence), rather than of users who might have passively encountered or would encounter the misinformation (ie, presumed influence),” Gao explained in the paper.
Another research paper, led by University of Luxembourg researchers and published in April 2024 on the Open Science Framework (OSF) – a platform that allows researchers to share their academic papers – found that the use of Community Notes reduced the spread of misleading posts by an average of 61.4 percent.
However, the research paper added, “Our findings also suggest that Community Notes might be too slow to intervene in the early (and most viral) stage of the diffusion.”
A recent analysis of Notes mentioning election claims between January 1 and August 25, 2024, by The Center for Countering Digital Hate (CCDH) also revealed limits to the effectiveness of the X Community Notes feature.
The researchers examined 283 posts containing election-related claims that independent fact-checking organisations had determined to be false or misleading. The analysis focused exclusively on posts which had received at least one proposed note from Community Notes contributors. It found that for 209 of the 283 “misleading” posts in its sample – “equivalent to 74 percent,” the report stated – the accompanying notes were not being shown to all X users because, despite being accurate, they had not reached the rating of “helpful”.
It added that delays in moving a note to “helpful” status contributed to this.
According to the Washington Post, which did a separate analysis of the data, only 7.4 percent of Community Notes proposed in 2024 related to the election were ever shown and the number dropped to 5.7 percent in October.
How have fact-checking organisations reacted to Meta’s decision to switch to Community Notes?
Some fact-checking organisations have criticised the move, saying it is unnecessary and politically driven.
“Facts are not censorship. Fact-checkers never censored anything. And Meta always held the cards,” said Neil Brown, president of the Poynter Institute, the journalism nonprofit that owns PolitiFact, in a public statement. “It’s time to quit invoking inflammatory and false language in describing the role of journalists and fact-checking.”
“We’ve learned the news as everyone has today. It’s a hard hit for the fact-checking community and journalism. We’re assessing the situation,” the news agency AFP, which operates AFP Fact Check, said in a statement.
Who else has criticised the move and why?
Some experts in social media have cautioned that the change may open the door to an increase in misinformation appearing on Meta platforms.
“I suspect we will see a rise in false and misleading information around a number of topics, as there will be an incentive for those who want to spread that kind of content,” Claire Wardle, an associate professor in communication at Cornell University told Vox, a digital media company and news website.
Others believe Meta is deliberately aiming to placate the right wing as a second Trump presidency looms and is opening the door for more MAGA-centred content.
Lina Khan, who chairs the Federal Trade Commission, expressed concern during a CNBC interview on Tuesday that Meta executives may be pursuing favourable treatment from the Trump administration. She suggested that the company might be attempting to secure a “sweetheart deal” with the White House.
“I think that Mark Zuckerberg is trying to follow in Elon’s footsteps, which means that actually, they’re going to use this guise of free speech to actually suppress critics of Trump and critics of themselves,” Representative for New York Alexandria Ocasio-Cortez told Business Insider.
“There’s been a shift rightward in terms of attitudes toward free speech in Silicon Valley and perhaps this decision is part of that,” Sol Messing, a research associate professor at New York University’s Center for Social Media and Politics and a former research scientist at Facebook, told ABC News.
Nate Silver, founder of FiveThirtyEight and a political pollster who now runs the Silver Bulletin blog on Substack, gave his take on the change in a recent blog post: “As someone who tries to be non-hypocritically pro-free speech, my inclination is to welcome the changes. But Zuck’s motivations are questionable: there’s no doubt that Meta and other media companies are under explicit and intense political pressure from the incoming Trump administration. So perhaps it’s the right move for the wrong reasons.”
Who has welcomed the move?
Some big names in social media have actively welcomed Meta’s announcement.
Elon Musk, the Tesla CEO who will lead Trump’s Department of Government Efficiency (DOGE) alongside former Republican presidential hopeful Vivek Ramaswamy, wrote “This is cool” in a recent X post.
This is cool pic.twitter.com/kUkrvu6YKY
— Elon Musk (@elonmusk) January 7, 2025
Cenk Uygur, founder of The Young Turks, a major left-leaning digital news channel, welcomed the change in an X post. “He actually mentioned something that is not getting a lot of attention. He said, basically we are done with relying on legacy media,” he wrote. “That’s who the ‘fact checkers’ were in the past. Legacy media has an enormous agenda. They are not at all objective.”
.@cenkuygur points out something many people are missing in Mark Zuckerberg’s statement on Meta’s “fact checking.” https://t.co/LDGdTcQhkD pic.twitter.com/JbsGKtumMd
— The Young Turks (@TheYoungTurks) January 9, 2025
The move is “a great step toward the decentralisation of information and the end to the control legacy media has had on the prevailing narrative”, Christopher Townsend, an Air Force veteran and conservative rapper with more than 300,000 Instagram followers, told Business Insider.
“It seems like Meta is finally taking a page from Elon Musk’s playbook & letting Americans make decisions for themselves. It’s about time Meta owned up to censoring Americans,” Republican Representative Randy Weber of Texas told Business Insider.
President-elect Donald Trump appeared to believe he had played a role in Meta’s revised content moderation policy when he spoke at a news conference at Mar-a-Lago on Tuesday. Asked if his previous criticism of the company had prompted the change, Trump’s response was succinct: “Probably.”
How will Meta’s change impact regions outside the US?
Although the initial rollout of the new feature will take place in the next couple of months in the United States only, Zuckerberg also referred to other regions and countries in his video, including Europe, China and Latin America.
“Europe has an ever-increasing number of laws, institutionalising censorship and making it difficult to build anything innovative there,” he said. “Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years when even the US government has pushed for censorship.”
In a recent statement, the EU rejected Meta’s suggestion that it engages in any form of censorship through its digital regulations.
“We absolutely refute any claims of censorship on our side,” European Commission spokesperson Paula Pinho told reporters in Brussels.