TikTok launched community notes. Why are social media sites betting on crowdsourced fact-checking?

TikTok is the latest social media platform to launch a crowdsourced fact-checking feature.

The short-form video app is rolling out the feature, called Footnotes, first in the United States. It lets users write a note adding context to a video and vote on whether notes written by other users should appear under a video.

A footnote could share a researcher’s view on a “complex STEM-related topic” or highlight new statistics to give a fuller picture of an ongoing event, the company said.

The new feature is similar to other community-based fact-checking features on social media platforms such as X and Meta’s Facebook or Instagram. But why are social media giants moving towards this new system to fact-check online claims? 

What is community fact-checking?

Scott Hale, an associate professor at the Oxford Internet Institute, said that Twitter, now X, led the charge towards community notes in 2021 with a feature called Birdwatch. The experiment carried on after Elon Musk took control of the company in 2022.

Otavio Vinhas, a researcher at the National Institute of Science and Technology in Informational Disputes and Sovereignties in Brazil, said that Meta’s introduction of a community notes programme earlier this year is in line with a trend led by US President Donald Trump to move towards a more libertarian view of free speech on social media. 

“The demand is that platforms should commit to this [libertarian view],” Vinhas told Euronews Next.

“For them, fair moderation would be moderation that prioritises free speech without much concern to the potential harm or the potential false claims it can push up”. 

Hale told Euronews Next there is some scientific evidence behind crowdsourcing, with studies showing that crowds often arrive at the right verdict when evaluating whether information is accurate, and that their judgements often agree with those of professional fact-checkers.

But TikTok’s Footnotes is slightly different from other crowdsourcing initiatives on Meta or X, Vinhas said.

That’s because the programme still asks users to add the source of information for their note, which Vinhas says is not mandatory on X. 

Most notes don’t end up on the platforms

The challenge for all social media companies lies in getting the right people to see the notes, Hale said.

All three community programmes use a bridge-based ranking system that estimates how similar users are to one another based on the content they consume, such as the other accounts they follow or the videos they watch, Hale said.

The algorithm then shows a proposed note to users who are considered “dissimilar” to each other to see whether they both find it helpful, Hale said. Notes that pass the test become visible on the platform.
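
Hale's description of bridge-based ranking can be illustrated with a small, hypothetical sketch. The Python snippet below is a toy illustration, not the actual Footnotes or Community Notes algorithm: the "interest vectors", the cosine-similarity measure and the 0.3 threshold are all invented assumptions (X's open-source Community Notes scorer, by comparison, is built on matrix factorisation over rating data).

```python
# Toy sketch of a bridge-based ranking check, assuming users are represented by
# simple "interest vectors" derived from what they follow or watch. A note is
# surfaced only if users with dissimilar interests both rate it helpful.

from dataclasses import dataclass

@dataclass
class Rating:
    user_vector: tuple[float, ...]  # hypothetical embedding of the user's interests
    helpful: bool                   # did this user rate the note as helpful?

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def note_passes(ratings, similarity_threshold=0.3):
    """A note passes only if at least one pair of dissimilar users both found it helpful."""
    helpful_raters = [r.user_vector for r in ratings if r.helpful]
    for i in range(len(helpful_raters)):
        for j in range(i + 1, len(helpful_raters)):
            if cosine_similarity(helpful_raters[i], helpful_raters[j]) < similarity_threshold:
                return True  # agreement "bridges" two unlike audiences
    return False

# Two users with very different interest profiles both found this note helpful,
# so it would be shown; agreement within one like-minded group alone would not count.
ratings = [
    Rating(user_vector=(1.0, 0.0, 0.1), helpful=True),
    Rating(user_vector=(0.0, 1.0, 0.2), helpful=True),
    Rating(user_vector=(0.9, 0.1, 0.0), helpful=False),
]
print(note_passes(ratings))  # True
```

The design point the sketch tries to capture is that a note cannot be pushed live by one like-minded group alone; it has to win approval across audiences that normally disagree.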

What tends to happen, though, is that the vast majority of the notes that are written on the platform are actually never seen, Vinhas said. 

A June study from the Digital Democracy Institute of the Americas (DDIA) of English and Spanish community notes on X found that over 90 per cent of the 1.7 million notes available on a public database never made it online. 

Notes that did make it onto the platform took an average of 14 days to be published, down from 100 days in 2022, though there are still delays in how quickly X surfaces these notes, the DDIA report continued.

“I don’t think these platforms can achieve the promise of bringing consensus and make the internet this marketplace of ideas in which the best information and the best ideas end up winning the argument,” Vinhas said.

Hale said it can be difficult for users to come across notes that might contradict their point of view because of “echo chambers” on social media, where users are shown content that reinforces the beliefs they already hold.

“It’s very easy to get ourselves into parts of networks that are similar to us,” he said. 

One way to improve the efficiency of community notes would be to gamify them, Hale continued. He suggested the platforms could follow Wikipedia’s example, where contributing users have their own page with their edits. 

The platform also offers a host of service awards to editors based on the value of their contributions and the length of their service, and lets them take part in contests and fundraisers. 

What else do social media sites do to moderate content on their platforms?

Community fact-checking is not the only method that social media companies use to limit the spread of mis- or disinformation on their platforms, Hale and Vinhas said. 

Meta, X, and TikTok all use some degree of automated moderation to detect potentially harmful or violent content.

Over at Meta, the company said it relies on artificial intelligence (AI) systems to scan content proactively and immediately remove it if it matches known violations of its community standards or code of conduct. 

When that content is flagged, human moderators review individual posts to see if the content actually breaches the code or if some context is missing. 
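
As a rough illustration of that two-stage flow, the hypothetical Python sketch below auto-removes content that closely matches known violations and queues borderline cases for human review. The thresholds, the phrase list and the classifier stand-in are invented for illustration and do not describe Meta's actual systems.

```python
# Hypothetical two-stage moderation flow: an automated classifier scores content
# against known violation patterns; clear matches are removed automatically and
# borderline cases are queued for a human moderator to check for missing context.

REMOVE_THRESHOLD = 0.95   # near-certain match with a known violation -> auto-remove
REVIEW_THRESHOLD = 0.60   # possible violation -> send to a human moderator

def classify(post_text: str) -> float:
    """Stand-in for an ML model returning a violation probability between 0.0 and 1.0."""
    known_violation_phrases = ["known scam text", "known violent threat"]
    return 0.99 if any(p in post_text.lower() for p in known_violation_phrases) else 0.1

def moderate(post_text: str) -> str:
    score = classify(post_text)
    if score >= REMOVE_THRESHOLD:
        return "removed automatically"
    if score >= REVIEW_THRESHOLD:
        return "queued for human review"
    return "left up"

print(moderate("this contains a known scam text"))  # removed automatically
print(moderate("ordinary holiday photo caption"))   # left up
```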

Hale said that it can be difficult for automated systems to flag new problematic content because they recognise the repeated misinformation claims they have been trained on, meaning new falsehoods can slip through the cracks.

Users themselves can also report content that may violate community standards to the platforms, Hale said.

However, Meta has said that community notes would replace its partnerships with conventional fact-checkers, who flagged and labelled misinformation for almost a decade in the United States.

So far, there are no signs that the platform will end these partnerships in the United Kingdom and the European Union, media reports suggest.

Hale and Vinhas said professional fact-checking and community notes can actually complement one another if done properly. 

In that case, platforms would have an engaged community of people adding context in notes, as well as the rigour of professional fact-checkers, who can take additional steps such as calling experts or going straight to a source to verify whether something is true, Hale added.

Professional fact-checkers also often have context on the political, social, and economic pulse of the countries where disinformation campaigns may be playing out, Vinhas said.

“Fact-checkers will be actively monitoring [a] political crisis on a 24-7 basis almost, while users may not be as much committed to information integrity,” he said. 

For now, Vinhas said TikTok’s model is encouraging because it is being used to contribute to a “global fact-checking programme,” but he said there’s no indication whether this will continue to be the case. 


