SAN FRANCISCO – Twitter announced a new feature to allow users to flag content that could contain misinformation, a scourge that has only grown during the pandemic.
“We’re testing a feature for you to report Tweets that seem misleading – as you see them,” the social network said from its safety and security account.
The button allows some users in the United States (US), South Korea and Australia to select "it's misleading" after clicking "report tweet".
Users can then be more specific, flagging the tweet as potentially containing misinformation about "health", "politics" or "other" topics.
“We’re assessing if this is an effective approach so we’re starting small,” the San Francisco-based company said according to AFP.
“We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work.”
Twitter, like Facebook and YouTube, regularly comes under fire from critics who say it does not do enough to fight the spread of misinformation.
But the platform does not have the resources of its Silicon Valley neighbours, and so often relies on experimental techniques that are less expensive than recruiting armies of moderators.
Such efforts have ramped up as Twitter toughened its misinformation rules during the Covid-19 pandemic and during the US presidential election between Donald Trump and Joe Biden.
For example, in March Twitter began blocking users who had been warned five times about spreading false information about vaccines.
The network also began flagging Trump's tweets with banners warning of misleading content during his 2020 re-election campaign. The then-president was ultimately banned from the platform for posting incitements to violence and messages discrediting the election results.