Twitter will hide fake tweets from high profile accounts in times of crisis – TechCrunch

In its ongoing effort to combat misinformation about breaking news, Twitter is rolling out a crisis misinformation policy to ensure that it does not amplify lies in times of widespread conflict.

To determine whether a tweet is misleading, Twitter will require verification from multiple credible public sources, including conflict monitoring groups, humanitarian organizations, open source investigators, and journalists. If the platform finds that a tweet is misleading, it will place a warning over the tweet, disable likes, retweets, and shares, and link to more details about the policy. These tweets will also no longer surface on the home timeline, in search, or in Explore.

Notably, Twitter will “retain this content for accountability purposes,” so it will remain online. Users will simply have to click through the warning to view the tweet. In the past, some warnings about elections or COVID-19 misinformation were simply notices that appeared below the tweet, rather than covering it entirely.

Image Credits: Twitter

Twitter says it will prioritize adding warnings to viral tweets or posts from high profile accounts, which may include verified users, state-affiliated media and government accounts. This strategy makes a lot of sense, as a tweet from a prominent personality is more likely to go viral than a tweet from an ordinary person with 50 followers – but it’s surprising that more platforms haven’t taken this approach already.

Examples of tweets that may be flagged under this policy include false reports of events on the ground; misleading allegations of war crimes, atrocities or the use of weapons; and misinformation about the international community’s response, sanctions, defensive operations and the like. Personal anecdotes are not covered by the policy, nor are strong opinions, commentary or satire. Tweets that draw attention to a false claim in order to refute it are also allowed.

Twitter says it began working on a crisis misinformation framework last year alongside human rights organizations. The policy may eventually apply to situations such as public health emergencies or natural disasters, but to begin with, the platform will use these tactics to mitigate misinformation about international armed conflicts – in particular, Russia’s ongoing invasion of Ukraine.

Most social networks have struggled to moderate content during the war in Ukraine, and Twitter is no exception. In one instance, Twitter removed the Russian Embassy’s false claim that a pregnant victim of a bombing in Ukraine was a crisis actor. Twitter also suspended an account that spread a false conspiracy theory that the United States was developing biological weapons in Ukraine.

There seems to be a fine line between content that merely gets a warning label and posts that warrant outright removal or a ban. This policy could have applied to the Russian Embassy’s misleading tweet, for example – but at what point do an account’s violations become serious enough to warrant a ban?

“Content moderation is more than just leaving or deleting content,” Twitter’s head of safety and integrity, Yoel Roth, wrote in a blog post. “We’ve found that not amplifying or recommending certain content, adding context via labels and, in severe cases, disabling engagement with Tweets, are effective ways to mitigate harm, while preserving speech and records of critical world events.”

Roth added in a thread that Twitter has discovered that not amplifying this content can reduce its spread by 30% to 50%.

But depending on whether Elon Musk’s $44 billion bid to buy Twitter materializes, these policies might not last long. Musk believes content moderation should mirror what the law allows – in other words, Twitter’s community guidelines would essentially become the First Amendment without additional nuance. While that might appeal to people who never receive hateful messages, this approach could undo much of Twitter’s progress, including efforts like this one to stop the spread of harmful misinformation.

Even so, these policies are never 100% effective, and plenty of content that violates the guidelines escapes detection anyway. This week, several prohibited videos of the Buffalo gunman’s terrorist attack circulated on platforms like Twitter and Facebook, remaining online for days without being deleted. One video of the horrific shooting, which we reported directly to Twitter, still remains online.

So while these policies may be well-intentioned, they can only work if they are enforced effectively.