None of this should have come as a surprise to Musk, who tweeted that he was suspending the deal “pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users.” (He later said he was still committed to the $44 billion takeover, and some investors said they believed Musk was angling for a lower price, one that wouldn’t weigh as heavily on the Tesla shares he has pledged as loan security.)
Musk was referring to a Twitter regulatory filing this month that said fake or spam accounts made up less than 5% of its 229 million daily active users.
The number isn’t new: Twitter has given the same estimate for years, though critics and experts said they believe the company is downplaying the true number of such accounts.
“This 5% is a very convenient, carefully chosen figure,” said a former employee who spoke on condition of anonymity so as not to alienate a former employer. “They didn’t want it to be too big, but not too small either, because then they could be caught lying.”
Twitter declined to comment for this story. A person familiar with the acquisition negotiations, who spoke on condition of anonymity to describe sensitive topics, said the negotiations were proceeding as usual, despite Musk’s claims. The person said requests to know more about spam and fake accounts were common for a potential acquirer of a social media company.
Twitter’s history with spam dates back to its 2013 public offering, when it disclosed the risk posed by automated accounts, a problem faced by all social media companies. (Facebook has also estimated that fake profiles make up about 5% of its user base.) For years, people out to manipulate public opinion have been able to purchase hundreds of fake accounts to inflate the popularity of a celebrity or a product.
But the problem took a serious turn in 2016, when operatives of Russia’s Internet Research Agency spread election disinformation to millions of people in favor of then-presidential candidate Donald Trump on Twitter, Facebook, YouTube and other platforms.
The Russia controversy, which culminated in congressional hearings in 2017, prompted Twitter to crack down. In 2018, the company launched an initiative called Healthy Conversations and was removing more than a million fake accounts a day from its platform, the Washington Post reported at the time.
To fix the problem internally, Twitter engineers launched an internal initiative called Operation Megaphone, in which they purchased hundreds of fake accounts and studied their behavior.
“You grab a specimen and find others that behave like it,” said a person familiar with the internal effort, speaking on condition of anonymity to describe it freely. The person said they thought the 5% figure was probably an underestimate. “You make predictions based on what you have observed, but you don’t know what you don’t know.”
Critics have argued that Twitter has an incentive to downplay the number of fake accounts on its platform and that the bot problem is far worse than the company admits. The company also allows some account automation, such as news aggregators that post articles on specific topics, scheduled weather reports, or hourly photo posts.
Twitter does not include automated accounts in its calculations of daily active users because these accounts do not see advertising, and it argues that all social media services contain some amount of spam and fake accounts.
But the 5% number has long raised eyebrows among outside researchers who conduct in-depth behavioral studies on the platform around critical issues such as public health and politics.
“Whether it’s covid, or a lot of election studies in the United States and other countries, or around various movies, we’re seeing way more than that number of bots,” said Kathleen Carley, a computer science professor at Carnegie Mellon University who runs the Center for Computational Analysis of Social and Organizational Systems.
“Across all the different studies we’ve done collectively, the number of bots varies: we’ve seen as little as 5% and we’ve seen up to 35%.”
Carley said the proportion of bots tends to be much higher on topics where there’s a clear financial goal, like promoting a product or a stock, or a clear political goal, like electing a candidate or sowing mistrust and division.
There are also different types of bots, including basic promotional spam, nation-state accounts, and boost-for-hire operations.
The rapid development of the technology allows bots run by geopolitical forces to seem more human, peppering their comments with personal asides while trying to manipulate the flow of group conversations and opinions.
As an example, Carley said some pro-Ukrainian bots are engaging with groups normally focused on other issues to try to build coalitions supporting Ukrainian goals. “The number of bot technologies has increased and the cost of creating a bot has decreased,” she said.
Outsiders said it was very difficult for them to produce a good estimate of bot traffic with the limited help Twitter provides for research efforts.
“When we use our Botometer tool to assess a group of accounts, the result is a spectrum ranging from very human to very bot-like,” said Kaicheng Yang, a PhD student at Indiana University.
“In between are the so-called cyborgs, controlled by both humans and software. We will always confuse bots with humans and humans with bots, no matter where we draw the line.”
Twitter allows some researchers access to a gigantic number of tweets, known within the company as the “fire hose” for its immense volume and speed. But even that lacks the clues that would make it easier to identify the bots, such as the email addresses and phone numbers associated with the accounts behind each tweet.
“Almost all efforts outside of Twitter to detect ‘botness’ are fatally flawed,” said Alex Stamos, the former Facebook security chief who heads the Stanford Internet Observatory.
Twitter itself isn’t doing all it can to track down and eliminate bots, two former employees told The Post. But two other former employees said that after 2018 the company acted much more aggressively.
Some people have speculated that financial incentives discourage Twitter from finding them. If the company identified more bots and removed them, its count of “monetizable daily active users” would decrease, the amount it could charge for advertising would fall, and the stock price would follow, as it did after Twitter confirmed a big purge in 2018, The Post reported.
The company uses a number of programs to find and block automated accounts, but they’re most effective at catching obvious spammers, such as those who register hundreds of new accounts on the same day from the same device, the former employees said.
To produce its quarterly bot estimate, the company examines a sample of millions of tweets. But that’s only a tiny percentage of the total, and the sample is drawn from across the service, not from the hottest topics that attract the most spam and the most viewer impressions.
“Honestly, they don’t know,” the former employee said. “There was significant resistance to any meaningful quantification.”
Twitter has protected itself legally with a disclaimer in its quarterly reports indicating that the real number could be much higher.
“We exercised significant judgment, so our estimate of false or spam accounts may not accurately represent the actual number of such accounts, and the actual number of false or spam accounts could be higher than we have estimated,” Twitter said in its latest quarterly report.