In the run-up to Rwanda’s July 15 national elections, something strange was happening on X: Hundreds of accounts were working in unison to post identical or eerily similar messages in support of incumbent President Paul Kagame. A research team from Clemson University began tracking this automated network and discovered more than 460 accounts sharing what appeared to be AI-generated messages.
“The campaign displays several signs of coordinated inauthentic behavior and appears to be an attempt to influence the debate about the performance of Kagame’s government,” the researchers wrote in a paper tracking the network.
It’s the kind of revelation that would normally spur moderators into action, especially weeks before a national election. But when the group reported its findings to X, nothing happened: The flagged accounts stayed up, and the network continued posting.
It was a surprising result, given the sensitivity of the country’s elections and how easy the network would have been to disrupt. “It’s clear to me that if they paid any attention at all, they could take down some of these accounts,” Wach said. “There’s been no effort at all to take these down.”
Wach’s experience was part of a larger shift in X’s moderation, one that has opened the door to influence operations around the world. In the two years since Elon Musk took ownership, the company’s trust and safety team has been gutted, and networks like the one discovered in Rwanda have proliferated. Even finding the networks has become harder than ever: Researchers must work without affordable API access and face increased legal risk for publishing their work.
As X retreats from the fight against influence operations, the risks to democracy only grow. Eighty-two countries are scheduled to hold elections in 2024, including large nations like India, South Africa, and Mexico, but also smaller countries like Rwanda and Sri Lanka that moderators can easily overlook. X is not influential in all of these countries, but in those where it is, the platform’s indifference to moderation leaves democracy alarmingly fragile.
Organizations that monitor global influence operations have already identified several ongoing campaigns, most of them focused on regions where X holds sway. In just the past six months, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) has documented influence campaigns aimed at discrediting protests in Georgia and sowing confusion over the death of an Egyptian economist, both run through fake X accounts. Chinese-language spam bots continue to flood searches for terms sensitive to the Chinese government, effectively drowning out and harassing Chinese dissident groups on the platform.
For Wach, the biggest concern is how many other election-focused operations remain undiscovered: With so many elections happening around the world, campaigns in countries that draw less global attention, or in less widely spoken languages, are likely to go unnoticed.
“If they did this in Tagalog in the Philippines, I would never see it,” he said. “It wouldn’t have enough attention for researchers to even have a chance to encounter it.”
While no social media platform is immune to influence campaigns, researchers say the problem is growing significantly faster on X. Andy Carvin, managing director of the DFRLab, said much of the change can be attributed to specific decisions made under Musk’s leadership, particularly the firing of 80% of the platform’s trust and safety team. “If you compare X circa 2024 to the time of the 2020 US election, the trust and safety team that was there is gone,” Carvin told Rest of World.
Another change is the restructuring of X’s verification system. Originally, the “blue check” badge marked accounts that were both notable and identity-verified. Musk revamped the system to make the badge available to anyone who paid a subscription fee, arguing that the payment requirement was proof that users were human and not bots. Instead, the easily obtained badges made verified status on the platform far more accessible to organized campaigns.
Such campaigns are already springing up around elections. Researchers at the human rights group Global Witness had been monitoring election hashtags since May, ahead of the UK election on July 4, and traced more than 610,000 posts to a network of “bot-like accounts” spreading conspiracy theories, xenophobia, and other divisive content.
Like the Clemson researchers, Global Witness reported its findings to X but received only an automated response. The accounts remain active on the platform and have since shifted to spreading misinformation about the US election and anti-immigrant protests in Ireland.
“It’s shocking how easy it was to find what appeared to be a bot account spreading division over the UK vote, and then how easy it was to see it quickly jump into the US political debate,” Ellen Judson, the group’s senior digital threat researcher, told Rest of World.
When Musk first moved to acquire X, he decried the dire impact of bot accounts and promised that removing them would be among his first acts as owner. He appeared to make good on that promise in April, when he announced a “system purge of bots and trolls.” (Later that day, the platform’s official safety account described the purge as “a significant, aggressive effort to remove accounts that violate our rules against platform manipulation and spam.”) But those efforts have been slow to bear fruit, and both spam and sockpuppet accounts remain common on the platform.
When researchers have been successful in taking down bot networks, it has primarily been through legal enforcement: In July, the US Department of Justice took action against 968 X accounts allegedly linked to Russian disinformation campaigns. The platform voluntarily suspended the accounts, but only after a court-ordered search and more than a month of filings and proceedings.
Renee DiResta, former research manager at the Stanford Internet Observatory, told Rest of World that many of the informal mechanisms for thwarting influence campaigns are now shut down.
“Previously, Twitter’s integrity team would engage with people outside the company to investigate and respond in a more proactive way,” DiResta said. “Right now, I don’t think Twitter is as interested in engaging in that way.”
Researchers also face growing obstacles to spotting platform manipulation in the first place. The easiest way to access X’s posts at scale is through the platform’s API, which has long been essential for researchers and academics. But where API access was once free for academics and research groups, it is now sold in tiers that can cost as much as $42,000 per month.
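For context, bulk collection of the kind these researchers rely on typically runs through endpoints like X’s v2 recent-search route. Below is a minimal sketch of such a request in Python; the hashtag query, the X_BEARER_TOKEN environment variable, and the requested fields are illustrative assumptions, and the bearer token itself now requires a paid tier.

```python
# Minimal sketch of a collection request against X's v2 recent-search
# endpoint. The bearer token (assumed to live in the X_BEARER_TOKEN env
# var) now requires a paid access tier; the query is purely illustrative.
import os

import requests

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {os.environ['X_BEARER_TOKEN']}"},
    params={
        "query": "#election -is:retweet",  # illustrative search query
        "max_results": 100,  # per-request ceiling on this route
        "tweet.fields": "author_id,created_at",
    },
    timeout=30,
)
resp.raise_for_status()

# Each returned post includes id and text by default, plus requested fields.
for tweet in resp.json().get("data", []):
    print(tweet["created_at"], tweet["author_id"], tweet["text"][:80])
```

Because each tier also caps how many posts can be read per month, systematically monitoring hashtags across dozens of elections quickly pushes researchers toward the most expensive plans.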
Under Musk’s leadership, the company has also sued research groups that reported harmful activity on the platform, filing civil suits against the Center for Countering Digital Hate and Media Matters for America. The former suit was dismissed, but the latter is scheduled for trial in April 2025. The lawsuits have had a chilling effect on the research community as a whole, making many groups reluctant to study X at all.
But while interest in tracking the manipulation has waned, there’s no sign that the manipulation itself has slowed, a fact with dire implications for the remaining elections of 2024. Dozens of national contests are still to come, and if platforms fail to crack down, influence operations will take advantage.
“There have always been attempts to interfere in the politics of geopolitical rivals,” DiResta said. “We’ve seen this in every media ecosystem going back centuries. Now it’s happening on social media because people are on social media. So of course they’re going to continue to do this. Why not? There’s no downside.”