Dmitry Medvedev, a leading government official and former president of Russia, took to Twitter earlier this month to denigrate Ukraine in a post using language reminiscent of genocidal regimes.
And Twitter didn’t stop him.
In his 645-word tweet titled, “WHY WILL UKRAINE DISAPPEAR? BECAUSE NOBODY NEEDS IT,” Medvedev called Ukraine a “Nazi regime,” “blood-sucking parasites” and “a threadbare quilt, torn, shaggy, and greasy.”
The post garnered more than 7,000 retweets and 11,000 likes.
One response, though, asked Twitter CEO Elon Musk why he allowed Russian officials to broadcast tweets like this, especially when they used language often associated with genocide.
“All news is to some degree propaganda,” Musk responded. “Let people decide for themselves.”
Musk’s stance of allowing Russian government posts to pop up freely on people’s feeds has now become company procedure. And it’s a radical departure from the so-called “shadow bans” — or in Twitter parlance “visibility filtering rules” — that were previously placed on those accounts.
NPR has confirmed this was a deliberate decision from within the company.
The previous guardrails on government accounts in Russia, China and Iran have now been removed, according to two former Twitter employees who spoke to NPR on the condition of anonymity for fear of retribution.
“What I understand to have happened is, at Elon Musk’s direction, Twitter’s Trust and Safety Team, or what’s left of it, took a chainsaw to the visibility filtering rules,” said one of the former employees, who was an executive at the company.
The former executive said they learned this information by talking to current employees, former employees and people observing the situation.
Twitter applies “visibility filtering rules” to certain accounts so that fewer people see those accounts’ tweets.
In the past, the company’s Trust and Safety Team has used them for Russian government accounts and state-affiliated media accounts from countries “that limit access to free information.” Medvedev’s account was included in that cohort, according to the former employee.
Without visibility filtering, these accounts can now be more easily amplified and reach a much wider global audience. The change could mean more pro-government propaganda across Twitter and real-life consequences for people who disagree with the authorities.
Taking the restraints off state-affiliated media accounts could also lead to more general disinformation on Twitter, said Sarah Cook, a senior advisor at the nonprofit Freedom House who researches China, Hong Kong and Taiwan and authored the report Beijing’s Global Megaphone.
“It’s not just about making the Chinese Communist Party look good. It’s not just about making activists or Hong Kong-ers look bad,” Cook said. “In some cases, it’s also about spreading disinformation about COVID or sowing divisions within Taiwan or the United States.”
Spike in engagement
Since Twitter removed the visibility filters on state-affiliated media, researchers have seen a sharp uptick in followers on many of these accounts.
The Atlantic Council, a U.S. think tank that focuses on international affairs, reviewed several accounts for NPR. In its findings, it recorded more followers and higher engagement on several government media accounts affiliated with Russia’s RT (Russia Today), China’s CGTN (China Global Television Network) and Iran’s PressTV. Those accounts saw a marked surge starting around March 29.
In the months prior, these accounts had been “hemorrhaging followers,” the Atlantic Council said.
While not all accounts experienced a spike, the fact that several major ones from three separate governments simultaneously saw rapid gains “strongly suggests a platform-wide algorithmic change.”
On April 6, several days after the uptick began, the group noticed the company’s “Platform Use Guidelines” were quietly amended to remove a sentence saying “Twitter will not recommend or amplify” state-affiliated media accounts.
“This whole episode shows what happens when we cede public debate to tech companies,” said Alyssa Kann, who works for the Atlantic Council’s Digital Forensic Research Lab and reviewed the government accounts.
“When we have the public square in what is effectively one man’s private billionaire playground, I think things like this can happen,” she said.
When NPR emailed Twitter for comment for this story, the company replied with its usual response to reporters—a poop emoji. Ella Irwin, Twitter’s vice president of product for trust and safety, didn’t respond to a request for comment.
Along with revoking the visibility filtering for government accounts, Twitter has also stopped its previous practice of letting researchers and developers freely access its data through its API, or application programming interface. That means it’s far more difficult for watchdogs to keep tabs on the spread of government propaganda on the platform.
The Atlantic Council said it’s now using third-party programs, but they’re not as comprehensive or reliable.
Because of all the changes at Twitter, it said much of the research into state media and other government actors that had once been commonplace just isn’t possible anymore.
“Until Twitter 2.0, this was kind of settled,” said Graham Brookie, the Atlantic Council’s vice president for tech programs. “And it was settled in a bunch of the places around the world where it really matters.”
How it started… how it’s going
When Twitter first launched its visibility filtering system for state-affiliated media accounts in 2020, the Trust and Safety Team consulted with various researchers and human rights groups. Under the filters, accounts labeled as state media wouldn’t be recommended or amplified.
For example, if someone wasn’t following RT and typed it into the search bar, the account wouldn’t show up.
After running a controlled experiment, Twitter found the reach of Russian state media tweets decreased by 30% with the filtering. When the company began filtering Russian government accounts at the start of the Ukraine war, it saw engagement per tweet decrease by 25%.
Similarly, a 2021 report by the China Media Project found at least a 20% drop in engagement with Chinese state media accounts.
While government officials in Russia, China and Iran have Twitter accounts, access to the site is banned in those three countries. That means ordinary citizens aren’t allowed to voice their opinions and experiences, which can create a lopsided flow of information where governments drown out regular people.
The visibility filtering was meant to fix some of that.
It’s unclear exactly when Twitter stopped the visibility filtering, but, like the Atlantic Council, Voice of America reporter Wenhao Ma first noticed it at the end of March. He did some experiments and found Twitter was automatically recommending to him Chinese state-affiliated media accounts that he wasn’t following.
Just a few days later, Twitter slapped the state-affiliated media label on NPR.
Twitter’s previous policy on state-affiliated media said news organizations that receive state funding but have editorial independence “like the BBC in the UK or NPR in the US” would not be labeled. (NPR gets less than 1% of its funding from the government).
What ensued was a chaotic few days.
In an email exchange, Musk told NPR reporter Bobby Allyn that maybe that label wasn’t accurate. Twitter then changed the label to “government-funded media” and also applied it to PBS and the BBC. As a result, NPR quit Twitter.
Around that time, Twitter adjusted its policy on how it defines “government-funded media” and linked to a Wikipedia page on public broadcasting as its source.
On Thursday night, that policy page disappeared from Twitter’s website. So did the labels on NPR’s, PBS’s and the BBC’s accounts.
Labels on Russian, Chinese and Iranian state-affiliated media also evaporated, along with the “Russia government official” label on Dmitry Medvedev’s account.
“We talked to experts and researchers,” said the former Twitter executive. “And now, these decisions get made because Elon Musk sees a tweet from Catturd and decides that’s what Twitter is going to be like.”
“It’s disheartening to see labels that were built to inform people be used as a tactic to mislead,” the former executive added.
Disclosure: This story was reported and written by NPR Tech Correspondent Dara Kerr and edited by Business Editor Lisa Lambert. NPR’s Shannon Bond contributed to this report. Under NPR’s protocol for reporting on itself, no corporate official or news executive reviewed this story before it was posted publicly.