Twitter To Ramp Up Censorship Of ‘Misinformation’ About The Ukraine War

Caitlin Johnstone
5 min read · May 20, 2022


Twitter has published what it calls a “crisis misinformation policy” announcing that it will be actively reducing the visibility of content found to be false which pertains to “situations of armed conflict, public health emergencies, and large-scale natural disasters.”

If you’ve been paying attention to the dramatic escalations in online censorship we’ve been seeing in 2022, it will not surprise you to learn that the Ukraine war is the first crisis to which this new censorship policy will be applied.

Twitter says that it “won’t amplify or recommend content” found to violate its new policy, and will also attach warning labels to individual tweets and even hide offending content behind a warning label and disable the retweet function on particularly naughty posts.

The problem, of course, is how to impartially establish that something is objectively false without the process becoming, at best, a flawed system guided by fallible human biases and perceptual filters and, at worst, a powerful institution shutting down unauthorized speech. Twitter says it formed its new policy with input from unnamed “global experts and human rights organizations,” and will be enforcing it with the help of “conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more.” This will come as no comfort to anyone who’s familiar with the history of propaganda peddling that can be found in every single one of those respective categories.

Twitter lists the following examples of the kind of content that will be found in violation of its crisis misinformation policy:

  • False coverage or event reporting, or information that mischaracterizes conditions on the ground as a conflict evolves;
  • False allegations regarding use of force, incursions on territorial sovereignty, or around the use of weapons;
  • Demonstrably false or misleading allegations of war crimes or mass atrocities against specific populations;
  • False information regarding international community response, sanctions, defensive actions, or humanitarian operations.

When Jack Dorsey resigned as Twitter CEO last November, I noted the warning signs we were seeing that his replacement, Parag Agrawal, supported the use of measures which make unauthorized content much less visible than authorized content without eliminating the unauthorized content altogether.

“There’s a lot of content out there,” Agrawal said in a 2020 interview. “A lot of tweets out there, not all of it gets attention, some subset of it gets attention. And so increasingly our role is moving towards how we recommend content and that sort of, is, is, a struggle that we’re working through in terms of how we make sure these recommendation systems that we’re building, how we direct people’s attention is leading to a healthy public conversation that is most participatory.”

This agenda to “direct people’s attention” toward “healthy public conversation” by controlling how content is “recommended” to viewers echoes the censorship-by-algorithm tactics we’ve seen employed by Facebook, Google, and Google-owned YouTube. Google has been hiding dissident media in its search results for years, and in 2020 the CEO of Google’s parent company Alphabet admitted to algorithmically throttling the World Socialist Web Site. Last year the CEO of YouTube acknowledged that the platform uses algorithms to elevate “authoritative sources” while suppressing “borderline content” not considered authoritative. Facebook spokeswoman Lauren Svensson said in 2018 that if the platform’s fact-checkers (including the state-funded establishment narrative management firm Atlantic Council) rule that a Facebook user has been posting false news, moderators will “dramatically reduce the distribution of all of their Page-level or domain-level content on Facebook.”

Twitter has generally been the most reluctant of the major platforms to exercise censorship on behalf of the empire, which is what has made it a better source of ideas and information than any other major platform. But now we’re seeing the most pernicious form of online censorship, censorship by manipulation of content visibility, take hold there as well.

Censorship by visibility manipulation is the most destructive form of online censorship that exists, because its consequences are both so much more far-reaching and so much less attention-grabbing than the controversial act of banning users from platforms or removing their posts. It’s a kind of censorship that people don’t even know is happening, and it’s happening all over the place.

It is deeply disturbing how Silicon Valley megacorporations have simply accepted that it is their job to help the US win a propaganda war against Russia, and how everyone’s just going along with that like it’s fine and normal. Our ability to share ideas and information on the platforms where most people congregate is being increasingly restricted, not on the basis of whether our speech is harmful, or even whether it is true, but on whether it helps or hinders the US propaganda campaign against Russia.

Silicon Valley censorship around the Ukraine war is an unprecedented escalation because the platforms aren’t even pretending to be doing it to protect people from a virus, to safeguard elections, or to defend the public good in any way. It’s literally just “Well, we can’t have people thinking wrong thoughts about a war,” without any coherent explanation of why that’s important.

There’s no longer any pretense that the internet is being censored to protect the public interest. It’s just open censorship of information about a war, solely because they take it as a given that it’s their job to control the things people think and say about that war. They’re coming right out and saying yes, we are the platforms you come to in order to share ideas and information with your fellow humans, and yes, we are agents of the US empire. This is a dramatic escalation.

All this public hand-wringing about misinformation and disinformation is itself disinformation. They’re not worried about the spread of disinformation, they’re worried about the spread of information. Your rulers are not concerned that you’ll start learning false things about Covid or the war in Ukraine, they are worried you’ll start learning true things about your rulers. That’s what all this fuss is really about.

They are locking down our minds and sanitizing our information ecosystem for the protection of the empire. I will keep saying this and saying this for as long as I am able: we’ve got to wake up and stop these bastards before it is too late.


My work is entirely reader-supported, so if you enjoyed this piece please consider sharing it around, following me on Facebook, Twitter, Soundcloud or YouTube, or throwing some money into my tip jar on Ko-fi, Patreon or Paypal. If you want to read more you can buy my books. The best way to make sure you see the stuff I publish is to subscribe to the mailing list at my website or on Substack, which will get you an email notification for everything I publish. Everyone, racist platforms excluded, has my permission to republish, use or translate any part of this work (or anything else I’ve written) in any way they like free of charge. For more info on who I am, where I stand, and what I’m trying to do with this platform, click here. All works co-authored with my American husband Tim Foley.

Bitcoin donations: 1Ac7PCQXoQoLA9Sh8fhAgiU3PHA2EX5Zm2