US Judiciary Committee report reveals:

A preliminary report by the US House Judiciary Committee, backed by hundreds of pieces of evidence, exposes the internet-censorship practices of the EU Commission. For years, Commissioners reportedly forced the platforms to take measures against alleged "disinformation" and even attempted to influence the US presidential elections, along with elections from Ireland to Romania.
On Tuesday, the Judiciary Committee of the US House of Representatives published an "Interim Staff Report" of a good 150 pages, subjecting the censorship infrastructure the EU has built up over the years to a detailed analysis and critique. Over the past year, the representatives repeatedly warned against the EU censorship laws, which also restrict the free speech of US citizens. In February 2025, the committee subpoenaed internal documents from the tech companies. Now the US representatives believe they have hard evidence for these accusations.
From the representatives' point of view, thousands of Big Tech documents confirm one impression: the EU has "run a successful decade-long campaign to gain global control over online narratives." The path to control runs through the platforms' "community guidelines," which define "the bounds of what can be discussed on the global marketplace." These rules consequently had to be influenced, for which purpose the EU Commission issued various codes and founded forums over the years.
Initially, participation was voluntary, but internally the tech companies knew: "we really have no choice." In total, there were likely hundreds of meetings, and just as many censorship requests to each individual tech company.
The forums were dominated above all by the permanent EU task force, and the Big Tech representatives had to function as part of it. They were joined by further actors from industry, "civil society," and so-called "fact-checkers," as one of the internal emails puts it. The direction of the discussion was always set by the EU Commission. Decisions were then taken by supposed "consensus," in reality, however, under pressure from the Commission, which, since the introduction of the Digital Services Act (DSA) from 2022 at the latest, has held a sharp sword over the corporations: since then, fines of up to six percent of global annual revenue can be imposed on Facebook, X, and the rest if the platforms do not play along as the EU wishes. This instrument was deployed for the first time against X, which refused to comply with the requirements.
Over the years, very different topics were at issue, such as mass migration, men in women's sports, or the treatment of moderately severe illnesses. In October 2020, for instance, the Commission's Vice-President and Commissioner for "Values and Transparency," Věra Jourová, approached the platforms quite informally. Jourová wrote an email to Microsoft, Facebook, Twitter, ByteDance (TikTok's parent company), and Google. In it, the Commissioner expressed a "friendly request" for information on the "intensity of the campaign against Covid-19 vaccinations." She also asked about any rule changes by the platforms. All of this Jourová wrote, of course, "with the knowledge of the President" and no doubt on von der Leyen's behalf. The aim, then, was to clear criticism of the "vaccination" out of the way as early as possible and, at that time still on a voluntary basis, to put suppression and censorship measures in place.
The US lever for change
The problem from the US perspective is that the same rules, guidelines, and measures also apply to American citizens and thus effectively override the First Amendment. And here, apparently, lies the lever for US influence on the negotiations with the platforms, perhaps even on the EU regulations themselves. Senator Eric Schmitt wrote on X: "Far-left Eurocrats want social media companies to censor Americans' online speech. We rejected European control over our free speech in 1776. We will not allow it in 2026."
Jay Bhattacharya, emeritus health economist and current director of the National Institutes of Health (NIH), congratulated the committee members on their work: "Foreign governments should not have a veto over free speech rights in the USA."
This lever gains strength as the representatives also point to attempts at election interference, not only in European states but also in the USA. "Political officials at the highest level of the European Commission" are said to have called on TikTok to censor US content "more aggressively" ahead of the 2024 presidential elections. It is also known that, before the conversation between Elon Musk and Donald Trump on X, Thierry Breton sent a shrill warning letter in which he threatened retaliatory measures under the DSA.
In parallel, other Commission members had demanded that the platforms explain how they intended to "moderate" posts about the US elections. Again active here: the then Vice-President for Values and Transparency, Věra Jourová, who discreetly labeled such measures "election preparations," even traveling to California specifically for the purpose. A presumably unprecedented reach by the EU's powerful onto US territory. The interference in the US election was evidently more than just a misstep by the loose cannon Breton.
There are, moreover, indications that major elections in European and EU states also led to purges on online platforms: "policies, practices, and algorithms" were "updated and refined," measures were taken to reduce "disinformation," and generative AI that supposedly illustrated "disinformation" was restricted. Along the way, the left-dominated "fact-checkers" were dispatched to "review" posts in line with governments' wishes. Action was also taken against "gendered disinformation," that is, posts about female politicians or sexual minorities.
The Commission regularly called for election interference
Since the DSA came into force in 2023, the EU Commission has accordingly influenced national elections in eight countries, including Slovakia, the Netherlands, France, Moldova (not an EU member), Romania, and Ireland, along with, of course, the 2024 EU elections. Statements such as "There are only two genders," which had become political under the sign of transgender ideology, were suppressed. The US representatives also found evidence that accusations of election interference, for instance by Russia in Romania, were fabricated: TikTok, at any rate, found no evidence of any such interference in its rich trove of data. The EU's action against the Romanian elections of 2024 was, as is well known, among the most aggressive conduct ever seen from the EU Commission.
In October 2023, the Commission sought to discuss TikTok's "risk assessment and mitigation measures" in The Hague in view of the Dutch elections. A similar meeting took place in 2025. Six weeks before the elections, representatives of Alphabet (Google), Meta (Facebook, Instagram), Microsoft, TikTok, X, and censorship NGOs were invited to a "Roundtable on Elections in the Context of the Digital Services Act." Again the subject was allegedly "harmful posts" whose reach was to be restricted, by no means unlawful ones. In 2024, the same procedure took place in France and Moldova, and in 2024 and 2025 in Ireland as well. Google emphasized how it used AI tools to filter out "false information"; Microsoft (provider of LinkedIn and the Bing search engine) reported that it removed and downranked "misinformation." Facebook/Meta likewise committed itself to new restrictions on free speech, though without specifying them.
There is no shortage of concrete examples of guidelines changed at the EU's request. In 2023, the Chinese platform TikTok explicitly changed its guidelines to "ensure compliance with the Digital Services Act." The reformed guidelines provide, among other things, that certain kinds of content should not reach users' feeds. They are stripped of "For You Feed (FYF) eligibility." They thus remain findable, for instance on the creator's account or via a targeted search, but are no longer recommended by TikTok. The content excluded in this way includes "moderately harmful misinformation" about the "treatment of moderate illnesses." What seems to be meant is Corona, or whichever arbitrary "pandemic" virus comes next.
Posts on these subjects, it is claimed, take things out of context in order to mislead users about "topics of public importance." Scientific data, moreover, is said to be misrepresented. That can of course always happen, but it is explicitly flagged only for this one "topic of public importance." In addition, posts with "marginalizing language" that demeans "protected groups" or could normalize their "unequal treatment" are to be hidden. The TikTok eligibility standards currently state: "In times of crisis or unrest, we may pause repeated recommendations … to keep your experience safe, varied, and entertaining."
First Corona critics were demonetized, now it is conservatives' turn
The US Judiciary Committee's report traces the development of the censorship measures back to 2015. The following year, platforms such as Facebook, Instagram, TikTok, and (what was then) Twitter were first committed to censoring certain content, namely so-called "hateful conduct." In 2018, a "Code of Practice on Disinformation" was adopted, under which the platforms were urged to push back "disinformation": certain pieces of information, some of them true, that were nonetheless regarded as harmful in some sense. In 2017, Germany had enacted the Network Enforcement Act and, as the US report notes, begun the "enforcement of censorship laws at the national level."
In 2022, when the DSA came into force for large online platforms, the Commission updated its "Disinformation Code." The major platforms now again had to take part in the meetings of a task force that convened regularly to discuss the platforms' censorship efforts and to voice new censorship requests. There were six subgroups, which focused on topics such as "fact-checking," elections, and the "demonetization of conservative news media." In plain terms: conservative media were to be deprived of the ability to advertise, and thus to earn money, via YouTube, Facebook, and so on. As early as 2021, the Commission had called for critics of Corona measures and "vaccines" to be demonetized.
In the background stands a finding by the US representatives: the fact-checkers involved (such as NewsGuard or the Global Disinformation Index) regularly classify conservative expressions of opinion as "disinformation," while left-progressive statements count as "trustworthy." Later, viewpoints on the war in Ukraine were likewise subjected to a web of "true" and "false" that the US representatives describe as "pseudo-science." In this context, too, the question to the platforms keeps recurring: "What measures have you taken to reduce disinformation about the crisis?" For a long time, the EU could count on the cooperation of the Biden-Harris administration.
From 2022 to 2024 alone, more than 90 meetings are said to have taken place between the platforms, Commission representatives, and "censorious civil society organizations (CSOs)." It is clear that most of the platforms complied with the Commission's demands to suppress certain information deemed "harmful."
In 2023, Elon Musk withdrew his platform Twitter (later X) from the EU censorship forum that was meant to ensure enforcement of the DSA. EU Commissioner Thierry Breton threatened consequences: "The obligations remain … Our teams will be ready for enforcement."
A handbook on a dubious gray zone
The EU Internet Forum (EUIF) had already been founded in 2015. Its task, according to the Commission, is to bring together "justice and interior ministers, Europol, online platforms, international partners, and researchers" in order to "discuss the shared responsibility for ensuring online safety." The US Judiciary Committee translates this as action against "lawful, non-violative speech," which in 2023 found expression in a "handbook" of the Internet Forum. It names the following elements as targets for the censors:
- "populist rhetoric",
- "anti-government/anti-EU" content,
- "anti-elite" content,
- "political satire",
- "anti-migrant and anti-Islam content",
- "anti-refugee/anti-immigrant content",
- "anti-LGBTIQ…" content, and
- "meme subculture"
What is meant is the EU handbook "Borderline Content. Understanding the Gray Zone," in which the examples from the list above are described as "borderline violative" (pp. 3 f.). The handbook thereby postulates the existence of a "gray zone" between clearly legal, "rule-compliant" statements on the internet and what the Eurocrats, in telltale abbreviation mania, usually call just TVEC: "Terrorist & Violent Extremist Content," which serves as the north star here. But the Eurocrats' "Borderline" handbook is not actually concerned with such content; it is concerned with statements that are entirely legal but are classified as harmful.
With reference to "academics and researchers," it is claimed that content that is "normally, in a democratic environment, protected by free-speech parameters" is inappropriate in public forums and consequently "borderline illegal." Another snappy formulation is "lawful but awful." All this really reveals, however, is that the "academics and researchers" in question lack insight and theoretical grounding. Conduct that is legal and protected offline must be so in the online space as well.
DSA: a necessary law for the EU's powerful
What is really meant, then, is a kind of mandatory online etiquette, enforced against the platforms by coercion, in which applicable law and fundamental rights are suspended because certain forms of conduct are deemed "inappropriate." And what counts as appropriate or not is decided, naturally, by the EU's powerful, who with the DSA finally have an effective means of coercion against the platforms at their disposal. That is the explanation for the inner 'necessity' of this EU law according to EU logic.
The examples selected by the US Judiciary Committee suffice to demonstrate the absurdity of the project, which is nonetheless already being implemented: certain allegedly populist forms of expression, criticism of governments, of the EU, and of "elites," political satire, criticism of migration, criticism of LGBT dogma, and "meme subculture" have long since become targets of thought police in various countries, partly organized by the state and partly relying on "civil society organizations" such as the notorious "HateAid."
The realm of the illegal is being expanded beyond the bounds set by the laws, rights, and freedoms of the EU member states. In itself, that is a scandal of the first order, which the US Judiciary Committee names here but which has so far barely registered with the public in European societies.
Addendum 1 =
Censorship, surveillance, and election manipulation: the EU is intruding ever more deeply into the lives of its citizens. What once began as an economic and peace project has long since mutated into a fascistoid apparatus of repression.
The second part of the investigative report presented by the Judiciary Committee of the US House of Representatives, on the question of "the extent to which foreign laws, regulations, and court orders compel, coerce, or influence companies to censor speech in the United States," has confirmed all the accusations that the Trump administration has been leveling for a year, above all against the overreaching European Union, which acts in an ever more freedom- and rights-hostile manner. The investigation concludes that the EU, "in a comprehensive, decade-long initiative, successfully pressured social media platforms to change their global content moderation rules, thereby directly interfering with Americans' online speech in the United States." Although this was often presented as combating so-called "hate speech" or "disinformation," the European Commission worked to "censor true information and political speech on some of the most important political debates in recent history, including the COVID-19 pandemic, mass migration, and transgender issues."
Within ten years, the report goes on, the EU has acquired a threatening degree of control over global online speech, now sufficient "to comprehensively suppress narratives that threaten the power of the European Commission." The EU's Digital Services Act (DSA) marks "the culmination of Europe's decades-long effort to silence political opposition and suppress online narratives critical of the political establishment," the report states. The internet, a mass phenomenon for some 30 years, and social media, growing for 20, had initially promised to become a force that would democratize free expression and, with it, political power.
Ever more pressure exerted on platforms
This development, however, increasingly threatened the established political order of a left-wing cartel that has since come to power throughout the West and permeated its institutions (this is especially noticeable in Germany); from the mid-2010s onward, the political elites, first in the USA and then in Europe, sought "to counter the emerging populist movements that challenged deeply unpopular policies such as mass migration."
Recognizing that dealing with this "problem" would take several years, the European Commission began in 2015 to set up various forums in which European regulators could meet directly with technology platforms to discuss how, and which, content should be "moderated", in the sense of regulated and censored. Although this was ostensibly intended to combat "misinformation" and "hate speech," non-public documents submitted to the committee showed that over the past decade the European Commission directly pressured platforms to censor lawful political speech in the European Union and abroad. The EU Internet Forum (EUIF), founded in 2015 by the European Commission's Directorate-General for Migration and Home Affairs ("DG HOME"), then published for the first time in 2023 a handbook for technology companies on the "moderation" of lawful, non-violative speech.
Manipulated European elections
The report's revelations, which government-aligned media in this country have tellingly all but ignored in substance, framing them instead once again as absurd US interference and Trumpist defamation of the "highly moral" EU, permit, in the view of some dissidents and jurists, only one conclusion: the EU can by now be classified in part as a criminal conspiracy against freedom and fundamental rights. Here too, one must be grateful to the Trump administration and the USA for holding up a mirror to Europe and opening its eyes regarding facts that on this side of the Atlantic are systematically concealed, denied, and fought as "right-wing conspiracy theories." But the Judiciary Committee of the US House of Representatives has denounced further political presumptions and shameless manipulations by the Brussels Eurocracy and the political elites in its report: it documents that in recent years the EU has manipulated no fewer than eight (!) European elections. The following states are affected:
- Slovakia (2023)
- Netherlands (2023 and 2025)
- France (2024)
- Romania (2024)
- Moldova (2024)
- Ireland (2024 and 2025)
The EU: an unreformable moloch
"These are the people who prattle 24/7 about 'our democracy,' 'freedom,' and 'liberal values,'" comments Tatjana Festerling. She goes on: "Now that the world public can see that, under the knout of a power-obsessed gang in Brussels, Europe has turned into a run-down, impoverishing, Islamized, totalitarian shithole with daily violence in the streets, one can at least hope that they will no longer be able to influence the upcoming elections in Hungary and Bulgaria quite so blatantly. Who would want to be a partner and investor of an EU in which evil has putsched its way to power and will secure that power unpredictably through arbitrary rules and laws?"
The fact is: this EU is no longer reformable. A united Europe of fatherlands bound in partnership, as originally conceived, is the exact opposite of the moloch that has been erected here to push through an agenda-driven, supranational politics of interests. Still: if the US revelations, compiled and solidly substantiated, mind you, by the parliament there, not by the "evil Trump", should give further exit movements ("exits") a boost, that would be welcome. This EU cannot be reformed; it must be broken up, so that the European idea can take shape anew. But this time as a project oriented toward people, toward citizens, not as the plaything of degenerate elites.
Addendum 2 of 05/02/26 =
Wolfgang Wiehle

+++ EU attacks freedom of expression even more deeply: US report reveals shocking truth! +++
An explosive report by the Judiciary Committee of the US House of Representatives is shaking Europe's democratic self-image. Under the title "The EU Censorship Files," the panel accuses the European Commission and individual governments, including the German one, of deliberately manipulating public opinion via digital platforms. Over a period of ten years, corporations such as TikTok, Meta, and X (formerly Twitter) are said to have been pressured to delete or algorithmically suppress content, not because it was unlawful, but because it represented conservative positions. Even true statements such as "There are two genders" were classified as content to be blocked. The madness no longer knows any bounds. Elon Musk called the revelations a "wow" moment, capturing the core of the outrage.
According to the report, this did not happen only in the European context. Rather, the influence of European censorship laws reached deep into the United States. Out of fear of reprisals from the EU, platforms censored information that would actually be protected by the First Amendment. In this way, the US critique runs, freedom of expression is being undermined not only on our continent; the EU is exporting a repressive model to other democracies. The Americans now warn: whoever tolerates such practices places themselves outside free democratic values. Particularly explosive: according to the report, the Commission met with platform operators before eight elections in six European countries specifically in order to suppress the spread of political statements immediately before polling dates, an outrageous direct attack on the democratic principle of free opinion formation.
So now it is also about interference in election campaigns, and with it the (new) pinnacle of censorship, which (let us not fool ourselves!) threatens to hit the German opposition too, whenever it stands on the verge of an election victory. The Judiciary Committee of the US House of Representatives confirms that the EU has interfered in the following European elections: Slovakia (2023), the Netherlands (2023 and 2025), France (2024), Romania (2024), Moldova (2024), Ireland (2024 and 2025). Where will we end up if Brussels' censorship appetites run completely out of control? This development, moreover, falls on fertile ground, for in Germany debate spaces are already constricted, cancel culture reigns, and dissenting opinions are defamed.
For the AfD, one thing is clear: this form of opinion control and interference in election campaigns, up to and including the manipulation of entire elections, is incompatible with a free democratic state under the rule of law. Not only must all communication records between the federal government, the EU, and the tech corporations be disclosed; legal guarantees must also be created to secure genuine freedom of expression. The AfD will defend the right to free speech with all determination: against Brussels, against Berlin, and above all against those who believe they may place their political ideology above the will of the people. Where opinion is no longer free, democracy too is mere facade. That must change, and that is what the AfD stands for!
Addendum 3 of 07/02/26 =
https://reitschuster.de/post/usa-schlagen-alarm-jahrzehnt-der-zensur-in-europa
USA sound the alarm: "A decade of censorship in Europe". A report from Washington with social explosive force

By Kai Rebmann
The accusations are not entirely new, but rarely before have they been made so forcefully, and so well supported by facts, as now. For at least ten years, the EU is said to have been running a campaign for the systematic suppression of freedom of expression and democracy in Europe. The toolbox ranges from blotting out simple biological facts, through censorship of social media, to interference in elections in at least six countries, among them states existentially important for the EU's survival.
First, "pressure was successfully exerted on social media platforms" "to censor true information in the United States." Then there was "targeted censorship of political content in the USA." And finally there was "interference in elections across Europe."
These statements form the foundation of a recent report by the Judiciary Committee of the US House of Representatives, one that has apparently been circulating within the White House for some time. That would also explain the fiery speech by US Vice-President J.D. Vance last year, in which he sharply criticized conditions in Europe, and in Germany in particular, with regard to freedom of expression and democracy.
Elections in Europe under observation
In the report now published, too, the German federal government and its predecessors play a very inglorious role. Berlin is said to have played a leading part in the "decade of European censorship." And this is not limited "merely" to censoring free opinion and exerting political pressure on social media platforms; it expressly extends to the manipulation of eight elections in six European countries, most of which are also EU members.
Specifically named are Ireland (elections in 2024 and 2025), the Netherlands (2023 and 2025), France, Slovakia, Romania, and Moldova. Yes, indeed: these are precisely the countries in which the supposedly "wrong" candidates either won or were at least given good chances before the elections in question.
This became especially clear in the case of Romania. After the first round of the presidential election on November 24, 2024, the independent candidate Călin Georgescu, regarded as pro-Russian, was in the lead. Not least massive pressure from Brussels prompted the Constitutional Court to annul this round on December 6, 2024, after the same body had declared it lawful on November 28, 2024. The election had allegedly been influenced from Moscow, and Georgescu had supposedly been favored on TikTok and other social media: accusations that remain unsubstantiated to this day. On February 26, 2025, Georgescu was first arrested; two weeks later came the final exclusion of the promising candidate from the repeat election.
And how elections are manipulated, falsified, or, as in Romania, if need be "reversed" is something the political elite knows better in hardly any European country than in Germany: whether Thomas Kemmerich, the former Minister-President of Thuringia, elected to office in accordance with all democratic conventions and chased out of it again shortly afterwards, or AfD candidates who were, and still are, barred from standing for election on spurious grounds. Until a few years ago, such proceedings would have been unthinkable in Germany, and in most other countries of Europe, or at least of the EU. By now they are the sad rule rather than the exception.
But the truly dangerous thing is something else: hardly anyone gets worked up about such transparent practices anymore, least of all the media! On the contrary, it is precisely the so-called "fourth estate" that regularly serves the establishment as a stirrup holder. It is not those who cause the dirt who are vilified and pilloried, but those who point to it.
EU threatens social media with million-euro fines and bans
Meanwhile, the long arm of the censorship machine in Brussels reaches far beyond politics and Europe and does not stop even at the USA. The Judiciary Committee's report notes, for example: "Because of European censorship laws, TikTok censors true information in the United States." This sentence refers specifically to the supposedly "right-wing thesis" that there are two sexes. What until not so long ago counted as a sheer matter of course, something proverbially every child knew, has today become a "thesis": something, that is, that could be debated, with an open outcome, of course.
What is clear is that the EU's censorship campaign, launched by the mid-2010s at the latest, gained real momentum with the introduction of the Digital Services Act. A first peak, admittedly, had been reached even before that, namely at the start of the Corona crisis. Any doubt about the official versions of the virus's origin and/or the supposed benefit of the so-called vaccine was branded a conspiracy theory. As a consequence, the operators of social media found themselves confronted with massive censorship pressures, which came not least from Brussels.
But even almost six years later, freedom of expression online is being kept down. The Judiciary Committee of the US House of Representatives reports, very recently, on a "secret decision" by the EU Commission under which "X was fined 140 million euros for defending free speech and threatened with a ban of X in the EU."
It was and is precisely such measures that the EU first employs, only to speak afterwards in official accounts of "voluntariness" and a "consensus" on which the corresponding agreements with the social media companies about the obligation to delete certain posts supposedly rest. The U.S. report calls this an "important pressure point for censoring content at scale."
That leaves the question of why the United States should care about the state of free speech and democracy in Europe. The answers can be found in the Judiciary Committee's report and read, for example, like this: "When governments pressure (social media) platforms to change their community guidelines, they change what Americans may post in the United States and elsewhere." Or: "For more than a year, the Committee has warned that European censorship laws threaten Americans' free speech online. […] Big Tech companies are censoring Americans' speech in the United States, including true information, to comply with the sweeping European Digital Services Act."
Addendum 3, February 10, 2026
Revelations from the USA:
How the European Union steered and manipulated vaccine propaganda
The second part of the report by the Judiciary Committee of the U.S. House of Representatives reveals, to a shocking extent, how the European Commission influenced public debate during the Covid-19 pandemic, exerted pressure on American technology companies, and systematically suppressed inconvenient opinions about vaccination.

The report describes coordinated censorship, pressure behind the scenes, and attempts to silence critics, long before vaccines were even available.
An investigation that revealed the true extent of the problem
The second part of the report by the Judiciary Committee of the U.S. House of Representatives focused on "how and to what extent foreign laws, regulations, and judicial orders compel, coerce, or influence companies to censor speech in the United States." According to its authors, the findings of this investigation show clearly that the European Commission operated an extensive censorship system that was not limited to European citizens but also interfered with the operations of American companies.
The report describes a situation in which technology companies were pressured to submit to strict and repressive EU rules or else face "draconian fines." In this way, the European Commission managed to enforce its demands beyond the Union's territory as well.
Targeted censorship of vaccine critics
The documents reviewed by the Committee show that during the Covid-19 pandemic the European Commission acted as the central organ of censorship and control. Its goal was to impose a single "correct" reading of vaccination and other measures and to ensure that only approved narratives reached the public sphere.
Everything that deviated from this official line, that is, dissenting opinions, warnings from experts, or critical questions, was, according to the findings, to be systematically suppressed. It was thus not merely about fighting falsehoods, but about the attempt to eliminate any dissenting debate.
Secret negotiations with American technology companies
The report also describes behind-the-scenes talks and secret correspondence between representatives of the European Commission and American tech giants. A typical example is the communication of October 30, 2020, when Brussels demanded information on how the companies intended to act against "disinformation" about Covid vaccines, at a time when those vaccines were not even on the market yet.
The aim was to determine "where we currently stand with regard to the intensity of the campaign against Covid vaccination" and how the situation might develop. On the basis of this information, a "proactive" communication strategy was to be prepared and, at the same time, support provided to the EU Member States.
Coordination from the top and the role of the European Commission's leadership
The demands also included disclosing the current state of the platforms' rules for users and the way discussions were moderated. Brussels gave assurances that this information would be used solely to prepare "appropriate measures" and would not be shared with the "general public."
Because of the alleged "urgency" of the whole matter, the communication was conducted via ordinary e-mail. The European Commission emphasized at the same time that it had the full backing of the Commission's then Vice President, Věra Jourová, who acted with the knowledge of Commission President Ursula von der Leyen.
In the view of the report's authors, this document in particular shows clearly how the European Commission steered the entire Covid discourse and systematically directed public debate in the desired direction.
Silenced critical voices and pressure on platforms
Anyone who did not adhere to the officially approved narrative was to be silenced. Online platforms played the role of willing helpers in this process, adapting their terms of service and actively removing critical posts and accounts.
The report also recalls that the attempts to force technology companies into cooperation did not begin with the pandemic. Long before Covid, the European Commission sought, through informal arrangements and pressure, to get platforms to meet its demands.
"Voluntary" commitments as hard soft law
This approach spared the European Commission the need to pass formal legislation. Instead, it pushed through supposedly "voluntary" commitments by the technology companies. In practice, however, these enabled far harsher intrusions into freedom of expression than would have been possible through the normal legislative route.
According to the report, this phase amounted to a kind of "hard soft law." In recent years, however, the EU has abandoned even that restraint and, with instruments such as the Digital Services Act, created an unprecedented system of control that critics regard as a serious threat to freedom of expression.
A warning about the power of an unelected institution
The American report describes the whole process as an "incredible coup d'état from above." An unelected institution which, in the authors' view, nobody expressly wanted has begun to act like a monstrous authority with near-dictatorial powers, operating arbitrarily and without real oversight.
Since the members of this structure come from the same political stratum that, in the critics' view, has led the EU Member States into deep trouble, no resistance is to be expected from those countries. On the contrary: the report speaks of "eager support" for further restrictions on the rights and self-determination of their own citizens.

THE FOREIGN CENSORSHIP THREAT, PART II:
EUROPE’S DECADE-LONG CAMPAIGN TO CENSOR THE GLOBAL INTERNET
AND HOW IT HARMS AMERICAN SPEECH IN THE UNITED STATES
Interim Staff Report of the
Committee on the Judiciary
of the
U.S. House of Representatives
February 3, 2026
EXECUTIVE SUMMARY
The Committee on the Judiciary of the U.S. House of Representatives is investigating
how and to what extent foreign laws, regulations, and judicial orders compel, coerce, or
influence companies to censor speech in the United States.1 As part of this oversight, the
Committee has issued document subpoenas to ten technology companies, requiring them to
produce communications with foreign governments, including the European Commission and
European Union (EU) Member States, regarding content moderation.2 In July 2025, the
Committee published a report detailing how the European Commission—the executive arm of
the EU—weaponizes the Digital Services Act (DSA), a law regulating online speech, to impose
global online censorship requirements on political speech, humor, and satire.3 Since then,
pursuant to subpoena, technology companies have produced to the Committee thousands of
internal documents and communications with the European Commission. These documents show
the extent—and success—of the European Commission’s global censorship campaign.
The European Commission, in a comprehensive decade-long effort, has successfully
pressured social media platforms to change their global content moderation rules, thereby
directly infringing on Americans’ online speech in the United States. Though often framed as
combating so-called “hate speech” or “disinformation,” the European Commission worked to
censor true information and political speech about some of the most important policy debates in
recent history—including the COVID-19 pandemic, mass migration, and transgender issues.
After ten years, the European Commission has established sufficient control of global online
speech to comprehensively suppress narratives that threaten the European Commission’s power.
Prior to the Committee’s subpoenas, these efforts largely occurred in secret. Now, the
European Commission’s efforts have come to light for the first time, informing the Committee
on legislative steps it can take to protect American free speech online.
1 See, e.g., Press Release, H. Comm. on the Judiciary, Chairman Jordan Subpoenas Big Tech for Information on
Foreign Censorship of American Speech (Feb. 26, 2025), https://judiciary.house.gov/media/press-releases/chairman-
jordan-subpoenas-big-tech-information-foreign-censorship-american.
2 Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Timothy Cook, CEO, Apple (Feb. 26,
2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Andy Jassy,
President and CEO, Amazon (Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H.
Comm. on the Judiciary, to Mr. Satya Nadella, CEO, Microsoft (Feb. 26, 2025) (attaching subpoena); Letter from
Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Christopher Pavlovski, Chairman and CEO, Rumble
(Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr.
Sundar Pichai, CEO, Alphabet (Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H.
Comm. on the Judiciary, to Custodian of Records, TikTok (Feb. 26, 2025) (attaching subpoena); Letter from Rep.
Jim Jordan, Chairman, H. Comm. on the Judiciary, to Ms. Linda Yaccarino, CEO, X (Feb. 26, 2025) (attaching
subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Mark Zuckerberg, CEO,
Meta (Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to
Mr. Steve Huffman, CEO & President, Reddit (Apr. 17, 2025) (attaching subpoena). Letter from Rep. Jim Jordan,
Chairman, H. Comm. on the Judiciary, to Mr. Sam Altman, CEO, OpenAI (Nov. 5, 2025) (attaching subpoena).
3 STAFF OF THE H. COMM. ON THE JUDICIARY, 119TH CONG., THE FOREIGN CENSORSHIP THREAT: HOW THE
EUROPEAN UNION’S DIGITAL SERVICES ACT COMPELS GLOBAL CENSORSHIP AND INFRINGES ON AMERICAN FREE
SPEECH (Comm. Print July 25, 2025) (hereinafter “DSA Censorship Report I”).
The DSA is the culmination of a decade-long European effort to silence political opposition
and suppress online narratives that criticize the political establishment.
The DSA took effect in 2023, and the European Commission issued the first-ever fine
under the DSA in December 2025 against X. Although the DSA has been in effect for less than
three years, the fine against X represents the culmination of a decade-long effort by the European
Commission to control the global internet in order to suppress disfavored narratives online.
The internet and social media initially promised to be a force that would democratize
speech, and with it, political power. This development threatened the established political order,
and by the mid-2010s, the political establishments in the United States and Europe sought to
counter rising populist movements that questioned deeply unpopular policies such as mass
migration. Recognizing that tackling this problem would take several years, starting in 2015 and
2016, the European Commission began creating various forums in which European regulators
could meet directly with technology platforms to discuss how and what content should be
moderated. Though ostensibly meant to combat “misinformation” and “hate speech,” nonpublic
documents produced to the Committee show that for the last ten years, the European
Commission has directly pressured platforms to censor lawful, political speech in the European
Union and abroad.
The EU Internet Forum (EUIF), founded in 2015 by the European Commission’s
Directorate-General for Migration and Home Affairs (DG-Home), was among the first of these
initiatives. By 2023, EUIF published a “handbook . . . for use by tech companies when
moderating” lawful, non-violative speech such as:
- “Populist rhetoric”;
- “Anti-government/anti-EU” content;
- “Anti-elite” content;
- “Political satire”;
- “Anti-migrants and Islamophobic content”;
- “Anti-refugee/immigrant sentiment”;
- “Anti-LGBTIQ . . . content”; and
- “Meme subculture.”4
4 EU Internet Forum: The Handbook of Borderline Content in Relation to Violent Extremism, see Ex. 38.
The European Commission also enforced its censorship goals through allegedly voluntary “codes of conduct” on hate speech and disinformation. In 2016, the European Commission established a “Code of Conduct on Countering Illegal Hate Speech Online,” under which platforms including Facebook, Instagram, TikTok, and Twitter (now X) promised to censor vaguely defined “hateful conduct.”5

A “Code of Practice on Disinformation,” in which the same major platforms promised to “dilute the visibility” of alleged “disinformation,” followed in 2018.6 In high-level meetings with platforms, senior European Commission officials explicitly told the platforms that the Hate Speech and Disinformation Codes were intended to “fill [the] regulatory gap” until the EU could enact binding legislation governing platform “content moderation.”7

At around the same time, the most powerful EU Member States, such as Germany, began enacting censorship legislation at the national level.8
5 The EU Code of Conduct on Countering Illegal Hate Speech Online, EUROPEAN COMM’N (June 30, 2016),
https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/combatting-
discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en.
6 2018 Code of Practice on Disinformation, EUROPEAN COMM’N (June 16, 2022), https://digital-
strategy.ec.europa.eu/en/library/2018-code-practice-disinformation.
7 Readout of meeting between TikTok and European Commission Vice President Vera Jourova (Apr. 20, 2021), see
Ex. 55.
8 See Imara McMillan, Enforcement Through the Network: The Network Enforcement Act and Article 10 of the
European Convention on Human Rights, 20 CHI. J. INT’L L. 252 (2019).
Later, in 2022 and right as the DSA was about to take effect, the European Commission
updated the 2018 Disinformation Code. Under the new guidelines, platforms had to participate in
a Disinformation Code “Task Force,” which would meet regularly to discuss platforms’
approach to censoring so-called disinformation.9 The Task Force broke into six “subgroups”
focusing on specific disinformation topics, including fact-checking, elections, and
demonetization of conservative news outlets.10 Across all of these subgroups, there were more
than 90 meetings between platforms, censorious civil society organizations (CSOs), and
European Commission regulators between late 2022 and 2024.11
These meetings were a key forum for European Commission regulators to pressure
platforms to change their content moderation rules and take additional censorship steps. For
example, in over a dozen meetings of the Crisis Response Subgroup, the European Commission
inquired about platforms’ “policy changes” “related to fighting disinformation.”12
9 The 2022 Code of Practice on Disinformation, EUROPEAN COMM’N (June 16, 2022).
10 See infra Sec. III.F.ii.
11 Id.
12 See, e.g., Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Dec. 14, 2023), see Ex. 196.
The “voluntary” and “consensus-driven” European censorship regulatory regime is neither
voluntary nor consensus-driven.
Both before and after the DSA’s enactment, the European Commission established
several forums to engage regularly with platforms about content moderation, including the Hate
Speech Code and the Disinformation Code. These forums, which collectively held more than 100
meetings where regulators had the opportunity to pressure platforms to censor content more
aggressively, were purportedly voluntary and intended to achieve “consensus” through a so-
called regulatory dialogue.13 None of that was true. As internal company emails bluntly reveal,
the companies knew that they “[didn’t] really have a choice” whether to join these voluntary
initiatives.14 And the European regulators were running the show: agendas were set “under
(strong) impetus from the EU Commission” and so-called “consensus” was achieved under
heavy pressure from the European Commission.15
Google staff noted that participation in Disinformation Code subgroup meetings was effectively
mandatory, and that the European Commission retained significant control over the agenda and
group decisions.
13 Internal emails among Google staff (June 22, 2023), see Ex. 2; see infra Sec. III.F.
14 Id.
15 Id.
The European Commission successfully pressured major social media platforms to change
their global content moderation rules, directly infringing on American online speech in the
United States.
Most major social media or video sharing platforms are based in the United States16 and
have a single, global set of rules governing what content can or cannot be posted on the site.17
These rules set the boundary for what discourse is allowed in the modern town square, making
them a key pressure point for regulators seeking narrative control to tighten their grip on political
power. Critically, platform content moderation rules are—and effectively must be—global in
scope.18 Country-by-country content moderation is a significant privacy threat, requiring
platforms to know and store each user’s specific location every time he or she logs on.19 In an
age where users can freely use virtual private networks (VPNs) to simulate their location and
protect their personal information, country-by-country content moderation is also ineffective20—
in addition to creating immense costs for platforms of all sizes.21 The internet is global, and
platforms govern themselves accordingly. That means that when European regulators pressure
social media companies to change their content moderation rules, it affects what Americans can
say and see online in the United States. European censorship laws affecting content moderation
rules are therefore a direct threat to U.S. free speech.
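The trade-off described above, between a single global ruleset and country-by-country gating, can be sketched in a few lines. This is a minimal, hypothetical illustration only, not any platform's actual moderation system; the rule names, country codes, and the `uses_vpn` flag are all invented for the example.

```python
# Hypothetical sketch: one global ruleset vs. per-country geo-gating.
# All rule labels and data here are invented for illustration.

GLOBAL_RULES = {"incitement_to_violence"}  # one set of rules, applied everywhere

# Per-country moderation needs a rule table per jurisdiction...
COUNTRY_RULES = {"DE": {"incitement_to_violence", "political_satire"}}

def allowed_globally(labels: set) -> bool:
    # One decision per post; no need to know or store the viewer's location.
    return not (labels & GLOBAL_RULES)

def allowed_per_country(labels: set, reported_country: str, uses_vpn: bool = False) -> bool:
    # ...plus a location lookup on every request. A VPN lets the viewer choose
    # which country the platform "sees", defeating the geo-gate entirely.
    country = "US" if uses_vpn else reported_country
    rules = COUNTRY_RULES.get(country, GLOBAL_RULES)
    return not (labels & rules)

post_labels = {"political_satire"}
print(allowed_globally(post_labels))                          # True
print(allowed_per_country(post_labels, "DE"))                 # False: geo-blocked
print(allowed_per_country(post_labels, "DE", uses_vpn=True))  # True: VPN bypass
```

The sketch shows both problems the report raises: the per-country path needs the viewer's location on every request, and a simulated location restores the globally permitted behavior.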
Years before the DSA’s enactment, the European Commission made these platform
content moderation rules its primary target. During the COVID-19 pandemic, senior European
Commission officials pressed platforms to change their content moderation rules to globally
censor content questioning established narratives about the virus and the vaccine.22 With the
approval of EU President Ursula von der Leyen and Vice President Vera Jourova, the European
Commission asked platforms how they planned to “update[] . . . [their] terms of service or
content moderation practices (promotion / demotion)” ahead of the rollout of COVID-19
vaccines.23
16 Examples include Facebook, Instagram, YouTube, and X. The notable exception, TikTok, is Chinese-owned but
is transitioning its U.S. operations to majority-American ownership under a deal negotiated by President Trump. See
Clare Duffy, The deal to secure TikTok’s future in the US has finally closed, CNN (Jan. 23, 2026).
17 See, e.g., Community Standards, META, https://transparency.meta.com/policies/community-standards/ (last visited
Jan. 29, 2026); YouTube’s Community Guidelines, YOUTUBE HELP,
https://support.google.com/youtube/answer/9288567?hl=en (last visited Jan. 29, 2026); The X Rules, X,
https://help.x.com/en/rules-and-policies/x-rules (last visited Jan. 29, 2026); Community Guidelines, TIKTOK,
https://www.tiktok.com/community-guidelines/en (last visited Jan. 29, 2026).
18 Id.; see, e.g., YouTube Community Guidelines enforcement, GOOGLE TRANSPARENCY REPORT (last visited Jan. 29,
2026), https://transparencyreport.google.com/youtube-policy/removals (“YouTube’s Community Guidelines are
enforced consistently across the globe, regardless of where the content is uploaded. When content is removed for
violating our guidelines, it is removed globally.”); Community Guidelines, TIKTOK,
https://www.tiktok.com/support/faq_detail?id=7543604781873371654 (last accessed Jan. 29, 2026) (“Our
Community Guidelines apply to our global community and everything shared on TikTok.”).
19 See Rumble Inc.’s Response to an Order to Produce Records from British Columbia’s Office of Human Rights
(Aug. 31, 2022); Ex. 288 (confirming that some platforms do not currently collect detailed location information of
users).
20 See DSA Censorship Report I, supra note 3, at 31.
21 See, e.g., Trevor Wagener, The High Cost of State-by-State Regulation of Internet Content Moderation,
DISRUPTIVE COMPETITION PROJECT (Mar. 17, 2021).
22 See, e.g., Emails between TikTok staff and European Commission staff (Oct. 30, 2020), see Ex. 48.
23 Id.
Pressure to change content moderation rules related to COVID-19 vaccines came from the
highest levels of the European Commission.
Throughout the European Commission’s censorship campaign, the countless
Disinformation Code, Hate Speech Code, and EU Internet Forum meetings provided more than
100 opportunities for the European Commission to pressure platforms to modify their content
moderation policies and identify which online narratives on vaccines and other important
political topics should be censored.24 For example, on over a dozen occasions over the course of
just three years, the European Commission used the Disinformation Code Crisis Response
Subgroup meetings to press platforms, such as YouTube and TikTok, on their “new
developments and actions related to fighting disinformation,” specifically referencing “policy
changes.”25
A characteristic agenda for meetings between the European Commission, platforms, and NGOs
where the Commission applied pressure to change content moderation policies.
24 See infra Sec. III.
25 See, e.g., Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Dec. 14, 2023), see Ex. 196
(emphasis added).
The pressure on platforms to comply with Europe’s censorship demands only intensified
once the DSA was signed into law in October 2022. The European Commission warned
platforms that they needed to change their global content moderation rules to comply with the
DSA, or else risk fines up to six percent of global revenue and a possible ban from the European
market.26
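The fine exposure described above is straightforward arithmetic: up to six percent of a platform's worldwide annual turnover. The sketch below illustrates only that ceiling calculation; the turnover figure is invented for the example and is not any company's actual revenue.

```python
# Illustration of the DSA fine ceiling the report describes: up to 6% of
# worldwide annual turnover. The input figure below is hypothetical.
DSA_FINE_CAP = 0.06

def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a DSA fine for a given worldwide annual turnover."""
    return DSA_FINE_CAP * global_annual_turnover_eur

# A platform with EUR 100 billion in worldwide turnover faces up to EUR 6 billion.
print(f"EUR {max_dsa_fine(100e9):,.0f}")  # EUR 6,000,000,000
```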
This decade-long pressure campaign was successful: platforms changed their content
moderation rules and censored speech worldwide in direct response to the DSA and European
Commission pressure. For example, in 2023, TikTok began editing its global Community
Guidelines for the express purpose of “achiev[ing] compliance with the Digital Services Act.”27
TikTok made changes to its global Community Guidelines in order to comply with the DSA.
These new censorship rules went into effect in 2024. In response to the European
Commission’s decade-long censorship campaign, TikTok instituted new rules censoring
“marginalizing speech,” including “coded statements” that “normalize inequitable treatment,”
“misinformation that undermines public trust,” “media presented out of context” and
“misrepresent[ed] authoritative information.”28 These standards are inherently subjective and
easily weaponized against the European Commission’s political opposition. In fact, these internal
documents show that TikTok systematically censored true information around the world to
comply with the European Commission’s censorship demands under the DSA. The document
outlining these changes confirmed that, as “advised by the legal team,” the updates were “mainly
related to compliance with the Digital Services Act (DSA).”29
26 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single
Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act), 2022 O.J. (L 277) Art. 36,
52 (hereinafter “Digital Services Act”).
27 TikTok Community Guidelines Survey, see Ex. 15.
28 TikTok Community Guidelines Update Executive Summary (Mar. 20, 2024), see Ex. 8.
29 Id. (emphasis in original).
In response to European Commission pressure, TikTok modified its global Community Guidelines
to censor true information, directly affecting American speech in the United States.
Documents indicate that these may not have been the only content moderation changes
instituted in response to the DSA, either. During a presentation to the European Commission in
July 2023, TikTok noted that “units with day-to-day activities overlapping the DSA, like Trust &
Safety . . . [were] given new policies, rules, & [standard operating procedures]” to comply with
the DSA.30 These internal documents suggest that TikTok changed significant portions of its
extensive content moderation systems to comply with the European Commission’s demands.
The European Commission’s focus on global content moderation rules remains: in May
2025, the European Commission explicitly told platforms at a closed-door “DSA Workshop” that
“continuous review of [global] community guidelines” was a best practice for compliance with
the DSA.31
The European Commission’s “best practices” for DSA compliance include “continuous”
changes to global content moderation rules.
The European Commission is specifically focused on censorship of U.S. content.
Not only did the European Commission harm American speech in the United States by
pressuring platforms to change their global content moderation policies, but it also specifically
sought to censor American content.
This, too, began during the COVID-19 pandemic. In November 2021, the European
Commission requested information about how TikTok planned to “fight disinformation about the
covid 19 vaccination campaign for children starting in the US,” inquiring specifically about
TikTok’s plans to “remove” certain “claims” about the efficacy of the COVID-19 vaccine in
children.32
30 TikTok Slide Deck: Digital Services Act, Readiness overview for the European Commission (July 17, 2023), see
Ex. 3.
31 European Commission – DSA Systemic Risk Assessment Workshop Readout (May 7, 2025), see Ex. 206.
32 Emails between TikTok staff and European Commission staff (Nov. 5, 2021), see Ex. 58.
A year later, European Commission regulators pressured platforms to remove an
American documentary film about vaccines, demanding that YouTube, Twitter, and TikTok
“check . . . internally” and respond “in writing” why the film had not been censored.33 YouTube
responded to the European Commission promptly, stating that it “removed” the film in question
after the European Commission raised the issue.34 Put plainly, the European Commission treated
American debates around vaccination as within scope of the European Commission’s regulatory
authority.
European Commission regulators urged TikTok to censor U.S. claims about COVID-19 vaccines
for children.
The European Commission’s focus on American speech was not limited to only COVID-
19-related content, either. Political appointees at the highest levels of the European Commission
pressured TikTok to more aggressively censor U.S. content ahead of the 2024 U.S. presidential
election.
33 Emails between European Commission staff and Code of Practice on Disinformation Signatories (Dec. 8, 2022),
see Ex. 96.
34 Id.
Most infamously, then-EU Commissioner
for Internal Market Thierry Breton sent a letter to
X owner Elon Musk in August 2024 ahead of
Musk’s interview with President Donald Trump.35
Breton threatened X with regulatory retaliation
under the DSA for hosting a live interview with
President Trump in the United States, warning that
“spillovers” of U.S. speech into the EU could spur
the Commission to adopt retaliatory “measures”
against X under the DSA.36 Breton threatened that
the European Commission “[would] not hesitate to make full use of [its] toolbox” to silence this
core American political speech.37 In response to Breton’s threats, the Committee sent two letters
outlining how his threats undermined free speech in the United States and constituted election
meddling in the American presidential election.38 Shortly thereafter, Breton resigned.39
Commissioner Breton’s August 2024 letter to Elon Musk warned that the European Commission
would “make full use of [its] toolbox” if X failed to adequately censor Musk’s interview with
President Trump.
The European Commission sought to minimize Breton’s letter to Musk as the unapproved, freelance
act of a rogue Commissioner acting alone.40 But months before Commissioner Breton’s
35 Letter from Mr. Thierry Breton, Comm’r for Internal Market, European Comm’n, to Mr. Elon Musk, Owner, X
Corp. (Aug. 12, 2024).
36 Id.
37 Id.
38 Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Thierry Breton, Comm’r for Internal
Market, European Comm’n (Aug. 15, 2024); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to
Mr. Thierry Breton, Comm’r for Internal Market, European Comm’n (Sept. 10, 2024).
39 See Lorne Cook, A French Member of the European Commission Resigns and Criticizes President von der Leyen,
AP (Sep. 16, 2024).
40 See Bradford Betz, EU regulator wasn’t cleared to warn Musk against amplifying ‘harmful content’ with Trump X
interview: report, FOX BUSINESS (Aug. 13, 2024).
Former EU Commissioner Thierry Breton
letter, other senior European Commission officials were similarly pressing Big Tech executives
for more information on how they planned to moderate election-related speech ahead of the 2024
U.S. presidential election.
In May 2024, European Commission Vice President
Jourova traveled to California to meet with major tech
platforms. During this trip, Jourova met with TikTok CEO
Shou Chew and TikTok’s Head of Trust and Safety to discuss
topics including “election preparations.”41 TikTok sought
confirmation on whether the European Commission Vice
President was traveling all the way to California to have a
meeting that “stay[ed] mostly EU focused,” or whether she
wanted to discuss “both EU and US election preparations.”42 The
European Commission confirmed that Vice President Jourova
wanted to discuss “both.”43
European Commission Vice President Vera Jourova asked to discuss “US election
preparations” with TikTok ahead of the 2024 U.S. presidential election.
41 Emails between TikTok staff and European Commission staff (May 28, 2024), see Ex. 27.
42 Id.
43 Id.
Former EU Vice President Vera Jourova
The European Commission regularly interferes in EU Member State national elections.
The European Commission works to influence EU Member States by controlling political
speech during election periods. Most strikingly, the European Commission issued DSA Election
Guidelines in 2024 requiring platforms to take additional censorship steps ahead of major
European elections, such as:
- “Updating and refining policies, practices, and algorithms” to comply with EU censorship demands;
- Complying with “best practices” outlined in the Disinformation Code, the Hate Speech Code, and EUIF documents;
- “Establishing measures to reduce the prominence of disinformation”;
- “Adapt[ing] their terms and conditions . . . to significantly decrease the reach and impact of generative AI content that depicts disinformation or misinformation”;
- “Label[ing]” posts deemed to be “disinformation” by government-approved, left-wing fact-checkers;
- “Developing and applying inoculation measures that pre-emptively build resilience against possible and expected disinformation narratives”;44 and
- Taking additional steps to stop “gendered disinformation.”45
These DSA Election Guidelines were branded as voluntary best practices.46 But behind
closed doors, the European Commission made clear that the Election Guidelines were obligatory.
Prabhat Agarwal, the head of the Commission’s DSA enforcement unit, described the Guidelines
as a floor for DSA compliance, telling platforms that if they deviated from the best practices,
they would need to “have alternative measures that are equal or better.”47
Moreover, the European Commission’s election censorship mandates likely had
extraterritorial effects. For example, companies disclose in mandatory reports to the European
44 U.S. agencies used this tactic before the 2020 presidential election to cast a true story about Biden family
influence peddling as Russian disinformation. As a result, Big Tech censored the true story in the weeks preceding
the election. See STAFF OF THE H. COMM. ON THE JUDICIARY AND THE SELECT SUBCOMM. ON THE WEAPONIZATION OF THE FED. GOV’T OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., ELECTION INTERFERENCE: HOW THE FBI “PREBUNKED” A TRUE STORY ABOUT THE BIDEN FAMILY’S CORRUPTION IN ADVANCE OF THE 2020 PRESIDENTIAL ELECTION (Comm. Print Oct. 30, 2024).
45 Commission Guidelines for providers of Very Large Online Platforms and Very Large Online Search Engines on
the mitigation of systemic risks for electoral processes pursuant to Article 35(3) of Regulation (EU) 2022/2065, No.
C/2024/3014 (Apr. 26, 2024) (hereinafter “DSA Election Guidelines”) (emphasis in original).
46 Id.
47 Internal Meta readout of Roundtable on DSA Elections Guidelines (Mar. 1, 2024), see Ex. 243.
Commission their standard election-related “policies, tools, and processes.”48 The
European Commission regularly engages with large social media platforms on what election-
related changes should be made, and hosts DSA-related discussions in non-EU countries.49
Since the DSA came into force in 2023, the European Commission has pressured
platforms to censor content ahead of national elections in Slovakia, the Netherlands, France,
Moldova, Romania, and Ireland, in addition to the EU elections in June 2024.50 Nonpublic
documents produced to the Committee pursuant to subpoena demonstrate how the European
Commission regularly pressured platforms ahead of EU Member State national elections in order
to disadvantage conservative or populist political parties.
Nonpublic meeting agendas and readouts show that the European Commission regularly
convened meetings of national-level regulators, left-wing NGOs, and platforms prior to elections
48 See, e.g., id.; Email from Meta to European Commission (July 10, 2024), see Ex. 166.
49 See, e.g., Email from Meta to European Commission (July 10, 2024), see Ex. 166; Agenda for the 11th Meeting of
the EU Support Hub for International Security and Border Management in Moldova on “Countering Foreign
Information Manipulation and Interference” (Sept. 18, 2024), see Ex. 251.
50 See infra Sec. V.B.iv.
to discuss which political opinions should be censored.51 The European Commission also helped
to organize “rapid response systems” where government-approved third parties were empowered
to make priority censorship requests that almost exclusively targeted the ruling party’s
opposition.52 TikTok reported to the European Commission that it censored over 45,000 pieces
of alleged “misinformation,” including clear political speech on topics including “migration,
climate change, security and defence and LGBTQ rights,” ahead of the 2024 EU elections.53
The 2023 Slovak election is one key example. TikTok’s internal content moderation
guides show that TikTok censored the following “hate speech” while facing European censorship
pressure:
- “There are only two genders”;
- “Children cannot be trans”;
- “We need to stop the sexualization of young people/children”;
- “I think that LGBTI ideology, gender ideology, transgender ideology are a big threat to Slovakia, just like corruption”; and
- “Targeted misgendering.”54
These statements are not “hate speech”—they are political opinions about a current contentious
scientific and medical issue. TikTok itself noted that some of these political opinions were
“common in the Slovak political discussions.”55 Yet, under pressure from the European
Commission, TikTok censored these claims ahead of Slovakia’s national parliamentary elections.
The European Commission took its most aggressive censorship steps during the 2024
Romanian presidential election. In December 2024, Romania’s Constitutional Court annulled the
results of the first round of the previous month’s presidential election, won by little-known
independent populist candidate Calin Georgescu, after Romanian intelligence services alleged
that Russia had covertly supported Georgescu through a coordinated TikTok campaign.56
Internal TikTok documents produced to the Committee seem to undercut this narrative.57 In
submissions to the European Commission, which used the unproven allegation of Russian
interference to investigate TikTok’s content moderation practices, TikTok stated that it “ha[d]
not found, nor been presented with, any evidence of a coordinated network of 25,000 accounts
associated with Mr. Georgescu’s campaign”—the key allegation by the intelligence authorities.58
51 See infra Sec. V.B.
52 Id.
53 TikTok 2024 European Parliament Elections Confidential Report (Sept. 24, 2024), see Ex. 253.
54 TikTok Internal Content Moderation Guidelines for 2023 Slovak Election (Sept. 22, 2023), see Ex. 224.
55 Id.
56 See Thomas Grove & Alan Cullison, Romania Scraps Election After Russian Influence Allegations, WALL ST. J.
(Dec. 6, 2024).
57 See, e.g., TikTok Response to Commission RFI (Dec. 13, 2024), Ex. 268; TikTok Response to Commission RFI
(Dec. 7, 2024), Ex. 266.
58 TikTok Response to Commission RFI (Dec. 7, 2024), see Ex. 266.
By late December 2024, media reports citing evidence from Romania’s tax authority found that
the alleged Russian interference campaign had, in fact, been funded by another Romanian
political party.59 But the election results were never reinstated, and in May 2025, the
establishment-preferred candidate won Romania’s presidency in the rescheduled election.60
TikTok informed the European Commission that it had “not found, nor been presented with, any
evidence” to support Romanian authorities’ key allegation of Russian interference.
The European Commission is continuing to weaponize the DSA to censor content beyond
its borders.
After a decade of censorship, the European Commission continues to abandon Europe’s
historical commitment to free speech.
In December 2025, the European Commission issued its first fine under the Digital
Services Act, targeting X for a litany of ridiculous violations in an obviously pretextual attempt
to penalize the platform for its defense of free speech.61 The European Commission fined X €120
million—slightly below the statutory cap of six percent of global revenue—for alleged violations
59 See Denis Cenusa, Romanian liberals orchestrated Georgescu campaign funding, investigation reveals, BNE
INTELLINEWS (Dec. 22, 2024).
60 See Sarah Rainsford et al., Liberal mayor Dan beats nationalist in tense race for Romanian presidency, BBC
(May 19, 2025).
61 Commission Decision of 5.12.2025 pursuant to Articles 73(1), 73(3) and 74(1) of Regulation (EU) 2022/2065 of
the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and
amending Directive 2000/31/EC (Digital Services Act); Case DSA.100101, DSA.100102 and DSA.100103 – X
(formerly Twitter), C(2025) 8630 final; see Ex. 302 (hereinafter “X Decision”); see also House Judiciary GOP
(@JudiciaryGOP), X (Jan. 28, 2026, 4:09 PM), https://x.com/JudiciaryGOP/status/2016619751183724789.
including “misappropriating” the meaning of blue checkmarks by changing how they were
awarded.62
Moreover, despite the European Commission’s protestations that the DSA applies only in
the EU,63 its X decision enforced the DSA in an extraterritorial manner. The decision asserts that
under the DSA’s researcher access provision, X, an American company, must hand over
American data to researchers around the world—all because of a European law.64 And the
European Commission threatened to ban X in the EU if it does not comply with its censorship
demands.65 The European Commission’s decision to fine X is chilling in at least two distinct
ways: it penalizes X for its global defense of free speech, and it claims the authority to enforce
the DSA globally. It is everything the Committee has warned about for well over a year.
The European Commission accused X of violating the DSA by “misappropriating” the meaning
of a blue checkmark.
Two recent EU initiatives also threaten to worsen the European free speech crisis. Under
President von der Leyen’s “Democracy Shield,” the European Commission will create at least
two new censorship hubs for regulators and left-wing NGOs to pressure platforms to censor
conservative content—the European Center for Democratic Resilience and the European
Network of Fact-Checkers.66 Under the same proposal, the European Commission is seeking to
expand the Disinformation Code to include requirements related to “user verification tools,”
which could effectively end anonymity on the internet by requiring users to show identification
in order to create an account.67 The Commission is also seeking to circumvent normal
democratic processes to create a single, expansive definition of illegal “hate speech” across
Europe.68 This would require every EU member state to adopt the Commission’s definition,
which includes conventional political discourse and “memes.”69 The European censorship threat
shows no signs of abating.
62 Id.
63 See Letter from Thierry Breton, Comm’r for Internal Market, European Comm’n, to Rep. Jim Jordan, Chairman,
H. Comm. on the Judiciary (Aug. 21, 2024).
64 X Decision, supra note 61.
65 Id.
66 Joint Communication to the European Parliament, the Council, the European Economic and Social Committee and
the Committee of the Regions: European Democracy Shield: Empowering Strong and Resilient Democracies,
JOIN(2025) 791 final (hereinafter “Democracy Shield Proposal”).
67 Id. Governments could then compel platforms to produce this information in order to target anonymous speakers
with whom they disagree.
68 Union of Equality: LGBTIQ+ Equality Strategy 2026-2030, EUROPEAN COMM’N, COM(2025) 725 final at 6.
69 DSA Censorship Report I, supra note 3, at 28.
The Committee is conducting its investigation into foreign censorship laws, regulations,
and judicial orders because of the risk they pose to American speech in the United States. The
EU’s DSA, in particular, represents a grave danger to American freedom of speech online: the
European Commission has intentionally pressured technology companies to change their global
content moderation policies, and deliberately targeted American speech and elections. The
European Commission’s extraterritorial actions directly infringe on American sovereignty. The
Committee will continue to develop legislative solutions to defend against and effectively
counter this existential risk to Americans’ most cherished right.
TABLE OF CONTENTS
EXECUTIVE SUMMARY
TABLE OF CONTENTS
I. INVESTIGATIVE HISTORY
II. THE DIGITAL SERVICES ACT IS THE CULMINATION OF EUROPE’S DECADE-LONG CAMPAIGN TO CONTROL ONLINE SPEECH.
A. The European Commission launched coercive “codes of practice” on so-called “disinformation” and “hate speech” to bridge the regulatory gap until the comprehensive censorship law, the DSA, was in place.
B. Germany led the way with its own comprehensive censorship law in 2017.
C. The European Commission treated the Disinformation and Hate Speech Codes as precursors to binding digital censorship legislation, coercing platforms to comply.
D. The DSA is the culmination of the European Commission’s campaign to achieve global online narrative control.
E. The “voluntary” and “consensus-driven” European censorship regulatory regime is neither voluntary nor consensus-driven.
F. Today, the European Commission says explicitly that compliance with the Hate Speech and Disinformation Codes effectively serves as a DSA safe harbor and platforms that do not comply may face retaliation.
III. THE EUROPEAN COMMISSION HAS SUCCESSFULLY PRESSURED PLATFORMS TO CHANGE THEIR GLOBAL CONTENT MODERATION POLICIES, DIRECTLY HARMING AMERICAN SPEECH IN THE UNITED STATES.
A. Because content moderation policies cannot be feasibly or effectively enforced on a country-by-country basis, they are global in scope.
B. The plain text of the Disinformation and Hate Speech Codes requires platforms to change their content moderation rules.
C. The DSA’s text requires platforms to change their content moderation rules.
D. The European Commission attempted to censor speech questioning prevailing government narratives about COVID-19 and vaccines.
E. The European Commission pressured platforms to change their rules for content related to Russia’s invasion of Ukraine.
F. Regulators and NGOs frequently pressured platforms to change rules about moderation of misinformation and disinformation in meetings of Disinformation Code signatories.
G. Since the passage of the DSA, European Commission regulators have regularly pressured platforms to change their global content moderation rules.
H. Platforms changed their global content moderation rules in response to these European Commission efforts.
IV. THE EUROPEAN COMMISSION’S ATTEMPTS TO CENSOR U.S. SPEECH.
A. The European Commission asked platforms to censor U.S. content related to the COVID-19 pandemic.
B. The European Commission interfered in the U.S. political process by engaging with platforms about content related to the 2024 U.S. presidential election.
V. THE EUROPEAN COMMISSION WEAPONIZES ITS CENSORSHIP TOOLS TO SILENCE CONSERVATIVE AND “ANTI-ESTABLISHMENT” POLITICAL SPEECH.
A. The EU Internet Forum encourages platforms to censor legal and non-violative political speech.
B. The European Commission and EU Member State regulators pressure platforms to censor conservative and “anti-establishment” political speech during election periods.
VI. OPPORTUNITIES FOR REFORM EXIST, BUT THE EUROPEAN COMMISSION CONTINUES TO USE THE DSA AS A HEAVY-HANDED CENSORSHIP TOOL.
A. The European Commission fined X €120 million for defending free speech and open discourse online.
B. The European Commission’s ongoing initiatives indicate that it remains committed to censorship.
C. The European Commission seeks to export its censorship measures to other countries.
VII. CONCLUSION
APPENDIX
I. INVESTIGATIVE HISTORY
The Committee on the Judiciary is continuing to investigate how and to what extent
foreign laws, regulations, and judicial orders compel, coerce, or influence companies to censor
speech in the United States.70 The Committee’s focus on European efforts to censor speech in the
United States began in August 2024, when then-European Union (EU) Commissioner for
Internal Market Thierry Breton threatened X with regulatory retaliation under the Digital
Services Act (DSA) for hosting a live interview with President Trump in the United States ahead
of the 2024 U.S. presidential election.71 Although Breton subsequently resigned, his
replacement, Henna Virkkunen, professes a similar pro-censorship ideology.72 To date,
Virkkunen continues to support the DSA’s censorship provisions and actively enforces the law
against American companies.73
To better understand the European threat to free speech in the United States, the
Committee issued document subpoenas to ten major technology companies compelling them, in
part, to produce communications with regulators from the European Commission and EU
Member States related to censorship of online speech.74 The Committee documented its initial
findings in an interim staff report in July 2025.75 The non-public documents produced to the
Committee revealed that the DSA is used as a censorship tool that infringes on online speech,
including American speech in the United States. Specifically, the documents showed that
European regulators use the DSA to: (1) target core political speech that is neither harmful nor
70 See DSA Censorship Report I, supra note 3; Press Release, H. Comm. on the Judiciary, Chairman Jordan
Subpoenas Big Tech for Information on Foreign Censorship of American Speech (Feb. 26, 2025),
https://judiciary.house.gov/media/press-releases/chairmanjordan-subpoenas-big-tech-information-foreign-
censorship-american.
71 Letter from Mr. Thierry Breton, Comm’r for Internal Market, European Comm’n, to Mr. Elon Musk, Owner, X
Corp. (Aug. 12, 2024).
72 See, e.g., Pieter Haeck, EU Won’t Negotiate on Tech Rule Books in Trump Trade Talks, Brussels Says, POLITICO
(July 1, 2025) (“The European Union’s rules on content moderation, digital competition and artificial intelligence
are not up for negotiation with the U.S., the European Commission’s tech chief Henna Virkkunen says.”);
Confirmation Hearing of Henna Virkkunen, Executive Vice-President-Designate of the European Commission,
Jointly by Comm. on Industry, Rsch., and Energy & Comm. on the Internal Mkt. and Consumer Protection of the
European Parliament, Report Hearing, at 13-16 (Nov. 12, 2024).
73 See X Decision, supra note 61.
74 Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Timothy Cook, CEO, Apple (Feb. 26,
2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Andy Jassy,
President and CEO, Amazon (Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H.
Comm. on the Judiciary, to Mr. Satya Nadella, CEO, Microsoft (Feb. 26, 2025) (attaching subpoena); Letter from
Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Christopher Pavlovski, Chairman and CEO, Rumble
(Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr.
Sundar Pichai, CEO, Alphabet (Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H.
Comm. on the Judiciary, to Custodian of Records, TikTok (Feb. 26, 2025) (attaching subpoena); Letter from Rep.
Jim Jordan, Chairman, H. Comm. on the Judiciary, to Ms. Linda Yaccarino, CEO, X (Feb. 26, 2025) (attaching
subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Mark Zuckerberg, CEO,
Meta (Feb. 26, 2025) (attaching subpoena); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to
Mr. Steve Huffman, CEO & President, Reddit (Apr. 17, 2025) (attaching subpoena); Letter from Rep. Jim Jordan,
Chairman, H. Comm. on the Judiciary, to Mr. Sam Altman, CEO, OpenAI (Nov. 5, 2025) (attaching subpoena).
75 DSA Censorship Report I, supra note 3.
illegal; and (2) pressure online platforms to change their global content moderation policies in
response to European Commission demands.76
The documents also revealed how European regulators partner with pro-censorship civil
society organizations (CSOs) to achieve their censorship goals.77 These organizations advocate
for broader definitions of “hate speech” and “disinformation.”78 One CSO, Access Now, even
claimed during a “DSA Workshop” hosted by the European Commission last year that
platforms’ content moderation efforts should “go beyond illegal content and lead to removal of
everything that can be considered as hateful and harmful.”79 In July 2025, to gain a better
understanding of how European regulators interact with third party organizations to censor
online speech, the Committee requested and obtained documents from two CSOs, Access Now
and the Institute for Strategic Dialogue.80
In October 2025, the Committee requested and received documents from Stanford
University after discovering that it hosted a September 2025 event in which the censorship
regulators from several foreign governments sought to coordinate a global censorship
campaign.81 This was not the first time Stanford engaged in a conspiracy against Americans’ free
speech rights: in 2020, the Stanford Internet Observatory played an important role in laundering
U.S. government censorship requests to social media platforms, enabling officials in the U.S.
government to covertly silence voices they disapproved of to influence the 2020 U.S.
presidential election.82
In response to these subpoenas and letters, the Committee has received tens of thousands
of pages of nonpublic, internal platform documents and communications with foreign regulators.
These documents detail the European Commission’s decade-long campaign to censor the global
internet.
76 Id. at 25–36.
77 Id. at 29.
78 Id.
79 Id. at 61.
80 Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Dixon Osburn, Executive Director,
Institute for Strategic Dialogue-US (July 25, 2025); Letter from Rep. Jim Jordan, Chairman, H. Comm. on the
Judiciary to Mr. Alejandro Mayoral Baños, Executive Director, Access Now (July 25, 2025).
81 Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Jeff Hancock, Director, Stanford
Cyber Policy Center (Oct. 22, 2025); see Michael Shellenberger, Obama-Linked Stanford Center Held Secret
Meeting with Foreign Governments to Plot Global Internet Censorship, PUBLIC NEWS (Oct. 28, 2025); Teddy
Ganea et al., Stanford’s Cyber Policy Center Coordinates International Internet Censorship, THE STANFORD REV.
(Oct. 29, 2025).
82 See STAFF OF THE H. COMM. ON THE JUDICIARY AND THE SELECT SUBCOMM. ON THE WEAPONIZATION OF THE FED. GOV’T OF THE H. COMM. ON THE JUDICIARY, THE WEAPONIZATION OF ‘DISINFORMATION’ PSEUDO-EXPERTS AND BUREAUCRATS: HOW THE FEDERAL GOVERNMENT PARTNERED WITH UNIVERSITIES TO CENSOR AMERICANS’ FREE SPEECH (Nov. 6, 2023).
II. THE DIGITAL SERVICES ACT IS THE CULMINATION OF EUROPE’S DECADE-LONG CAMPAIGN TO CONTROL ONLINE SPEECH.
The European Union enacted the DSA in 2022, but its surrounding apparatus dates back
to the mid-2010s, when social media began to play an increasingly important role in political
debate. From the very beginning of the EU’s censorship campaign, senior EU leadership
envisioned a comprehensive digital censorship law giving the European Commission complete
online narrative control. European politicians and regulators were explicit about this objective,
particularly when meeting with platforms directly.
European concerns about so-called “hate speech” took shape in the mid-2010s as mass
migration overwhelmed the continent, igniting new political debates about multiculturalism,
assimilation, and the threat of terrorism.83 The political establishment in the United States and in
Europe blamed the election results in the 2016 U.S. presidential election and the 2017 French
presidential election on Russian interference, rather than a legitimate backlash from their citizens
to unpopular political decisions—most notably mass migration.
Under the pretext of combating so-called “hate speech” or “disinformation,” the
European Commission amassed power over online political discourse. Of course, the line
between “hate speech” and civil discourse or “misinformation” and truth—particularly in
complex, context-dependent political debates—is inherently subjective. Predictably, “hate
speech” and “misinformation” became branding tools that European regulators wielded against
political speech with which they disagreed or which they felt threatened their power. European
policymakers embarked on a decade-long campaign to silence the online speech of their
opponents—in Europe and beyond. The DSA was the culmination of this effort.
83 See, e.g., Austria’s Top Justice Official Explains Europe’s Approach to Hate Crimes and Hate Speech, CATHOLIC
U. OF AMERICA (Feb. 3, 2015) (noting that recent events “refocused Europe’s attention as never before on the
subject of hate speech”); Facebook, Google, and Twitter agree German hate speech deal, BBC (Dec. 15, 2015).
A. The European Commission launched coercive “codes of practice” on so-called
“disinformation” and “hate speech” to bridge the regulatory gap until the
comprehensive censorship law, the DSA, was in place.
In late 2015, the Commission formed the EU Internet Forum (EUIF) to “address[] the
misuse of the internet for terrorist purposes.”84 Since 2015, however, the EUIF has morphed
from a targeted initiative to stop online terrorist recruitment to a broad effort encouraging
platforms to censor legal and non-violative political speech.85 Specifically, EUIF now advises
platforms on how to best censor “borderline content”—that is, lawful content such as “anti-EU”
content, “political satire,” “meme[s],” and “populist rhetoric.”86
Europe’s efforts to control online discourse expanded in 2016 and 2018 with a pair of
“codes of conduct”—supposedly non-binding content moderation pledges from large social
media platforms. The Code of Conduct on Countering Illegal Hate Speech Online was unveiled
in 2016,87 while the Code of Practice on Disinformation was rolled out in 2018.88 Under both
Codes, platforms promised to censor content disfavored by European regulators. For example,
signatories to the Hate Speech Code promised to change their global content moderation rules to
bar “hateful conduct,”89 while Disinformation Code signatories committed to “dilute the
visibility” of alleged “disinformation”—meaning to censor it.90
GLOSSARY OF EU CENSORSHIP ACTIVITIES

European Union Internet Forum (2015): Launched in December 2015 by the European Commission’s Directorate-General for Migration and Home Affairs, EUIF began as an initiative bringing together law enforcement and social media platforms to stop terrorist recruitment online.91 Since then, though, the Commission has used it to aggressively advocate for censorship of legal, non-violative political speech—especially conservative speech, which it often labels as “violent right-wing extremism.”92
84 European Union Internet Forum, EUROPEAN COMM’N (July 25, 2025), https://home-affairs.ec.europa.eu/networks/european-union-internet-forum_en.
85 See infra Sec. V.A.
86 See EU Internet Forum: The Handbook of Borderline Content in Relation to Violent Extremism, see Ex. 38.
87 The EU Code of Conduct on Countering Illegal Hate Speech Online, EUROPEAN COMM’N (June 30, 2016), https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en.
88 2018 Code of Practice on Disinformation, EUROPEAN COMM’N (June 16, 2022), https://digital-strategy.ec.europa.eu/en/library/2018-code-practice-disinformation.
89 The EU Code of Conduct on Countering Illegal Hate Speech Online, EUROPEAN COMM’N (June 30, 2016), https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en.
90 2018 Code of Practice on Disinformation, EUROPEAN COMM’N (June 16, 2022), https://digital-strategy.ec.europa.eu/en/library/2018-code-practice-disinformation.
91 European Union Internet Forum, EUROPEAN COMM’N (July 25, 2025), https://home-affairs.ec.europa.eu/networks/european-union-internet-forum_en.
92 See infra Sec. V.A.
Code of Conduct on Countering Illegal Hate Speech Online (2016): Established by the European Commission in May 2016, the Code’s signatories made 12 commitments, including a promise to censor alleged “hateful conduct.”93 Signatories included Facebook, Instagram, Microsoft, Snapchat, TikTok, and Twitter (now X).94 The Code was replaced by the Code of Conduct on Countering Illegal Hate Speech Online + in 2025.95

Code of Practice on Disinformation (2018): Established by the European Commission in 2018 and revised in 2022, the Code’s signatories made 21 commitments, including a promise to “dilute the visibility” of alleged “disinformation.”96 Platform signatories included Facebook, Google, Microsoft, TikTok, and Twitter (now X), although X withdrew in 2023.97 The Commission describes the Code as the world’s first self-regulatory standards for social media disinformation.98
B. Germany led the way with its own comprehensive censorship law in 2017.
Germany was one of the first EU Member States to take action at the national level,
passing the Network Enforcement Act (NetzDG), a comprehensive digital censorship law of its
own, in 2017. Under NetzDG, upon a user complaint, social media platforms are required to
assess the legality of content under “eighteen separate provisions of German criminal law,”
including draconian provisions criminalizing standard political speech as hate speech.99
Platforms are then required to remove speech deemed illegal within 24 hours.100 In Facebook’s
words, NetzDG created a regime of “when in doubt, delete,” inverting the traditional Western
principle that speech is presumptively lawful.101 Academics likewise criticized NetzDG, stating
that it “incentivizes ‘overblocking’ which could lead to the removal of lawful speech without due
process.”102 After passage, German courts soon ruled that NetzDG required global removals of
content illegal under German law because geo-blocked posts could still be viewed in Germany
with a virtual private network (VPN).103 These early efforts became the foundation of the
burgeoning European campaign to censor online speech worldwide.
93 The EU Code of Conduct on Countering Illegal Hate Speech Online, EUROPEAN COMM’N (June 30, 2016), https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en.
94 Id.
95 See Press Release, European Comm’n, Commission Welcomes the Integration of the Revised Code of Conduct on
Countering Illegal Hate Speech Online into the Digital Services Act (Jan. 19, 2025),
https://ec.europa.eu/commission/presscorner/detail/en/ip_25_300.
96 2018 Code of Practice on Disinformation, EUROPEAN COMM’N (June 16, 2022), https://digital-strategy.ec.europa.eu/en/library/2018-code-practice-disinformation.
97 Id.; Francesca Gillett, Twitter pulls out of voluntary EU disinformation code, BBC (May 27, 2023).
98 Id.
99 Imara McMillan, Enforcement Through the Network: The Network Enforcement Act and Article 10 of the
European Convention on Human Rights, 20 CHI. J. INT'L L. 252, 254 (2019).
100 Id.
101 John Rosenthal, Make Speech Free Again, CLAREMONT REV. OF BOOKS (Spring 2025).
102 Imara McMillan, Enforcement Through the Network: The Network Enforcement Act and Article 10 of the
European Convention on Human Rights, 20 CHI. J. INT'L L. 252, 252 (2019).
103 Id. at 265-266.
C. The European Commission treated the Disinformation and Hate Speech Codes as
precursors to binding digital censorship legislation, coercing platforms to comply.
The Disinformation and Hate Speech Codes were simply a first step towards binding
censorship legislation. As early as 2016, then-EU Commissioner for Justice Vera Jourova said
that if platforms did not adequately comply with the terms of the Hate Speech Code, the EU
would simply make it binding law: “If Facebook, YouTube, Twitter and Microsoft want to
convince me and the ministers that the non-legislative approach can work, they will have to act
quickly and make a strong effort in the coming months.”104 In other words, these “voluntary”
Codes were anything but. Platforms knew that at some point, they would have to comply with
the EU’s censorship demands under threat of massive penalties. The easiest thing to do was to
comply with those demands immediately at the expense of free speech on their platforms.
Indeed, European regulators were already harassing U.S. tech companies on other topics.
Fines reaching billions of euros demonstrated the EU’s ability and willingness to punish non-EU
tech platforms that declined to comply with its demands. Alphabet, for example, was fined a
cumulative €8.2 billion by the Commission between 2017 and 2019 for allegedly favoring
Google Shopping in search results,105 pre-installing Google apps on Google phones,106 and
signing exclusive digital advertising agreements.107 On top of the knowledge that a binding
censorship law was coming, platforms also had to consider the EU’s pre-existing power to target
them with massive fines. This created additional pressure for platforms to censor content.
In 2019, incoming European Commission President Ursula von der Leyen proposed "a
new Digital Services Act,"108 which would have the purpose of "turn[ing] the 'voluntary'
commitments undertaken under the Codes into legal obligations."109
GLOSSARY OF EU CENSORSHIP ACTIVITIES
Digital Services
Act (2022)
The European Union’s comprehensive digital censorship law, introduced
in 2020 and passed in 2022, that imposes significant legal obligations on
the world’s largest social media companies.110 The law requires platforms
to identify and “mitigat[e]” “systemic risks” on their sites, including
“misleading or deceptive content” and “disinformation,” “any actual or
foreseeable negative effects on civic discourse and electoral processes,”
“hate speech,” and “information which is not illegal.”111 Platforms deemed
noncompliant with the DSA can be fined up to six percent of their global
revenue and, in some circumstances, banned from the EU.112
104 Liat Clark, Facebook and Twitter must tackle hate speech or face new laws, WIRED (Dec. 5, 2016).
105 Press Release, European Comm’n, Antitrust: Commission fines Google €2.42 billion for abusing dominance as
search engine by giving illegal advantage to own comparison shopping service (June 26, 2017).
106 Foo Yun Chee, Google challenges record $5 billion EU antitrust fine, REUTERS (Oct. 9, 2018).
107 Press Release, European Comm’n, Antitrust: Commission fines Google €1.49 billion for abusive practices in
online advertising (Mar. 19, 2019), https://ec.europa.eu/commission/presscorner/detail/en/ip_19_1770.
108 Ursula von der Leyen, A Union that strives for more: My agenda for Europe (2019).
109 John Rosenthal, Make Speech Free Again, CLAREMONT REV. OF BOOKS (Spring 2025).
110 Digital Services Act, supra note 26.
111 Id. at recitals 80, 84, Arts. 34–35.
112 Id. at Art. 36, 52.
By June 2020, seven months into President von der Leyen’s tenure, the European
Commission requested comment from tech platforms and other stakeholders on the development
of legislation to address so-called “disinformation” online, which eventually became the DSA.113
TikTok expressed concerns that a stringent DSA could result in “over blocking” of content.114
Yet even before the DSA’s formal introduction, the European Commission already discussed the
law as a fait accompli. In one November 2020 meeting, a Commission staffer stated matter-of-
factly to TikTok that “the DSA will provide general transparency provisions” and follow in the
footsteps of the Codes.115
European Commission regulators discussed the DSA as a foregone conclusion before draft text
was even publicly released.
The European Commission treated the Disinformation and Hate Speech Codes as simply
a precursor to the binding DSA. For example, in December 2020, shortly before the draft DSA
text was released, media reports indicated that the Commission planned to “beef up” the Code of
Practice on Disinformation to “plug the gap until the DSA comes into force.”116 Jourova, by this
point promoted to Vice President of the Commission for Values and Transparency, publicly
stated that this more aggressive Disinformation Code was intended to “support the vaccine
strategy by an efficient fight against disinformation,” indicating that political speech about
COVID-19 vaccine policy was the target of the European Commission’s censorship.117 Jourova
warned that the European Commission expected rapid censorship action, saying that “we are not
going to wait . . . we already have a very clear agreement with the platforms that they will
continue” the censorship measures they initiated in response to the Codes.118
Throughout 2021, the European Commission met with the platforms directly to repeat the
same message: the Disinformation and Hate Speech Codes were a precursor for the DSA, the
113 Natasha Lomas, Europe Asks for Views on Platform Governance and Competition Tools, TECHCRUNCH (June 2,
2020).
114 TikTok White Paper: The Digital Services Act, see Ex. 12.
115 Readout of meeting between TikTok and Staff to European Commission Vice President Vera Jourova (Nov. 6,
2020), see Ex. 51 (emphasis added).
116 Natasha Lomas, Europe to put forward rules for political ads transparency and beef up its disinformation code
next year, TECHCRUNCH (Dec. 3, 2020).
117 Id.; see also infra Sec. III.D.
118 Id.
passage of which was a foregone conclusion. In an April 2021 meeting with Interim TikTok
CEO Vanessa Pappas, Vice President Jourova’s staff called the Disinformation Code “a bridge to
legislation.”119 The staffer also clarified what these censorship obligations were, stating that
“content moderation is important.”120
European Commission Vice President Jourova and her staff cast the Disinformation Code as a
precursor to the DSA that circumvented the legislative process.
One month later, in May 2021, the European Commission formally began the process of
drafting an updated Disinformation Code with the stated goal of “evolving the existing Code of
Practice towards a co-regulatory instrument foreseen under the Digital Services Act (DSA).”121
The European Commission reinforced this message in a July 2021 presentation at “[t]he ninth
EU High level group on combating racism, xenophobia and other forms of intolerance,” stating
that the Disinformation and Hate Speech Codes would “be supervised by the [European]
Commission and the board” and “[could] serve as compliance measures” for the upcoming
DSA.122
119 Readout of meeting between TikTok and European Commission Vice President Vera Jourova (Apr. 20, 2021),
see Ex. 55. Pappas served as interim CEO of TikTok until late April 2021, when she was succeeded by Shou Chew.
See Todd Spangler, TikTok Names New CEO and Chief Operating Officer, VARIETY (Apr. 30, 2021).
120 Id.
121 Guidance on strengthening the Code of Practice on Disinformation, EUROPEAN COMM'N, https://digital-
strategy.ec.europa.eu/en/policies/qa-code-practice-disinformation (last accessed Jan. 29, 2026).
122 European Commission Slide Deck: The ninth EU High level group on combatting racism, xenophobia and other
forms of intolerance (July 7, 2021), see Ex. 42.
The European Commission emphasized in a presentation to platforms that compliance with the
Hate Speech and Disinformation Codes was “supervised” and could serve as a safe harbor
against DSA enforcement.
During the process of rewriting both the Hate Speech Code and the Disinformation Code,
the European Commission sought to add language to clarify that they were de facto binding. In
October 2021, the European Commission proposed adding language to the Disinformation Code
stating that platforms should “adhere and comply” to the Code to comply with the DSA.123
Ultimately, the updated Disinformation Code would state that compliance “should be considered
as a possible risk mitigation measure under Article 35 of the DSA.”124
Similarly, during the 2023 rewrite of the Hate Speech Code, the European Commission
removed draft language about platforms “apply[ing] best efforts” to implement transparency
requirements, replacing it with language saying platforms “commit” to the requirements.125 The
European Commission also removed a draft sentence emphasizing the “voluntary nature” of the
code and acknowledging the “freedom of interested parties to decide whether or not to
participate.”126 These edits, along with the European Commission’s rhetoric in meetings with the
platforms, show that the European Commission wanted to make both Codes mandatory and
binding.
The European Commission’s edits and comment on a draft of the revised Hate Speech Code
show that it views the commitments as binding.
By early 2022, the European Parliament was in advanced negotiations over the final
provisions of the DSA.127 It was becoming clearer both behind closed doors and in public that
123 Draft EU Code of Practice on Disinformation, see Ex. 197; see Email from European Commission staff to Code
of Practice on Disinformation Signatories (Oct. 14, 2021), Ex. 56.
124 Code of Conduct on Disinformation, EUROPEAN COMM'N (Feb. 2025), Preamble (L) (hereinafter “Disinformation
Code”).
125 Draft Code of Conduct (+) on Countering Illegal Hate Speech Online, see Ex. 45.
126 Draft Code of Conduct (+) on Countering Illegal Hate Speech Online (Mar. 18, 2023), see Ex. 43.
127 See Press Release, Council of the European Union, Digital Services Act: Council and European Parliament
provisional agreement for making the internet a safer space for European citizens (Apr. 23, 2022),
https://www.consilium.europa.eu/en/press/press-releases/2022/04/23/digital-services-act-council-and-european-
parliament-reach-deal-on-a-safer-online-space/.
the Hate Speech and Disinformation Codes were effectively a playbook for DSA compliance. In
January 2022, during a meeting of signatories of the Disinformation Code, the Commission
stated that the Disinformation Code “is a mean [sic] for addressing systemic risks on online
platforms,”128 mirroring the language at the core of the DSA.129 Given the lack of alternative
guidance on how to comply with the DSA’s vague provisions, platforms were left little choice
but to implement the Disinformation Code’s censorship requirements.130
The European Commission called compliance with the Disinformation Code a way to “address[]
systemic risks” in accordance with the DSA.
The Council of the European Union and the European Parliament—the two legislative
arms of the EU—made a “provisional agreement” to approve the DSA in April 2022.131 By this
point, the Commission was publicly stating that the Disinformation Code was essentially
mandatory. In June 2022, the Commission tweeted that “the Code of Practice on Disinformation
will be backed up by the Digital Services Act, which means that companies that don’t comply
face fines of up to 6% of global turnover.”132 The European Commission’s effort to make the
“voluntary” Disinformation and Hate Speech Codes mandatory succeeded. And under the DSA,
platforms that refused to engage in the censorship required by the Hate Speech and
Disinformation Codes would be subject to massive fines.
128 New Code of Practice on Disinformation: Minutes of the 4th General Assembly of the Signatories (Jan. 28,
2022), see Ex. 61.
129 Digital Services Act, supra note 26, Art. 34-35.
130 See generally Internal emails among Google staff (June 22, 2023), see Ex. 2.
131 Press Release, Council of the European Union, Digital Services Act: Council and European Parliament
provisional agreement for making the internet a safer space for European citizens (Apr. 23, 2022),
https://www.consilium.europa.eu/en/press/press-releases/2022/04/23/digital-services-act-council-and-european-
parliament-reach-deal-on-a-safer-online-space/.
132 European Commission (@EU_Commission), X (June 16, 2022, 7:09 AM),
https://x.com/EU_Commission/status/1537391801182699521.
D. The DSA is the culmination of the European Commission’s campaign to achieve
global online narrative control.
The Hate Speech and Disinformation Codes were the precursor to the formally binding
digital censorship requirements under the DSA. European Commission officials have repeatedly
confirmed as much. Prabhat Agarwal, the European Commission’s top career official responsible
for implementing the DSA, said during a March 2024 event that the Hate Speech and
Disinformation Codes were intended to inflict “reputational damage” on platforms that were not
censoring enough content to placate the European Commission.133 The DSA, he said, gave
regulators something even better: “the law,” meaning binding censorship obligations.134 The
Hate Speech and Disinformation Codes were the first step in granting the European Commission
a say in platform content moderation. The DSA amounted to a complete takeover.
An internal read-out from a Google employee describes how a top DSA enforcer stated that the
Hate Speech and Disinformation Codes were a precursor to the DSA intended to inflict
“reputational damage on platforms.”
At the same event, Deputy Director-General Renate Nikolay of the Directorate-General
for Communications Networks, Content, and Technology (DG-Connect), the European
Commission office responsible for enforcing the DSA, proudly stated that with the DSA, the
European Commission had “control of recommender systems.”135 Yet, Nikolay said this was
“not enough” and that the European Commission needed to “go further.”136 Nikolay’s comments
confirm what the DSA’s opponents have long argued: the law’s intent was online narrative
control.
An internal email read-out from a Google employee describes how a top EU official said that
censorship under the DSA did not go far enough.
133 Readout of “Protecting The 2024 Elections: From Alarm to Action” (March 8, 2024), see Ex. 244.
134 Id.
135 Id.
136 Id.
E. The “voluntary” and “consensus-driven” European censorship regulatory regime is
neither voluntary nor consensus-driven.
Alongside the DSA’s passage, the Commission attempted to strengthen the supposedly
voluntary codes and forums it had already established, including the Disinformation Code, to
engage more regularly with platforms about content moderation.
For example, in 2022, right as the DSA was about to take effect, the European
Commission updated the 2018 Disinformation Code. Under the new guidelines, platforms had to
participate in a Disinformation Code “Task Force,” which would meet regularly to discuss
platforms’ approach to censoring so-called disinformation.137 The Task Force broke into six
“subgroups” focusing on specific disinformation topics, including fact-checking, elections, and
demonetization of conservative news outlets.138 Across all of these subgroups, there were more
than 90 meetings between platforms, censorious NGOs, and European Commission regulators
between late 2022 and 2024.139
These meetings were a key forum for European Commission regulators to pressure
platforms to change their content moderation rules and take additional censorship steps. For
example, in over a dozen meetings of the Crisis Response Subgroup, the European Commission
inquired about platforms’ “policy changes” “related to fighting disinformation.”140
137 Disinformation Code, supra note 124, § IX.
138 See infra Sec. III.F.ii.
139 Id.
140 See, e.g., Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Dec. 14, 2022), see Ex. 196.
Participation in the Disinformation Code was purportedly voluntary and part of the
Europeans’ approach of conducting a so-called regulatory dialogue with platforms to achieve
“consensus.”141 But the companies knew that was not the case. When European Commission
Vice President Jourova asked Disinformation Code Task Force participants in June 2023 whether
platforms wanted to “open a new subgroup on Generative AI,” Google acknowledged internally
that it did not “really have a choice” whether to join—the Commission expected it.142
After the European Commission created a new Disinformation Code Subgroup, Google
employees discussed internally that “we don’t really have a choice” on whether to join.
141 Internal emails among Google staff (June 22, 2023), see Ex. 2.
142 Id.
Another Google employee explained to his colleagues how these “voluntary” groups
actually work: the agenda is set “under (strong) impetus from the EU Commission” and so-called
“consensus” is reached under heavy pressure from the European Commission “if they disagree”
with the companies.143
Google staff noted that participation in Disinformation Code subgroup meetings was effectively
mandatory, and that the European Commission retained significant control over the agenda and
group initiatives.
F. Today, the European Commission says explicitly that compliance with the Hate
Speech and Disinformation Codes effectively serves as a DSA safe harbor and
platforms that do not comply may face retaliation.
Since the passage of the DSA in October 2022, the European Commission has
consistently held that strict compliance with the Disinformation and Hate Speech Codes is the
only clear way to avoid DSA penalties. The updated Disinformation Code approved in 2022,
shortly before the DSA’s passage, states that compliance with the Code “should be considered as
a possible risk mitigation measure under Article 35 of the DSA.”144 The European Commission
made similar statements during the 2023 update of the Hate Speech Code. In March 2023, at the
outset of the Code’s rewrite, the European Commission told platforms in a Q&A that the Hate
Speech Code “intend[s] to play a part as a mitigation measure under the DSA risk assessment
requirements.”145
143 Id.
144 Disinformation Code, supra note 124, Preamble (L).
145 European Commission Question and answers on the Code revision (Mar. 27, 2023), see Ex. 44.
GLOSSARY OF EU CENSORSHIP ACTIVITIES
Updated Code of Practice on
Disinformation (2022)
The updated Disinformation Code contains 44
disinformation-related commitments and
establishes a framework for signatories to
closely collaborate and exchange
information.146 For example, it effectively
requires platforms to coordinate with
censorious NGOs and biased fact-checkers to
aggressively censor content during election
periods.147 This ostensibly “voluntary” Code
has been integrated into the DSA’s regulatory
framework, meaning that platforms can be
penalized for failing to comply with it.148
Code of Conduct on Countering Illegal
Hate Speech Online + (2025)
The revised Hate Speech Code, which
debuted in January 2025, includes additional
censorship obligations, including “terms and
conditions for addressing illegal hate
speech.”149 This ostensibly “voluntary” Code
has been integrated into the DSA’s regulatory
framework, meaning that platforms can be
penalized for failing to comply with it.150
Perhaps the clearest statement of the Commission’s view of the Hate Speech and
Disinformation Codes came from Renate Nikolay, Deputy Director-General of DG-Connect, in
September 2024. In a meeting of Disinformation Code signatories, Nikolay told platforms that
compliance “could represent a strong mitigation measure” under the DSA.151 Nikolay then took
it one step further, stating that refusal to join “could be taken into account . . . when determining
whether the provider is complying with the obligations laid down by the DSA.”152 A senior
European Commission official confirmed what platforms suspected all along: the censorship
commitments in the Disinformation and Hate Speech Codes were effectively mandatory.
146 2022 Strengthened Code of Practice on Disinformation, EUROPEAN COMM'N (June 16, 2022), https://digital-
strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation.
147 Id.
148 Press Release, European Comm’n, Commission Endorses the Integration of the Voluntary Code of Practice on
Disinformation into the Digital Services Act (Feb. 12, 2025),
https://ec.europa.eu/commission/presscorner/detail/en/ip_25_505.
149 The Code of conduct on countering illegal hate speech online +, EUROPEAN COMM'N (Jan. 20, 2025),
https://digital-strategy.ec.europa.eu/en/library/code-conduct-countering-illegal-hate-speech-online (hereinafter
“Hate Speech Code”).
150 Id.
151 Summary of the Roundtable on Auditing Commitments of the Code of Practice on Disinformation (Sept. 30,
2024), see Ex. 174.
152 Id.
Since 2015, the European Commission has envisioned a comprehensive digital
censorship law giving it control over political discourse in the modern town square. European
regulators began this campaign in 2015, with the EU Internet Forum, and extended it in 2016
with the Hate Speech Code and in 2018 with the Disinformation Code. These censorship
commitments were ostensibly voluntary, but platforms understood the European Commission’s
intent in creating them: the Hate Speech and Disinformation Codes were a precursor to what
would become the Digital Services Act, giving the European Commission expansive
enforcement powers against platforms—including the power to penalize platforms that permitted
open debate on sensitive political topics. This set up a clear incentive for platforms: comply with
the European Commission’s censorship demands even before the DSA took effect, or risk
regulatory retaliation later once the Commission could levy massive fines and even ban
platforms in the EU.153 Today, the Disinformation and Hate Speech Codes are the only safe
harbor against DSA enforcement, making them effectively mandatory for platforms trying to
avoid punitive action from the European Commission.
III. THE EUROPEAN COMMISSION HAS SUCCESSFULLY PRESSURED PLATFORMS TO
CHANGE THEIR GLOBAL CONTENT MODERATION POLICIES, DIRECTLY HARMING AMERICAN
SPEECH IN THE UNITED STATES.
Every major online platform has rules about what content users are permitted to post.
Under various names, such as “community standards” or “community guidelines,” these content
moderation policies are important: they set the boundary for what discourse is allowed in the
modern town square.154 Using these rules, platforms moderate content at scale using artificial
intelligence and other automated tools, shaping political debate worldwide.155 Humans are an
increasingly small part of this process, meaning that for regulators seeking narrative control, the
content moderation rules are the most important pressure point. Effectively coercing platforms to
change these overarching content moderation rules results in censorship of speech around the
globe.
Nonpublic documents show that since 2020, the European Commission has regularly
pressured the world’s largest social media platforms to change their content moderation policies.
The plain text of the Hate Speech Code, the Disinformation Code, and the DSA requires
companies to make changes to their policies to combat vaguely defined categories of harmful
content. Behind closed doors, the pressure has been even greater. The European Commission
153 Digital Services Act, supra note 26, Art. 36, 52.
154 See, e.g., Community Standards, META, https://transparency.meta.com/policies/community-standards/ (last
visited Jan. 29, 2026); YouTube’s Community Guidelines, YOUTUBE HELP,
https://support.google.com/youtube/answer/9288567?hl=en (last visited Jan. 29, 2026); The X Rules, X,
https://help.x.com/en/rules-and-policies/x-rules (last visited Jan. 29, 2026); Community Guidelines, TIKTOK,
https://www.tiktok.com/community-guidelines/en (last visited Jan. 29, 2026).
155 See Content Moderation in a New Era for AI and Automation, META OVERSIGHT BOARD,
https://www.oversightboard.com/news/content-moderation-in-a-new-era-for-ai-and-automation/ (last accessed Jan.
15, 2026) (“Most content moderation decisions are now made by machines, not human beings, and this is only set to
accelerate.”).
repeatedly told platforms to more aggressively censor content related to the COVID-19
pandemic and the Russian invasion of Ukraine—two of the most consequential political events
of the 2020s. This campaign went all the way to the top: European Commission leaders including
Vice President for Values and Transparency Vera Jourova and Commissioner for Internal Market
Thierry Breton regularly met with platform CEOs to press for more censorship. And the pressure
has only intensified since the passage of the DSA, which gave the European Commission broad
new investigative and retaliatory powers to ensure that social media platforms complied with its
demands. Ultimately, the pressure worked: as a result of the European Commission’s years-long
pressure campaign, platforms instituted new global rules censoring, among other things, “coded
statements” that “normalize inequitable treatment” and “media presented out of context.”156 Put
simply, platforms changed their rules to censor true information in the United States because of
European Commission pressure.
A. Because content moderation policies cannot be feasibly or effectively enforced on a
country-by-country basis, they are global in scope.
Critically, platforms’ content moderation rules are—and effectively must be—global in
scope.157 It is impractical and harmful to users’ privacy for large tech companies to maintain
distinct content moderation rules in every jurisdiction across the world.158
First, in order to enforce country-by-country content moderation rules, platforms would
have to know where every single user is located. That is a clear privacy risk. Right now, users
can take steps to stop online platforms from tracking their precise location by “disabling location
sharing or GPS functionality on [their] device.”159 Some platforms, sensitive to user privacy
concerns, do not collect location data at all.160 For platforms to effectively enforce different
country-by-country content moderation rules, they would need access to each user’s precise
location every time the user logged on to the platform. This poses a clear threat to user privacy in
multiple ways. As the global censorship tide rises, governments are more aggressively requiring
platforms to produce account information for users accused of various thought crimes. Precise
location data for these users would allow the government to surveil and target alleged thought
criminals far more easily and precisely. Moreover, platforms would likely have to store this
location data somewhere. This increases the risk of data breaches revealing location and network
data for users around the world.161
156 TikTok Community Guidelines Update Executive Summary (Mar. 20, 2024), see Ex. 8; TikTok Community
Guidelines Survey, see Ex. 15.
157 See, e.g., YouTube Community Guidelines enforcement, GOOGLE TRANSPARENCY REPORT,
https://transparencyreport.google.com/youtube-policy/removals (last visited Jan. 29, 2026) (“YouTube’s Community
Guidelines are enforced consistently across the globe, regardless of where the content is uploaded. When content is
removed for violating our guidelines, it is removed globally.”); Community Guidelines, TIKTOK,
https://www.tiktok.com/support/faq_detail?id=7543604781873371654 (last accessed Jan. 29, 2026) (“Our
Community Guidelines apply to our global community and everything shared on TikTok.”).
158 See DSA Censorship Report I, supra note 3, at 31.
159 Amy Bunn, What Do Social Media Companies Know About You?, MCAFEE (Oct. 29, 2021).
160 Rumble Inc.’s Response to an Order to Produce Records from British Columbia’s Office of Human Rights (Aug.
31, 2022), see Ex. 288.
161 See Geo-IP Blocking: A Double-Edged Sword for Network Firewall Security, HOSTOMIZE,
https://hostomize.com/blog/geo-ip/ (last visited July 21, 2025).
Second, country-specific content moderation regimes are easily circumvented. VPNs
allow users to change their virtual location by connecting to servers in different countries.162
Using VPNs, users can bypass local content restrictions at minimal cost. This makes country-by-
country content moderation a practical impossibility. Indeed, Australia and Brazil have explicitly
ordered global content removals or threatened to fine users who use VPNs to access geo-blocked
content.163 Even if platforms could and did enforce different content moderation rules in each
country, they would still likely encounter pressure from regulators because VPNs would enable
users to easily sidestep any censorious regulatory regime.
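The circumvention problem described above can be illustrated with a minimal sketch of how geo-blocking typically works. Everything here is hypothetical: the IP-prefix table, the blocked-country set, and the function names stand in for a real GeoIP database and a platform's actual enforcement logic, which are far more complex.

```python
# Hypothetical sketch: geo-blocking keys off the viewer's connecting IP
# address, so routing through a VPN exit in another country changes the
# enforcement outcome entirely.

# Stand-in for a GeoIP database (hypothetical prefixes and mappings).
GEOIP = {
    "93.184.": "DE",      # pretend German ISP range
    "203.0.113.": "US",   # pretend U.S. VPN exit range
}

# Hypothetical per-country restriction for a single post.
BLOCKED_IN = {"DE"}

def country_of(ip: str) -> str:
    """Resolve an IP to a country by longest-known prefix (toy lookup)."""
    for prefix, country in GEOIP.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def is_visible(ip: str) -> bool:
    """Geo-blocking decision: hide the post only for viewers in blocked countries."""
    return country_of(ip) not in BLOCKED_IN

# A user connecting from the German range is blocked...
assert not is_visible("93.184.216.34")
# ...but the same user routed through the U.S. VPN exit sees the post again.
assert is_visible("203.0.113.7")
```

The sketch shows why country-scoped enforcement is brittle: the decision depends entirely on the apparent network location, which is exactly the attribute a VPN lets users change at will, and which is why regulators facing this gap tend to demand global removals instead.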
Finally, the cost of country-by-country content moderation is prohibitive. Standing up,
developing, and continually maintaining separate trust and safety teams and distinct content
moderation policies for each nation would be enormously expensive.164 For
large platforms, this would come on top of the massive financial burden of EU regulatory
compliance, which already functions as an extraterritorial tax on American tech innovation.165
And for smaller platforms, the cost would be crippling. They would be driven out of the
marketplace or prevented from entering it in the first place, cementing the market status of the
leading online platforms.
An example of the global reach of European internet regulation is the General Data
Protection Regulation (GDPR). This European law requires pop-up notices on websites informing
users about how their data will be collected and stored.166 While the law is ostensibly targeted at
European users, these pop-ups are common, if not pervasive, in the United States.167 This is a
product of the so-called “Brussels Effect”: because it is invasive, impractical, and ultimately
ineffective to carve up the global internet with different rules for different jurisdictions,
European regulations become de facto global standards.168 EU bureaucrats know and weaponize
that fact. As The New York Times reported, the DSA’s proponents “hope[d] its effects could
extend far beyond Europe, changing company policies in the United States and elsewhere.”169
For these reasons, European pressure to change platform content moderation rules affects
speech globally, including, of course, in the United States. Categories like “misinformation” and
“hate speech” are impossible to define objectively. And worse, governments inevitably use these
vague, malleable categories of content to target their political opponents and cement their own grip
on power. Given the nature of the internet and social media, when Europe pressures social media
162 Ausra Korkuzaite, Best VPN for Geo-Blocking in 2026, CYBERNEWS (last updated July 4, 2025).
163 See, e.g., Tom Crowley, ‘Silly’ to demand global takedowns: Dutton weighs in on eSafety case, AUSTRALIAN
BROADCASTING CORP. (Apr. 25, 2024); Fact Check: Brazilians Can Be Fined for Using VPN to Access X, REUTERS
(Sept. 6, 2024) (last updated Sept. 9, 2024).
164 See, e.g., Trevor Wagener, The High Cost of State-by-State Regulation of Internet Content Moderation,
DISRUPTIVE COMPETITION PROJECT (Mar. 17, 2021).
165 See Carl Schramm, Costs to U.S. Companies from EU Digital Services Regulation, COMPUTER AND COMMC'NS
INDUSTRY ASS'N (July 2025) (calculating the direct cost of European regulatory compliance for U.S. technology
companies at $97.6 billion annually).
166 See Jack Schofield, What should I do about all the GDPR pop-ups on websites?, THE GUARDIAN (July 5, 2018).
167 See Allison Schiff, Why Are So Many US Companies Using Cookie Banners On Their Websites?,
ADEXCHANGER (Apr. 17, 2023) (“Most cookie banners are designed to address EU data protection obligations.”).
168 See Dawn Carla Nunziato, The Digital Services Act and the Brussels Effect on Platform Content Moderation, 24
CHI. J. INT’L L. 115 (2023).
169 Steven Lee Myers, E.U. Law Sets the Stage for a Clash Over Disinformation, N.Y. TIMES (Sept. 27, 2023).
companies to change their content moderation rules, European bureaucrats affect what U.S. users
can see and post. That is why the Committee is conducting oversight of technology companies’
compliance with foreign censorship laws, like the DSA.
B. The plain text of the Disinformation and Hate Speech Codes requires platforms to
change their content moderation rules.
The European Commission’s primary censorship initiatives over the last decade have
been the Code of Practice on Disinformation, the Code of Conduct on Countering Illegal Hate
Speech Online, and the Digital Services Act. The text of each makes clear that platforms should
change their global content moderation rules to comply, meaning that platforms are expected to
censor content deemed “disinformation” or “hate speech” by European regulators.
The Hate Speech Code requires that “signatories . . . have in place terms and conditions
informing users that they prohibit illegal hate speech on their services.”170 Effectively, this
means that platforms are expected to ensure their global content moderation rules censor speech
that would be deemed “illegal hate speech” in Europe. The Committee has previously shown that
European regulators classify conventional political discourse on immigration and other sensitive
topics as “illegal hate speech.”171 Under the plain text of the Hate Speech Code, the European
Commission expects that this content, which is protected by the First Amendment in the United
States, will be censored under platforms’ content moderation policies.
Likewise, the Code of Conduct on Disinformation requires platform signatories to “put in
place or further bolster policies to address both misinformation and disinformation across their
services.”172 In the same way, this functionally means that major social media platforms must
change their content moderation rules to censor content considered “misinformation or
disinformation” by European regulators.
C. The DSA’s text requires platforms to change their content moderation rules.
The text of the DSA also seems to require that platforms change their content moderation
rules. Under Article 34, platforms are directed to identify “systemic risks” present on their
platforms, which are defined to include “misleading or deceptive content,” “disinformation,”
“any actual or foreseeable negative effects on civic discourse and electoral processes,” and “hate
speech.”173 Platforms are specifically warned that this systemic risk may include “information
which is not illegal.”174 Then, under Article 35, platforms must mitigate certain risks, meaning
they ultimately must moderate (i.e., censor) content that European regulators deem “misleading,”
“deceptive,” or “hate[ful].”175 In order to do this at scale, platforms need to continually review
and change their content moderation policies, which are applied globally.176 Indeed, the Hate
Speech Code states that compliance with its requirement to change platform content moderation
170 Hate Speech Code, supra note 149, §1.1.
171 DSA Censorship Report I, supra note 3, at 26-29.
172 Disinformation Code, supra note 124, § IV.
173 Digital Services Act, supra note 26, at recitals 80, 84, Art. 34.
174 Id. at recital 84 (emphasis added).
175 See id. at recitals 80, 84.
176 See supra Sec. III.A.
rules may constitute “appropriate risk mitigation” under the DSA.177 Similarly, the
Disinformation Code states that compliance with its requirement to change platform policies
“should be considered as a possible risk mitigation measure.”178 The combination of massive
penalties179 for DSA non-compliance and a lack of clarity on how to comply180—along with the
European Commission’s pressure—means that platforms have little choice but to take refuge in
the Codes’ safe harbors.
D. The European Commission attempted to censor speech questioning prevailing
government narratives about COVID-19 and vaccines.
From the earliest days of the COVID-19 pandemic, the European Commission tried to
censor speech that questioned prevailing narratives about transmission and vaccination by
coercing platforms to change content moderation rules on COVID-19-related content. Through
the European Commission’s “COVID-19 disinformation monitoring program” and meetings
with platforms from 2020 to 2023, the European Commission pressured platforms to change
their global content moderation rules for content related to the COVID-19 pandemic, silencing
those who questioned government narratives that turned out to be entirely false.181
As early as April 2020, the European Commission was pressuring platforms to moderate
online discourse about COVID-19. In a call between the European Commission and TikTok, the
European Commission warned that “misinformation and disinformation is an important part of
the Covid crisis, will remain an essential part of Commission’s agenda, and reminded of the
importance to continue and further step up the work on addressing Corona disinformation.”182
The DSA was not in place yet, but the Disinformation Code was—and platforms knew that the
DSA would soon follow.183 The pandemic had just begun, and the European Commission was
already beginning to pressure social media platforms to change their policies.
177 Hate Speech Code, supra note 149, at Preamble (I).
178 Disinformation Code, supra note 124, at Preamble (L).
179 Digital Services Act, supra note 26, at Art. 52.
180 See TikTok DSA Risk Assessment Guidelines (Aug. 25, 2023), see Ex. 5 (“There is a high level of complexity
and subjectivity in the risk assessment process.”).
181 See, e.g., Emails between TikTok staff and European Commission staff (Oct. 30, 2020), see Ex. 48; see also
Europe’s Threat to American Speech and Innovation: Hearing of the H. Comm. on the Judiciary, 119th Cong. (Sept.
3, 2025) (statement of Chairman Jordan) (“I always like to point out everything the government told us about
COVID turned out to be wrong . . . . They told us that the virus didn’t come from a lab. It sure looks like it did. They
told us it wasn’t gain-of-function research done at the lab. Yes it was. They told us it wasn’t our tax money used at
the lab. Yes it was. They told us vaccines—they told us that the vaccinated can’t get it. They told us the vaccinated
can’t transmit it. They told us masks work. They told us a six-feet social distance was based on science. They told us
this was the first virus in history where there’s no such thing as natural immunity. They were 0 for 8.”).
182 Email from European Commission staff to TikTok (April 1, 2020), see Ex. 46.
183 See infra Sec. II.C.
On April 1, 2020, the European Commission warned TikTok that it needed to do more to censor
so-called disinformation.
Soon, the European Commission expanded its pressure campaign publicly. In June 2020,
European Commission President von der Leyen launched a “COVID-19 disinformation
monitoring program,” requiring signatories of the Disinformation Code to provide reports—first
monthly, then bimonthly—on their efforts to “demote[] and remove[]” information that the
European Commission considered false and “promote authoritative content” parroting the
government’s preferred narratives.184 The European Commission’s letter asking TikTok to join
the “monitoring program” specified that the European Commission was interested in “policies,
procedures and actions” that platforms enacted “since the outbreak of the COVID-19 crisis as a
response to the concomitant infodemic.”185 These reporting rules were essentially censorship
mandates: platforms that did not remove enough content to placate the European Commission
would be singled out for regulatory retaliation, meaning that platforms had to change their global
content moderation rules to censor at scale content disfavored by the European Commission.
The European Commission privately warned social media platforms that they should change
their COVID-19 content moderation rules.
184 COVID-19 disinformation monitoring programme, EUROPEAN COMM’N, https://digital-
strategy.ec.europa.eu/en/policies/covid-19-disinformation-monitoring (last visited Jan. 29, 2026).
185 Letter from European Commission to TikTok (July 22, 2020), see Ex. 47 (emphasis in original).
Even European companies, generally shielded from the worst of the Commission’s
regulatory actions, felt the pressure. The same month, in June 2020, Filomena Chirico, a high-
ranking advisor to Commissioner Breton, asked to meet with Swedish company Spotify about its
“anti-disinformation actions.”186 At the same time, Chirico scheduled a “DSA discussion” with
Spotify, showing that the DSA was top of mind for platforms and the Commission almost two-
and-a-half years before it became law.187
A high-ranking advisor to Commissioner Breton wanted to meet with Spotify on its “anti-
disinformation actions” and the DSA in 2020.
By October 2020, the European Commission had zeroed in on platforms’ content
moderation rules, asking platforms how they planned to “update [their] terms of service[] or
content moderation practices” ahead of the rollout of COVID-19 vaccines.188 While staff made
the outreach, they told platforms that they were reaching out with “the knowledge of” President
von der Leyen and “the agreement of” Vice President Jourova.189 The message was clear: the
most powerful figures in the Commission expected platforms to change their global content
moderation rules related to COVID-19 vaccines before even a single vaccine had been delivered.
The platforms could not and did not ignore this type of pressure. TikTok, for example, told the
European Commission that it was “monitoring . . . satire related to vaccinations” to determine if
additional censorship was necessary.190 It is a striking parallel to the United States, where the
Biden-Harris Administration successfully “pressured” Meta to remove “humor and satire”
related to COVID-19 vaccinations.191 After the Committee’s oversight brought this to light, Meta
CEO Mark Zuckerberg apologized and promised it would never happen again.192
186 Emails between Spotify staff and European Commission staff (June 7, 2020), see Ex. 18.
187 Id.
188 Emails between TikTok staff and European Commission staff (Oct. 30, 2020), see Ex. 48.
189 Id.
190 TikTok Input to European Commission Request on Covid-19 Vaccination Disinformation (Nov. 4, 2020), see Ex.
49.
191 Letter from Mr. Mark Zuckerberg, CEO, Meta, to Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary (Aug.
26, 2024); see also STAFF OF THE H. COMM. ON THE JUDICIARY AND THE SELECT SUBCOMM. ON THE
WEAPONIZATION OF THE FED. GOV’T OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., THE CENSORSHIP
INDUSTRIAL COMPLEX: HOW TOP BIDEN WHITE HOUSE OFFICIALS COERCED BIG TECH TO CENSOR AMERICANS,
TRUE INFORMATION, AND CRITICS OF THE BIDEN ADMINISTRATION (Comm. Print May 1, 2024).
192 Id.
Pressure to change content moderation rules related to COVID-19 vaccines came from the
highest levels of the European Commission.
The European Commission’s focus on alleged vaccine misinformation continued into
November 2020. In a call with one platform, a high-ranking member of Vice President Jourova’s
staff “explained that vaccines will be our new focus on disinformation on covid.”193
Accordingly, the European Commission stated that platforms would have to report on “actions
taken to fight disinformation and misinformation around COVID vaccines” in future COVID-19
monitoring reports, once again implying that platforms needed to take action—meaning adopt
new content moderation rules—to censor content about COVID-19 vaccines.194 The same email
referenced another European Commission effort to “design[] a targeted work-plan on fighting
disinformation and misinformation around COVID vaccines,” indicating that the European
Commission was in the process of forming more concrete vaccine censorship guidelines for
platforms to follow.195
193 Readout of meeting between TikTok and Staff to European Commission Vice President Vera Jourova (Nov. 6,
2020), see Ex. 51.
194 Emails between TikTok staff and European Commission staff (Nov. 9, 2020), see Ex. 50.
195 Id.
By November 2020, the European Commission was focused on censorship of content related to
COVID-19 vaccines.
The European Commission’s focus on COVID-19-related content moderation policies
continued into 2021. In January, the European Commission extended President von der Leyen’s
disinformation monitoring program and requested that future platform reports have “a stronger
focus” on “measures to remove and/or demote dis- or misinformation related to COVID-19
vaccines.”196 The European Commission pressured platforms on where the Commission wanted
“to see improvement[s],” and offered insight into how it defined misinformation, referring to the
biased, left-wing Global Disinformation Index (GDI) as a source to be trusted.197 The message to
platforms was clear: the Commission wanted them to censor more speech and to do so in
accordance with left-wing organizations like GDI.
196 Emails between TikTok staff and European Commission staff (Jan. 19, 2021), see Ex. 52.
197 Id. The U.S. State Department at one point also worked with the Global Disinformation Index to silence
conservative news organizations in the United States. GDI systematically rates conservative news sites as
“disinformation” while calling left-wing news outlets like The Huffington Post trustworthy. See STAFF OF THE H.
COMM. ON THE JUDICIARY, 118TH CONG., GARM’S HARM: HOW THE WORLD’S BIGGEST BRANDS SEEK TO CONTROL
ONLINE SPEECH (Comm. Print July 10, 2024); Gabe Kaminsky, Disinformation Inc: Government-backed
organization sent $315,000 to group blacklisting conservative news, WASH. FREE BEACON (Feb. 14, 2023).
In January 2021, the European Commission told platforms to step up their censorship.
The next month, in February 2021, European Commission Vice President Jourova met
with “Facebook, Google, TikTok, Twitter, and YouTube” and directly pressed them on their
efforts to censor vaccine-related content.198 Specifically, she “asked what platforms could do to
strengthen the actions to have less toxic content”—tethering her censorship demand not to any
objective class of unlawful content, but to her own personal and subjective opinion of what
content might be “toxic.”199 The European Commission noted its displeasure that “manifestly
toxic content” remained online even after trusted flaggers—government approved third-parties
198 Readout of meeting between European Commission Vice President Vera Jourova and multiple platforms (Feb.
22, 2021), see Ex. 53.
199 Id.
empowered to make priority censorship requests—requested its removal.200 This is precisely what
the First Amendment forbids U.S. government officials from doing: targeting specific speech for
censorship based on their personal belief about the speech’s truthfulness or utility.201 Yet,
because of the global nature of platform content moderation, European censorship demands
likely affected lawful American speech.
Vice President Jourova directed platforms to remove “toxic” vaccine-related content in
February 2021.
Two months later, in April, Vice President Jourova followed up, meeting with interim
TikTok CEO Vanessa Pappas “to discuss [TikTok’s] efforts around disinformation.”202 Jourova
cited the Hate Speech Code, the Disinformation Code, and the forthcoming DSA as reasons that
TikTok had a “responsibility” to “fight[] COVID-19 related disinformation.”203 It is a revealing
comment: Jourova stated that two “voluntary” Codes of Conduct and a law that would not be
200 Id.
201 See Nat’l Rifle Ass’n v. Vullo, 602 U.S. 175 (2024) (“Government officials cannot attempt to coerce private
parties in order to punish or suppress views that the government disfavors.”).
202 Emails between TikTok staff and European Commission staff (Apr. 13, 2021), see Ex. 54.
203 Readout of meeting between TikTok and European Commission Vice President Vera Jourova (Apr. 20, 2021),
see Ex. 19.
signed for another 18 months conferred a censorship “responsibility” on platforms. And
because European regulators’ “suggestions” were actually obligatory, Jourova’s pressure was
effective. Platforms knew that the European Commission would target them using the
forthcoming DSA if they did not comply with the European Commission’s censorship demands
and change their global content moderation rules.
The European Commission’s readout of an April 2021 meeting with TikTok’s interim CEO notes
TikTok’s censorship “responsibility.”
While much of the world had moved past the COVID-19 pandemic by the end of 2021,
the European Commission had not. Once again, it renewed the so-called “disinformation”
monitoring program and imposed new reporting obligations on the platforms.204 This time, the
European Commission tried to do more than just censor thought criminals—it sought to ruin
their financial livelihoods. The European Commission added a new reporting obligation for
“actions taken to demonetize purveyors of Covid-19 and vaccines related disinformation.”205
Once again, platforms left themselves exposed to regulatory reprisal if they did not change their
rules to demonetize creators who criticized the government or prevailing “authoritative”
narratives about the pandemic—many of which, by this time, had been shown conclusively to be
false.206
204 Email from European Commission staff to Code of Practice on Disinformation Signatories (Dec. 8, 2021), see
Ex. 59.
205 Id.
206 See, e.g., Anika Singanayagam et al., Community transmission and viral load kinetics of the SARS-CoV-2 delta
variant in vaccinated and unvaccinated individuals in the UK: a prospective, longitudinal, cohort study, 22 LANCET
INFECTIOUS DISEASES 183 (Feb. 2022) (demonstrating that vaccinated individuals could transmit the COVID-19
virus).
In December 2021, the European Commission directed platforms to demonetize creators whose
content was deemed misinformation.
For years, the European Commission actively sought to shape online debate about the
COVID-19 pandemic and censor speech questioning prevailing government narratives. This
effort was conceived and executed at the highest level of the European Commission’s political
leadership. At the outset of the pandemic, the Commission warned platforms that combatting
alleged COVID-19 misinformation was “essential.”207 Then, under the Disinformation Code, it
forced platforms to report on alleged misinformation on their sites, including how platforms were
changing content moderation rules to silence voices disfavored by the government. With the
DSA on its way, platforms were in a bind: they knew that failure to censor speech now
would be held against them later. The only way out was to change their global content
moderation rules and censor speech worldwide.
207 Email from European Commission staff to TikTok (Apr. 1, 2020), see Ex. 46.
E. The European Commission pressured platforms to change their rules for content
related to Russia’s invasion of Ukraine.
From the beginning of 2022, the European Commission also pressured platforms to more
aggressively censor speech related to Russia’s invasion of Ukraine. In regular meetings
throughout 2022 and 2023, the European Commission asked platforms how they had changed
their content moderation rules in response to the war and urged them to make further changes.
Evidence indicates that the Biden-Harris Administration supported and may have been involved
in these efforts.
i. The European Commission hosted regular meetings with platforms to
pressure them to censor more content related to Russia’s invasion of
Ukraine.
Throughout 2022 and 2023, the European Commission met with platforms and
encouraged them to change their rules to stop alleged misinformation and disinformation related
to the Russian invasion of Ukraine. In January 2022, before the war even began, the Commission
began hosting large group meetings with platforms, in which it regularly asked about platforms’
content moderation rules. In total, there were at least 14 of these meetings, with characteristic
meeting agendas asking platforms about “any new actions taken on your side, including changes
in Terms and Conditions”208 and “new measures/policy adjustments” regarding
“disinformation.”209 The implication that platforms should take such action—as the DSA was in
the final stages of negotiation—was clear.
208 Meeting Agenda from European Commission staff to Code of Practice on Disinformation Signatories (May 11,
2022), see Ex. 73; see Email from European Commission staff to Code of Practice on Disinformation Signatories
(Mar. 30, 2022), see Ex. 67; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 3, 2022), see Ex. 77; Meeting invitation from European Commission staff to Code
of Practice on Disinformation Signatories (May 24, 2022), see Ex. 75; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (Apr. 19, 2022), see Ex. 69; Meeting invitation
from European Commission staff to Code of Practice on Disinformation Signatories (Apr. 4, 2022), see Ex. 68;
Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Mar. 10,
2022), see Ex. 66; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Feb. 16, 2022), see Ex. 63; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 7, 2022), see Ex. 62; Email from European Commission staff to Code of Practice
on Disinformation Signatories (Jan. 27, 2022), see Ex. 60; Meeting Agenda from European Commission staff to
Code of Practice on Disinformation Signatories (June 23, 2022), see Ex. 78; Meeting Agenda from European
Commission staff to Code of Practice on Disinformation Signatories (June 1, 2022), see Ex. 76; Meeting Agenda
from European Commission staff to Code of Practice on Disinformation Signatories (May 18, 2022), see Ex. 74;
Meeting Agenda from European Commission staff to Code of Practice on Disinformation Signatories (May 4,
2022), see Ex. 72; Meeting Agenda from European Commission staff to Code of Practice on Disinformation
Signatories (Apr. 27, 2022), see Ex. 71.
209 Email from European Commission staff to Code of Practice on Disinformation Signatories (Mar. 30, 2022), see
Ex. 67.
The recurring agenda for meetings on disinformation related to Russia’s invasion of Ukraine
explicitly asked platforms to report on changes to their content moderation rules.
The European Commission also made more specific demands of platforms. On March 3,
2022, the European Commission directed Disinformation Code signatories to report to the
Commission on how they were “adapting terms of services” to ensure “a more consistent
labelling, demotion or removal of debunked information or deceptive manipulated material.”210
Implicit in the question is the notion that platforms should be “adapting” their global content
moderation rules in a direction preferred by the European Commission. Moreover, whether
information is “debunked” or “deceptive” is a subjective judgment, increasing the risk that the
European Commission could use these classifications as a guise for censorship of content it does
not like. And “reporting” was simply the European Commission’s pressure tactic of choice—an
easy way to signal to companies that certain content should be censored and that the European
Commission would know if it was not.
210 Emails between TikTok staff and European Commission staff (Mar. 7, 2022), see Ex. 65.
The European Commission asked platforms to report on changes to their content moderation
rules after the Russian invasion of Ukraine.
The same day, on March 3, 2022, Vice President Jourova and Commissioner Breton
jointly sent a letter to TikTok warning that the platform needed to “exert [its] upmost
diligence . . . in applying all elements of the Code of Practice on Disinformation” and “start
showing how the concrete commitments” it made “[could] be operationalized rapidly.”211
Jourova and Breton also told TikTok that it should “adapt[] [its] policies to the changed situation,
in anticipation of the risk-based approach which is at the core of the upcoming Digital Services
Act.”212 While in this case, Jourova and Breton were specifically discussing content moderation
rules related to “war propaganda,”213 the broader principle is troubling—and telling. Six months
before the DSA took effect, the European Commission believed—and told platforms—that the
DSA empowered them to demand changes to global content moderation rules.
A March 2022 letter from the European Commission to TikTok claimed the authority to force
changes to global content moderation rules under the DSA.
Just a couple of weeks later, a March 14 meeting between platforms and the European
Commission on disinformation focused on “any changes to terms of services relevant not to
allow war propaganda and problematic content.”214 While the desire to combat Russian war
propaganda is understandable, the expansion of scope to undefined “problematic content”—
another hopelessly subjective standard—and the belief that European regulators should be able to
211 Letter from Commissioner Thierry Breton and Vice President Vera Jourova to TikTok (Mar. 3, 2022), see Ex. 64.
212 Id.
213 Id.
214 Emails between TikTok staff and European Commission staff (Mar. 14, 2022), see Ex. 21.
influence the global content moderation standards of non-European companies are both deeply
concerning.
European regulators asked for updates on platforms’ “changes to terms of services” to censor
“problematic content.”
As it had with COVID-19 misinformation, the European Commission once again relied
on pseudoscience to substantiate its claim that disinformation related to Russia’s invasion of
Ukraine was prevalent on social media platforms. Later in March 2022, the European
Commission summoned TikTok to a meeting after the left-wing pro-censorship group
NewsGuard reported that misinformation related to the Russian invasion of Ukraine was easily
accessible on the platform.215 NewsGuard, like the Global Disinformation Index, systematically
designates conservative speech as “disinformation” while claiming that progressive speech is
trustworthy.216 TikTok disputed the contents of NewsGuard’s report, stating that the experiment
did not “mimic[] standard viewing behavior.”217
Formal meetings of signatories to the Disinformation Code, including platforms, also
focused on platform content moderation rules related to the Russian invasion and the resulting
war. These rules were discussed in at least ten meetings of the Crisis Response Subgroup of the
Disinformation Code Task Force in 2022 and 2023.218 Similarly, at a “plenary session” of Code
215 Emails between TikTok staff and European Commission staff (Mar. 24, 2022), see Ex. 29.
216 See STAFF OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., GARM’S HARM: HOW THE WORLD’S BIGGEST
BRANDS SEEK TO CONTROL ONLINE SPEECH (Comm. Print July 10, 2024).
217 Emails between TikTok staff and European Commission staff (Mar. 24, 2022), see Ex. 29.
218 Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Jan. 11, 2023), see Ex. 195; see
Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Feb. 22, 2023), see Ex. 193; Meeting
invitation from European Commission staff to Code of Practice on Disinformation Signatories (Feb. 8, 2023), see
Ex. 107; Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Dec. 14, 2022), see Ex. 196;
Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Oct. 27,
2023), see Ex. 143; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Mar. 8, 2023), see Ex. 111; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Nov. 30, 2022), see Ex. 93; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (Nov. 15, 2022), see Ex. 91; Agenda: Meeting of the Permanent
signatories in December 2022, platforms were asked “what measures [they took] to reduce
disinformation on the [war in Ukraine] crisis” in a “spotlight discussion.”219 Top staff to Vice
President Jourova and Commissioner Breton attended and spoke at the plenary.220
The agenda for a December 2022 plenary session of Disinformation Code signatories noted that
platforms would be asked about their efforts to stop disinformation related to Russia’s invasion
of Ukraine.
High-level European Commission officials also continued to meet with platforms. In
September 2022, Vice President Jourova met with Spotify CEO Daniel Ek to “discuss the roles
and responsibilities of digital platforms within the context of the Digital Services Act (DSA).”221
Once again, she used a meeting with a platform executive to call for censorship, “encouraging”
Spotify to “join[] the Code of Practice on Disinformation”—meaning it would have to change its
content moderation rules to censor more content.222 Jourova also confirmed during the meeting
that the DSA represented the EU’s attempt “to engage in the field of information space”—in
other words, to tilt the marketplace of ideas towards those favorable to the European
Commission.223
Task-Force Crisis Response Subgroup (Oct. 19, 2022), see Ex. 202; Email from European Commission staff to Code
of Practice on Disinformation Signatories (Oct. 4, 2022), see Ex. 88.
219 Agenda: Fourth Meeting of the Code of Practice’s Permanent Task-Force (Dec. 6, 2022), see Ex. 95.
220 Id.
221 Readout of meeting between Spotify and European Commission Vice President Vera Jourova (Sept. 7, 2022), see
Ex. 22.
222 Id.
223 Id.
European Commission Vice President Jourova encouraged Spotify to join the Disinformation
Code, bringing with it censorship obligations.
In December 2022, Vice President Jourova and Commissioner Breton sent another letter
in the context of the Russian invasion urging TikTok to moderate alleged disinformation on the
platform.224 Shortly after the letter, in January 2023, Jourova and Breton separately met with
TikTok CEO Shou Chew. Jourova “asked about . . . spread of Russian disinformation,” in
response to which Chew “elaborated on the TikTok investment in the content moderation
practices that aim to limit the effect of hate speech and other ‘toxic content.’”225 The allusion to
“toxic content” harkened back to Jourova’s 2021 meeting with TikTok, indicating that TikTok
224 Letter from Commissioner Thierry Breton and Vice President Vera Jourova to TikTok (Dec. 22, 2022), see Ex.
102.
225 Readout of meeting between TikTok and European Commission Vice President Vera Jourova (Jan. 10, 2023), see
Ex. 25.
acted on her demand to censor such content.226 Chew and Breton also discussed TikTok’s “DSA
implementation plans” and “commitments and work under the EU Code of Practice on
Disinformation.”227
TikTok touted its investment in censoring “toxic content,” which Vice President Jourova had
urged it to do in prior meetings.
Meetings between the European Commission and platforms on misinformation related to
Russia’s invasion of Ukraine raise exactly the same problem as the COVID-19 meetings:
requirements to report to the Commission on content moderation actions, paired with the
Commission’s enormous regulatory power, effectively force platforms to change their global
content moderation rules. In this case, the European Commission repeatedly asked platforms
about global content moderation rules related to the Russian invasion of Ukraine. The platforms
knew what the European Commission wanted—more aggressive global rules—and they knew
they would have to report to the European Commission on whether they made these changes.
Finally, they knew that beginning in 2023, the European Commission would have the power to
issue company-altering fines for violations of the DSA’s exceedingly vague risk mitigation
provision. The combination of these three facts gave platforms little choice but to censor more
speech.
ii. The European Commission may have collaborated with the Biden-Harris
Administration to censor speech related to the Russian invasion of
Ukraine.
The Commission may not have acted alone. Evidence indicates that the Biden-Harris
Administration played a supporting role in the Commission’s efforts to censor global speech
226 See Readout of meeting between European Commission Vice President Vera Jourova and multiple platforms
(Feb. 22, 2021), see Ex. 53.
227 Emails between TikTok staff and European Commission staff (Jan. 4, 2023), see Ex. 24.
about the Russian invasion of Ukraine. In April 2022, the U.S. and the EU signed a joint
“Declaration for the Future of the Internet,” urging global internet regulation and naming “the
spread of disinformation” as a key problem in the online sphere.228 In November, the
Commission hosted a conference on the Declaration, featuring remarks by then-U.S. National
Security Advisor Jake Sullivan.229 Unsurprisingly, EU officials used the conference to call for
aggressive censorship of content related to the Russian invasion of Ukraine. In the conference’s
closing address, Vice President Vera Jourova “made a strong call . . . for platforms to step up
their action and measures to address disinformation on Ukraine in Central and Eastern
Europe.”230
Following the conference and Jourova’s speech, the European Commission asked
platforms to confidentially submit information on “what concrete actions they are planning to
take to improve their measures to reduce disinformation on Ukraine in Central and Eastern
Europe.”231 YouTube responded to the Commission that it “expanded” its global “major violent
events policy to cover content denying, minimizing or trivializing Russia’s invasion in Ukraine”
and “removed more than 80,000 videos and 9,000 channels” for violating YouTube’s content
moderation rules.232 While content denying the Russian invasion of Ukraine may have been
objectively false, once again, capacious terms like “trivialize” indicate that platforms censored
political speech in response to pressure from the European Commission—and potentially the
Biden-Harris Administration.
A conference highlighting a joint internet regulation initiative between the Biden-Harris
Administration and the EU spurred new censorship obligations for platforms.
At best, the Biden-Harris Administration sought to undermine American sovereignty by
pushing for global internet regulation. At worst, it used the European Commission to do what the
228 U.S. DEP’T OF STATE, A DECLARATION FOR THE FUTURE OF THE INTERNET (Apr. 28, 2022).
229 High-level multi-stakeholder event on the Future of the Internet, EUROPEAN COMM’N (Nov. 2, 2022),
https://digital-strategy.ec.europa.eu/en/events/high-level-multi-stakeholder-event-future-internet.
230 Emails between TikTok staff and European Commission staff (Dec. 14, 2022), see Ex. 100 (emphasis omitted).
231 Id.
232 Crisis Response Subgroup: Written input on planned actions to reduce Ukraine related disinformation in Central
and Eastern Europe (Nov. 28, 2022), see Ex. 92.
First Amendment prohibits: pressuring platforms to change their content moderation rules and
censor Americans.
In any event, the European Commission likely did not need the Biden-Harris
Administration’s help. Throughout 2022 and 2023, in at least 14 meetings, the Commission
asked platforms how they had or were planning to change their global content moderation rules
to censor more content related to the Russian invasion of Ukraine.233
F. Regulators and NGOs frequently pressured platforms to change rules about
moderation of misinformation and disinformation in meetings of Disinformation
Code signatories.
In 2022, platforms made additional censorship commitments to the European
Commission under a “strengthened” Disinformation Code.234 These new commitments included
participating in regular meetings of a Disinformation Code Task Force, which would be
responsible for monitoring platforms’ compliance and offer a forum for European regulators and
NGOs to interact with platforms.235 In these meetings—at least 94 between 2022 and 2024—
European regulators had the opportunity to solicit information about platforms’ misinformation
policy changes, implicitly encouraging them to censor more content during the key period when
the DSA was coming into force.
The task force’s work was done primarily in topic-focused subgroups. These included
subgroups on “crisis response,” “integrity of services,” “ad scrutiny,” “fact-checking,”
“generative AI,” and “monitoring and reporting,” as well as a “working group on elections.” In
each of these subgroups, platform representatives met with European Commission regulators and
interested NGOs, offering a regular forum for censorship demands.
Although the Disinformation Code is billed as “voluntary,” these meetings were not. In
one internal Alphabet email chain, employees noted that the company “[did not] really have a
choice” whether to participate—it was effectively mandatory.236 And the meetings occurred
under the watchful eye of the European Commission. Platforms noted internally that the
European Commission had “strong” input on each group’s agenda and “heavily pressed” its
favored initiatives in the meetings.237
233 See supra Sec. III.E.i.
234 See The 2022 Code of Practice on Disinformation, EUROPEAN COMM’N (last accessed Jan. 29, 2026),
https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.
235 Disinformation Code, supra note 124, § IX.
236 Internal emails among Google staff (June 22, 2023), see Ex. 2.
237 Id.
Google staff noted that participation in Disinformation Code subgroup meetings was effectively
mandatory, and that the Commission retained significant control over the agenda and group
initiatives.
i. The European Commission regularly inquired about platforms’
censorship activities during meetings of the Disinformation Code Crisis
Response Subgroup.
The most prolific Disinformation Code subgroup was the Crisis Response Subgroup,
intended to be a forum for platforms to discuss how they were handling misinformation and
disinformation related to large-scale crisis events. Agendas from these meetings, however, show
that they were actually a forum for the European Commission and ideologically-aligned NGOs
to harass platforms—including Facebook, Microsoft, TikTok, Twitter, and Google238—and tell
them to change their content moderation rules.
238 See, e.g., Email from European Commission staff to Code of Practice on Disinformation Signatories (Aug. 10,
2022), see Ex. 81.
Between 2022 and 2024, the Crisis Response Subgroup met at least 30 times, with at
least 13 of those meetings touching on platform content moderation rules.239 Most often, the
Commission inquired about content moderation rules related to COVID-19 and the Russian
invasion of Ukraine. A characteristic meeting agenda asked platforms to share “new
developments and actions related to fighting disinformation,” specifically inquiring about
239 Email from European Commission staff to Code of Practice on Disinformation Signatories (June 27, 2022), see
Ex. 79; Agenda: Second Meeting of the Code of Practice’s Permanent Task-Force (July 5, 2022), see Ex. 80; Email
from European Commission staff to Code of Practice on Disinformation Signatories (Aug. 10, 2022), see Ex. 81;
Agenda: First Meeting of the Permanent Task-Force Crisis Response Subgroup (Aug. 10, 2022), see Ex. 189;
Meeting invitation from European Commission Staff to Code of Practice on Disinformation Signatories (Aug. 24,
2022), see Ex. 82; Email from European Commission staff to Code of Practice on Disinformation Signatories (Sep.
1, 2022), see Ex. 83; Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Sep. 7, 2022), see
Ex. 191; Email from European Commission staff to Code of Practice on Disinformation Signatories (Sep. 20, 2022),
see Ex. 87; Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Sep. 21, 2022), see Ex. 190;
Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Oct. 5, 2022), see Ex. 203; Meeting
invitation from European Commission staff to Code of Practice on Disinformation Signatories (Oct. 19, 2022), see
Ex. 89; Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Oct. 19, 2022), see Ex. 202;
Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Nov. 15,
2022), see Ex. 91; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Nov. 30, 2022), see Ex. 93; Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup
(Nov. 30, 2022), see Ex. 201; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Dec. 8, 2022), see Ex. 97; Meeting invitation from European Commission staff to Code
of Practice on Disinformation Signatories (Dec. 14, 2022), see Ex. 101; Agenda: Meeting of the Permanent Task-
Force Crisis Response Subgroup (Dec. 14, 2022), see Ex. 196; Meeting invitation from European Commission staff
to Code of Practice on Disinformation Signatories (Jan. 11, 2023), see Ex. 103; Agenda: Meeting of the Permanent
Task-Force Crisis Response Subgroup (Jan. 11, 2023), see Ex. 195; Meeting invitation from European Commission
staff to Code of Practice on Disinformation Signatories (Feb. 8, 2023), see Ex. 107; Agenda: Meeting of the
Permanent Task-Force Crisis Response Subgroup (Feb. 8, 2023), see Ex. 194; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (Feb. 22, 2023), see Ex. 110; Agenda: Meeting
of the Permanent Task-Force Crisis Response Subgroup (Feb. 22, 2023), see Ex. 193; Meeting invitation from
European Commission staff to Code of Practice on Disinformation Signatories (Mar. 8, 2023), see Ex. 111; Agenda:
Meeting of the Permanent Task-Force Crisis Response Subgroup (Mar. 8, 2023), see Ex. 199; Meeting invitation
from European Commission staff to Code of Practice on Disinformation Signatories (Mar. 24, 2023), see Ex. 113;
Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Apr. 19,
2023), see Ex. 116; Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Apr. 19, 2023), see
Ex. 192; Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories
(May 16, 2023), see Ex. 121; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 16, 2023), see Ex. 120; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (June 23, 2023), see Ex. 125; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (Oct. 9, 2023), see Ex. 140; Meeting invitation
from European Commission staff to Code of Practice on Disinformation Signatories (Oct. 27, 2023), see Ex. 143;
Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Jan. 24,
2024), see Ex. 150; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Apr. 17, 2024), see Ex. 156; Meeting invitation from European Commission staff to Code of Practice
on Disinformation Signatories (May 15, 2024), see Ex. 158; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (June 12, 2024), see Ex. 165; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (July 10, 2024), see Ex. 167, Meeting invitation
from European Commission staff to Code of Practice on Disinformation Signatories (Aug. 7, 2024), see Ex. 169;
Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Sept. 4,
2024), see Ex. 171; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Oct. 2, 2024), see Ex. 176.
“policy changes”—ensuring that the Commission knew in real-time whether platforms were
responding to its censorship inquiries.240
A characteristic agenda for meetings between the European Commission, platforms, and NGOs
where the Commission applied pressure to change content moderation policies.
Sometimes, the Commission made additional demands of the subgroup. In one December
2022 meeting, after the updated Disinformation Code had been enacted, the group sought to
decide how “crisis reporting” would work under the new Disinformation Code.241 Platforms
240 Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Dec. 14, 2022) (emphasis added), see
Ex. 196.
241 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Dec. 8,
2022), see Ex. 97.
wanted to report only on “policy-violative” misinformation—content that was removed for
breaking their content moderation guidelines (which were also subject to Commission
pressure).242 The Commission’s proposed edits to the draft Disinformation Code indicate that it
wanted more—information on all alleged misinformation related to COVID-19 or the Russian
invasion of Ukraine, whether it broke platform rules or not.243 The Commission’s aim was clear:
to establish which platforms were not censoring enough so it could pressure them to change their
global content moderation policies using the recently-passed DSA.
The European Commission’s edits to the draft Disinformation Code, in blue, expanded the scope
of misinformation reporting from “policy-violative” misinformation to all misinformation.
European Commission regulators also frequently made election-related demands during
meetings of the Crisis Response subgroup. After the Slovak parliamentary elections in
September 2023, the Commission asked platforms to report on “actions taken to mitigate”
alleged “disinformation” during the election.244 And in January 2024, six months before the
European Parliament elections, the Commission notified platforms that it would “put a strong
emphasis on . . . your updates on the state of play of your preparations for the EP elections” over
the coming months—meaning that it wanted frequent updates about additional censorship
measures ahead of the EU elections.245
The Crisis Response Subgroup was one of the key mechanisms by which the
Commission kept platforms under a close, watchful eye. In at least 29 meetings across three
years, on topics including COVID-19, the Russian invasion of Ukraine, and key European
elections, the European Commission repeatedly asked platforms about how they were censoring
alleged misinformation and disinformation—implying that the platforms needed to change their
“policies” that apply worldwide to do so.
242 Draft form for Reporting on the service’s response during the [COVID/Ukraine] crisis, see Ex. 200.
243 Id.
244 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Oct. 9,
2023), see Ex. 140.
245 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Jan. 24,
2024), see Ex. 150.
ii. Other subgroups offered additional avenues for the Commission to make
censorship demands of companies.
The Crisis Response Subgroup may have been the most prolific subgroup working under
the auspices of the Disinformation Code Task Force, but it was far from the only one. European
Commission regulators appear to have encouraged censorship in meetings of at least six other
subgroups. They are briefly discussed here.
- Integrity of Services Subgroup. “Integrity of Services” is the Commission’s
euphemism for ‘services that only allow favored content.’246 This subgroup met at
least five times in 2022 and 2023.247
246 See Disinformation Code, supra note 124, § 4 (“Relevant Signatories recognise the importance of intensifying
and demonstrating the effectiveness of efforts to ensure the integrity of services by implementing and promoting
safeguards against both misinformation and disinformation.”)
247 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Dec. 13,
2022), see Ex. 99; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Feb. 16, 2023), see Ex. 108; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 27, 2023), see Ex. 109; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (Oct. 2, 2023), see Ex. 134; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (Oct. 13, 2023), see Ex. 141.
- Ad Scrutiny Subgroup. This subgroup sought to stop ad dollars from flowing to
websites alleged to host “disinformation”248—most often, conservative news outlets.
This subgroup was led by organizations at the center of a similar American effort,
including the Global Alliance for Responsible Media (GARM) and NewsGuard,
indicating that the Commission sought to organize a similar campaign in Europe.249
The Committee’s oversight has previously found that GARM colluded with foreign
regulators to pressure Twitter to censor more speech in exchange for additional
advertising dollars.250 GARM’s co-founder, Robert Rakowitz, “stated that silencing
President Trump was his ‘main thing,’ likening the President’s rhetoric to a
‘contagion.’”251 Following the Committee’s oversight, GARM ceased operations in
August 2024.252 The Ad Scrutiny Subgroup met at least eight times in 2022 and
2023,253 and sought to require platforms to report to the Commission on the volume
of ad dollars that platforms blocked from flowing to alleged disinformation sites.254
- Fact-Checking Subgroup. This subgroup was intended to “empower” fact-
checkers—most often, censorious left-wing NGOs—and encourage platforms to
listen to them.255 Internal materials indicate that the group also sought to create a
“repository of fact-checks,” meaning a database of Commission-approved narratives
about leading political and cultural events.256 Platforms would then be expected to
cross-reference against this database to determine what needed to be censored under
the DSA, and what did not. During these meetings, Commission regulators and NGOs
248 New SLI on demonetisation efforts, capturing the financial value (Euros) of actions taken, see Ex. 188.
249 See STAFF OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., GARM’S HARM: HOW THE WORLD’S BIGGEST
BRANDS SEEK TO CONTROL ONLINE SPEECH (Comm. Print July 10, 2024); STAFF OF THE H. COMM. ON THE
JUDICIARY, 119TH CONG., EXPORTING CENSORSHIP: HOW GARM’S ADVERTISING CARTEL HELPED CORPORATIONS
COLLUDE WITH FOREIGN GOVERNMENTS TO SILENCE AMERICAN SPEECH (Comm. Print June 27, 2025).
250 STAFF OF THE H. COMM. ON THE JUDICIARY, 119TH CONG., EXPORTING CENSORSHIP: HOW GARM’S
ADVERTISING CARTEL HELPED CORPORATIONS COLLUDE WITH FOREIGN GOVERNMENTS TO SILENCE AMERICAN
SPEECH at 2 (Comm. Print June 27, 2025).
251 Id. at 2.
252 WFA Discontinues GARM, WORLD FEDERATION OF ADVERTISERS (Aug. 9, 2024).
253 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Sep. 2,
2022), see Ex. 84; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Sep. 19, 2022), see Ex. 86; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Jan. 30, 2023), see Ex. 104; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (Feb. 13, 2023), see Ex. 105; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (May 30, 2023), see Ex. 106; Meeting
invitation from European Commission staff to Code of Practice on Disinformation Signatories (June 13, 2023), see
Ex. 124; Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories
(July 3, 2023), see Ex. 127; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 2, 2023), see Ex. 137.
254 New SLI on demonetisation efforts, capturing the financial value (Euros) of actions taken, see Ex. 188.
255 Email from European Commission staff to Code of Practice on Disinformation Signatories (Mar. 27, 2023), see
Ex. 114.
256 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Mar. 30,
2023), see Ex. 115.
also evaluated data related to platforms’ use of fact-checkers.257 The Fact-Checking
Subgroup met at least 18 times in 2023 and 2024.258
- Generative AI Subgroup. This group met at least three times in late 2023 and early
2024 to discuss platforms’ approach to AI-generated content.259 The Commission has
consistently called for aggressive censorship of this content,260 and subgroup
members included Logically.AI, a British firm that has used AI tools to target social
media content for censorship on behalf of governments around the world.261
- Working Group on Elections. In addition to the Crisis Response Subgroup’s
election focus, the Commission created an additional group of Disinformation Code
signatories where platforms and regulators could discuss elections exclusively. With
the Commission’s express backing, fact-checkers asked platforms about “what
measures they [were] putting in place” ahead of certain European elections,
specifically inquiring about “pre-bunking content.”262 The Commission’s election
activities were far from even-handed.263 This was likely a key touch-point for the
Commission to share its censorship expectations with platforms ahead of elections.
The Elections Working Group met at least eight times in 2023.264
257 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Nov. 6,
2023), see Ex. 145.
258 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Mar. 30,
2023), see Ex. 115; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Apr. 26, 2023), see Ex. 117; Meeting invitation from European Commission staff to Code of Practice
on Disinformation Signatories (May 10, 2023), see Ex. 118; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (May 5, 2023), see Ex. 119; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (June 6, 2023), see Ex. 123; Meeting invitation
from European Commission staff to Code of Practice on Disinformation Signatories (June 28, 2023), see Ex. 126;
Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (July 13,
2023), see Ex. 128; Meeting invitation from European Commission staff to Code of Practice on Disinformation
Signatories (Sep. 6, 2023), see Ex. 133; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 22, 2023), see Ex. 135; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (Oct. 3, 2023), see Ex. 138; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (Oct. 18, 2023), see Ex. 142; Meeting
invitation from European Commission staff to Code of Practice on Disinformation Signatories (Nov. 6, 2023), see
Ex. 145; Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories
(Nov. 6, 2023), see Ex. 144.
259 Email from European Commission staff to Code of Practice on Disinformation Signatories (July 25, 2023), see
Ex. 129; Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories
(Nov. 17, 2023), see Ex. 146; Email from European Commission staff to Code of Practice on Disinformation
Signatories (Dec. 18, 2023), see Ex. 149.
260 See infra Sec. III.G.ii.
261 See Lee Fang, Logically.AI of Britain and the Expanding Global Reach of Censorship,
REALCLEARINVESTIGATIONS (Jan. 25, 2024).
262 Emails from European Commission staff to members of the Working Group on Elections (Sep. 1, 2023), see Ex.
220.
263 See infra Sec. V.B.
264 Meeting invitation from European Commission staff to the members of the Working Group on Elections (Apr.
27, 2023), see Ex. 212; Meeting agenda from European Commission staff to members of the Working Group on
Elections (June 22, 2023), see Ex. 213; Meeting invitation from European Commission staff to the members of the
Working Group on Elections (July 18, 2023), see Ex. 215; Meeting invitation from European Commission staff to
- Steering Committee of the Crisis Response Subgroup and Working Group on
Elections. This group included select members of the Crisis Response Subgroup and the
Working Group on Elections and developed a “risk assessment methodology and a
rapid response system for crisis situations.”265 “Risk assessment methodology”
appears to refer to a system of best practices for DSA compliance, while Rapid
Response Systems allowed certain government-approved third parties to make fast-
track censorship requests to platforms ahead of major events, including elections
around Europe.266 This group met at least seven times in late 2023.267
The Commission also hosted periodic “plenary sessions,” in which subgroups reported
their work out to all signatories of the Disinformation Code and discussed the Disinformation
Code’s formal incorporation as a Code of Practice under the DSA.268 These events featured
remarks from top Commission officials, including Vice President Vera Jourova and
Commissioner Thierry Breton. Disinformation Code signatories met in plenary at least 15 times
between 2022 and 2025.269
the members of the Working Group on Elections (Sep. 5, 2023), see Ex. 216; Meeting invitation from European
Commission staff to the members of the Working Group on Elections (Sep. 15, 2023), see Ex. 217; Meeting
invitation from European Commission staff to the members of the Working Group on Elections (Sep. 20, 2023), see
Ex. 223; Meeting invitation from European Commission staff to the members of the Working Group on Elections
(Nov. 30, 2023), see Ex. 233; Meeting invitation from European Commission staff to the members of the Working
Group on Elections (Dec. 20, 2023), see Ex. 235.
265 Email from European Commission staff to Code of Practice on Disinformation Signatories (Sep. 6, 2023), see
Ex. 132.
266 See infra Sec. V.B.
267 Meeting invitation from European Commission staff to the Crisis/Elections Steering Committee (Sep. 25, 2023),
see Ex. 225; Meeting invitation from European Commission staff to the Crisis/Elections Steering Committee (Oct.
6, 2023), see Ex. 226; Meeting invitation from European Commission staff to the Crisis/Elections Steering
Committee (Oct. 13, 2023), see Ex. 227; Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Oct. 23, 2023), see Ex. 228; Meeting invitation from European Commission staff to the
Crisis/Elections Steering Committee (Oct. 27, 2023), see Ex. 229; Meeting invitation from European Commission
staff to the Crisis/Elections Steering Committee (Nov. 14, 2023), see Ex. 232; Meeting invitation from European
Commission staff to the Crisis/Elections Steering Committee (Nov. 30, 2023), see Ex. 234.
268 Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories (Sep. 14,
2022), see Ex. 85; Email from European Commission staff to Code of Practice on Disinformation Signatories (Dec.
2, 2022), see Ex. 94; Agenda: Fifth plenary meeting of the Code of Practice’s permanent Task-Force (June 5, 2023),
see Ex. 122; Emails between European Commission staff and TikTok staff (May 23, 2024), see Ex. 160; Meeting
invitation from European Commission staff to Code of Practice on Disinformation Signatories (May 23, 2024), see
Ex. 161; Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories
(June 6, 2024), see Ex. 164; Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 21, 2024), see Ex. 246; Meeting invitation from European Commission staff to
Code of Practice on Disinformation Signatories (Sep. 16, 2024), see Ex. 173; Agenda: Eighth Plenary Meeting of
the Code of Practice’s Permanent Task-Force (Oct. 1, 2024), see Ex. 254; Meeting invitation from European
Commission staff to Code of Practice on Disinformation Signatories (Dec. 13, 2024), see Ex. 180; Meeting
invitation from European Commission staff to Code of Practice on Disinformation Signatories (Dec. 16, 2024), see
Ex. 182; Meeting invitation from European Commission staff to Code of Practice on Disinformation Signatories
(Dec. 16, 2024), see Ex. 187.
269 Id.
iii. The Commission monitored platforms’ participation in these groups.
The sheer number of meetings imposed significant burdens on platform personnel. On
top of the onerous obligations of DSA and Disinformation Code compliance, staff now spent
hours being lectured by European Commission regulators and censorious NGOs. One Microsoft
staffer wrote to the Commission in July 2023, noting that the meetings created a “tremendous
workload” for Disinformation Code signatories.270
A Microsoft employee described the workload associated with meetings of the Disinformation
Code Task Force as “tremendous.”
The European Commission would even sternly warn platforms if they did not adequately
engage in these “voluntary” commitments. On November 29, 2023, the Commission emailed
representatives from Google, Meta, Microsoft, and TikTok saying that it “regretted to see a
substantial drop in engagement” from platforms during Crisis Response Subgroup meetings.271
Implicit in the message was that platform staff should not take a step back from these ostensibly
voluntary meetings. These Disinformation Code meetings were critical touchpoints for the
European Commission to convey censorship demands to companies, so the Commission warned
them that it was keeping track of who was—and more importantly, who was not—showing up.
270 Emails among Commission staff and Code of Practice on Disinformation Signatories (July 20, 2023), see Ex.
131.
271 Email from European Commission staff to Code of Practice on Disinformation Signatories (Nov. 29, 2023), see
Ex. 147.
The European Commission warned platforms that it was monitoring who was showing up to
meetings of Disinformation Code signatories.
G. Since the passage of the DSA, European Commission regulators have regularly
pressured platforms to change their global content moderation rules.
The DSA was signed into law in October 2022 and became enforceable in August 2023.
Platforms had known the DSA was coming since 2020 and acted accordingly. Nonetheless, the
DSA cemented the European Commission’s global censorship campaign, granting it sweeping
new enforcement powers that could—and would—be used to force non-EU platforms to change
their global content moderation policies and censor content disfavored by European bureaucrats.
i. Before the DSA took effect, the European Commission told platforms that
changes to content moderation rules would be a key DSA “risk
mitigation.”
In the pre-enforcement period between October 2022 and August 2023, the European
Commission engaged extensively with platforms about the obligations they faced under the
DSA. Unsurprisingly, the European Commission had a clear message: platforms would need to
change their global content moderation rules to comply with Europe’s sweeping new digital
censorship law. In fact, this messaging began even before the DSA was officially signed. At a conference
attended by regulators and platforms in September 2022, the European Commission presented on
the “Digital Services Act & Algorithmic amplification.”272 The Commission’s presentation
stated that risk mitigation—a requirement under Article 35 of the DSA—included “adapting
content moderation or recommender systems” and “cooperating through Codes of Conduct and
Crisis Protocols.”273 The conference further confirmed that platforms would need to change their
content moderation rules to comply with the DSA and that the Hate Speech and Disinformation
Codes were not truly voluntary.
The European Commission’s presentation to platforms noted that DSA risk mitigation included
changes to “content moderation . . . systems” and compliance with Codes of Conduct.
The European Commission also met individually with platforms in the fall of 2022. In
September, the Commission met with TikTok for a baseline discussion on how it “deal[t] with
transparency and content moderation more broadly”—clear evidence that the Commission
considered content moderation in bounds for regulation under the DSA.274 In November, the
Commission met with TikTok again—this time to discuss its “approach” to DSA “risk
assessment.”275
Finally, shortly before the DSA took effect in August 2023, the Commission sent
platforms a document detailing its “expectations” for DSA risk assessment and mitigation. The
document noted that risk assessment should focus on “content moderation systems and
applicable terms and conditions and their enforcement.”276 Under the DSA, platforms are
required to “mitigat[e]” risks that they assess.277 Risks posed by “content moderation systems
272 European Commission Slide Deck: Digital Services Act & Algorithmic amplification (Sep. 29, 2022), see Ex. 30.
273 Id.
274 Emails between TikTok staff and European Commission staff (Nov. 9, 2022), see Ex. 23.
275 Id.
276 Annex: Information on the Risk Assessment Reports (Aug. 11, 2023), see Ex. 289.
277 Digital Services Act, supra note 26, art. 35.
and applicable terms and conditions”278 had to be mitigated by altering the systems and the
terms. It was the clearest evidence yet that the Commission intended to use the DSA to regulate
platforms’ global content moderation rules.
ii. Formal requests for information (RFIs) under the DSA inquire about
platforms’ content moderation rules and imply that changes may be
required for compliance.
Enforcement of the DSA began rapidly after it came into force. Within weeks, the
Commission opened investigations into platforms’ moderation of “hate speech,” “illegal
content,” and election-related content.279 Formal documents opening DSA investigations, known
as “requests for information” (RFIs), shine a light on what type of content the Commission
wanted platforms to moderate. Unsurprisingly, the Commission directed platforms to change
their global content moderation rules to censor content it disfavored.
After Hamas’s brutal terrorist attack on Israel on October 7, 2023, the Commission issued
an RFI to TikTok on October 19, asking it “questions related to . . . applicable terms and
conditions and their enforcement,” including whether “TikTok put in place any additional
measures . . . to mitigate the risks related to wide dissemination of violent behaviors and hate
speech in the context of the Hamas-Israeli conflict.”280 Unlike prior requests, this one came while the DSA was in
full effect, and the threat of retaliation was real. DSA proceedings and fines reaching six percent
of global revenue were no longer a potential future threat.281 If the Commission did not
approve of TikTok’s answers to these specific questions and if the Commission did not approve
of the “measures” that TikTok adopted, then the Commission could begin the process of levying
massive financial penalties. Here, the European Commission, in one of its first RFIs under the
DSA, not-so-subtly urged TikTok to take additional censorship “measures,” which could take the
form of global content moderation rule changes.282
278 Annex: Information on the Risk Assessment Reports (Aug. 11, 2023), see Ex. 289.
279 See Daily News 19/10/2023, EUROPEAN COMM’N (Oct. 19, 2023),
https://ec.europa.eu/commission/presscorner/detail/en/mex_23_5145.
280 Commission RFI to TikTok (Oct. 19, 2023), see Ex. 290; TikTok Response to Commission RFI (Nov. 4, 2023),
see Ex. 6; TikTok Response to Commission RFI (Nov. 17, 2023), see Ex. 7.
281 Digital Services Act, supra note 26, art. 52.
282 Commission RFI to TikTok (Oct. 19, 2023), see Ex. 290; TikTok Response to Commission RFI (Nov. 4, 2023),
see Ex. 6; TikTok Response to Commission RFI (Nov. 17, 2023), see Ex. 7.
The European Commission’s first RFI under the DSA asked questions about TikTok’s global
content moderation rules related to the October 7 attacks.
In March 2024, the Commission issued TikTok an RFI on AI-generated content, once
again asking questions about—and thereby implying that changes were required in—its global
content moderation rules.283 This time, the Commission more specifically targeted TikTok’s
content moderation policies, asking for information about “TikTok’s internal guidelines,
policies, and practices” and “content moderation actions” for AI-generated content.284 The
Commission also asked TikTok for information about “policies or procedures in place to address
issues that may arise related to the viral dissemination of Generative AI content” and how
TikTok prevents AI systems from “resembl[ing] existing persons, objects, places, entities, [and]
283 Commission RFI to TikTok (Mar. 14, 2024), see Ex. 291.
284 Id.
events.”285 The RFI once again shows the Commission’s abiding focus on platforms’ global
content moderation rules. It also indicates that the Commission treats AI-generated content as an
ipso facto threat rather than as a tool that could help individuals express themselves more
fully. Indeed, the Commission wrote that Generative AI content might pose a systemic risk under
the DSA—thereby imposing a censorship obligation on platforms—because of its potential
“negative effects” on “human dignity.”286 This standard is completely malleable and offers little
clarity, potentially leading to over-removal of lawful content. In response to this RFI, TikTok
noted to the Commission that it planned to introduce “large scale moderation” of AI-generated
content using automated systems and to demote AI content deemed “potentially misleading.”287
A March 2024 RFI implied that TikTok needed to do more to censor AI-generated content.
Finally, in October 2024, the European Commission issued an RFI inquiring how
TikTok’s content moderation and recommender systems affect “electoral processes and civic
discourse.”288 The Commission demanded documents “analyzing and assessing whether and how
all of TikTok’s recommender systems and other relevant algorithmic systems influence risks to
electoral processes, civic discourse and public security, [and] illegal hate speech,” among other
topics.289 Once again, this represented an attempt by the Commission to pressure TikTok into
changing its content moderation policies. Moreover, given that platforms’ policies are global in
scope, the European Commission’s pressure regarding election-related speech policies may have
an impact in elections outside of the European Union.290
285 Id.
286 Id.
287 Letter from TikTok to European Commission (Apr. 5, 2024), see Ex. 9.
288 Commission RFI to TikTok (Oct. 2, 2024), see Ex. 296; see TikTok Response to Commission RFI (Oct. 2, 2024),
Ex. 295.
289 Commission RFI to TikTok (Oct. 2, 2024), see Ex. 296.
290 See infra Sec. V.B.
iii. The European Commission’s “best practices” for DSA compliance urge
platforms to change their content moderation rules.
In May 2025, the European Commission hosted a “DSA Systemic Risk Assessment
Workshop” with regulators, platforms, and NGOs.291 At the workshop, the Commission
presented “best practices” for compliance with the DSA. These best practices included
“continuous review of community guidelines,”292 specifically invoking the term that many
platforms use for their overarching set of global content moderation rules.293 Here, the European
Commission stated clearly that platforms were expected to review and change their global rules
to censor content in accordance with European regulators’ demands in order to avoid massive
fines under the DSA.
The European Commission’s “best practices” for DSA compliance include “continuous”
changes to global content moderation rules.
During the same event, the European Commission also stated that “illegal content is a
symptom” and that “a broader risk approach is needed.”294 From the European Commission’s
perspective, targeting only illegal content was not enough—platforms should censor broad
swaths of disfavored content, wide enough to sweep in and stop the “symptom”
of illegal content.295 In other words, the European Commission expected platforms to make
global content moderation rule changes censoring content that would be legal in the EU—and,
for that matter, content that also would be protected by the First Amendment in the United
States.
291 See DSA Censorship Report I, supra note 3.
292 European Commission – DSA Systemic Risk Assessment Workshop Readout (May 7, 2025), see Ex. 206.
293 See, e.g., YouTube’s Community Guidelines, YOUTUBE HELP,
https://support.google.com/youtube/answer/9288567?hl=en (last visited Jan. 29, 2026); Community Guidelines,
TIKTOK, https://www.tiktok.com/community-guidelines/en (last visited Jan. 29, 2026).
294 European Commission – DSA Systemic Risk Assessment Workshop Readout (May 7, 2025), see Ex. 206.
295 Id.
H. Platforms changed their global content moderation rules in response to these
European Commission efforts.
In a concerted, decade-long campaign, the European Commission repeatedly pressured
non-EU social media platforms to change their global content moderation rules. The European
Commission’s pressure was successful. Internal documents show that TikTok made specific
changes to its global content moderation rules “to achieve compliance with the Digital Services
Act.”296 A European law compelled TikTok to change its policies and globally censor
“marginalizing speech” and true information that was “presented out of context.”297 The
European Commission’s decade-long censorship campaign worked. Platforms are censoring true
information globally, including in the United States, in order to comply with the DSA.
i. TikTok introduced global censorship requirements for true information in
order to comply with the DSA.
TikTok’s overarching set of global content moderation rules is known as its Community
Guidelines.298 TikTok’s Community Guidelines “apply to [the] global community and
everything shared on TikTok,” no matter the location.299 Internal platform documents show that
sustained Commission pressure led TikTok to change these global content moderation policies.
In March 2023, just a few months before the DSA became enforceable, TikTok “carried out [the]
most comprehensive updates to [its] Community Guidelines . . . to date.”300 Later in 2023,
shortly before the DSA became enforceable, TikTok initiated another “round of . . . updates” to
its Community Guidelines.301 This time, TikTok was explicit: the “primary motivation” for these
changes was “to achieve compliance with the Digital Services Act.”302 These new global
censorship rules adopted to comply with the DSA targeted true information and political speech,
including “coded statements” that “normalize inequitable treatment” and “media presented out of
context.”303 To put it plainly, an EU law caused one of the world’s largest social media platforms
to censor true information in the United States and around the world.
As early as September 2021, TikTok appeared to be starting the process of rewriting its
Community Guidelines to comply with European censorship demands. In a meeting with top
staff to Commissioner Thierry Breton, TikTok noted that “they [were] currently conducting a
major revision of their own code,” appearing to refer to TikTok’s Community Guidelines.304
296 TikTok Community Guidelines Survey, see Ex. 15.
297 TikTok Community Guidelines Update Executive Summary (Mar. 20, 2024), see Ex. 8.
298 Community Guidelines, TIKTOK, https://www.tiktok.com/community-guidelines/en (last accessed Jan. 29, 2026).
299 Community Guidelines, TIKTOK, https://www.tiktok.com/support/faq_detail?id=7543604781873371654 (last
accessed Jan. 29, 2026).
300 Code of Practice on Disinformation – Report of TikTok for the period 1 January 2023 – 30 June 2023, TIKTOK
(July 2023) at 41.
301 TikTok Community Guidelines Survey, see Ex. 15.
302 Id.
303 TikTok Community Guidelines Update Executive Summary (Mar. 20, 2024), see Ex. 8.
304 Readout of meeting between TikTok staff and Cabinet of Commissioner Thierry Breton (Sep. 30, 2021), see Ex.
20.
TikTok appeared to brief the European Commission on forthcoming changes to its Community
Guidelines as early as September 2021.
In 2022, while this re-write was underway, TikTok took preliminary steps to ramp up its
censorship operation. It “invest[ed] in . . . machine learning models” to “detect[] and remov[e]
misinformation” and “built a repository of previously fact-checked claims to help [its]
specialized misinformation moderators.”305 Importantly, TikTok reported this change to the
Commission in a Disinformation Code Transparency Report, indicating that the moves were at
least in part designed to comply with the European Commission’s demands.306
Then, in March 2023—six months after the DSA’s passage and five months before its
enforcement—TikTok overhauled its Community Guidelines, clamping down on alleged
“misinformation” related to “climate change” and “electoral processes.”307 TikTok would now
censor “climate change misinformation that undermines well-established scientific consensus,
such as denying the existence of climate change or the factors that contribute to it”308—despite
the fact that climate change was, and remains, an important topic of political and scientific
debate around the world. Similarly, TikTok’s “civic and election integrity” policy—titled with
language mirroring the DSA—was changed to bar alleged misinformation about “the final . . .
outcome of an election.”309 Of course, discussion about election outcomes—particularly before
an official winner has been declared—is quintessential political speech at the heart of the First
305 Code of Practice on Disinformation – Report of TikTok for the period 16 June – 16 December 2022, TIKTOK
(Jan. 2023) at 79, 119.
306 Id.
307 Integrity and Authenticity, TIKTOK (updated Mar. 2023),
https://web.archive.org/web/20230523022625/https://www.tiktok.com/community-guidelines/en/integrity-authenticity/.
308 Id.
309 Id.
Amendment to the U.S. Constitution.310 Finally, TikTok’s updated Community Guidelines
continued to classify “misgendering” as a form of hate speech to be censored, likely stifling
political debate about issues surrounding transgenderism.311
TikTok deliberately highlighted these changes to the European Commission. In July
2023, TikTok made a lengthy presentation to the Commission about its efforts to comply with
the DSA, called a “readiness overview.”312 Touting its “readiness,” TikTok noted that it had
recently “refreshed” its global Community Guidelines ahead of the DSA’s implementation.313
TikTok also once again reported these changes to the European Commission in its
Disinformation Code Transparency Report.314 Together, these facts indicate at minimum that
TikTok’s new Community Guidelines were drafted with one eye towards the Commission’s
expectations. And given their timing, they likely were an effort to comply with the DSA.
TikTok touted its “refreshed Community Guidelines” as it tried to convince the European
Commission that it was in compliance with the DSA.
In fact, the DSA’s effect on TikTok’s content moderation rules went deeper than the
Community Guidelines. In the same July 2023 meeting, TikTok told the European Commission
310 See Mills v. Alabama, 384 U.S. 214, 218-219 (1966) (“Whatever differences may exist about interpretations of
the First Amendment, there is practically universal agreement that a major purpose of that Amendment was to
protect the free discussion of governmental affairs. This, of course, includes . . . all such matters relating to political
processes.”).
311 Safety and Civility, TIKTOK (updated Mar. 2023),
https://web.archive.org/web/20230422003341/https://www.tiktok.com/community-guidelines/en/safety-civility/.
312 TikTok Slide Deck: Digital Services Act, Readiness overview for the European Commission (July 17, 2023), see
Ex. 3.
313 Id.
314 Code of Practice on Disinformation – Report of TikTok for the period 1 January 2023 – 30 June 2023, TIKTOK
(July 2023).
that “units with day-to-day activities overlapping the DSA, like Trust & Safety . . . [were] given
new policies, rules, & [standard operating procedures]” to comply with the DSA.315 The
Community Guidelines were just the tip of the iceberg. In fact, the DSA’s censorship mandates
required a sweeping overhaul of TikTok’s internal content moderation policies and practices
from top to bottom.
TikTok noted during its DSA “readiness assessment” that it had new, DSA-compliant internal
guidance documents for content moderation teams.
TikTok continued to make additional changes. The same month, in July 2023, TikTok
initiated another round of changes to its Community Guidelines. Internal documents show that
“the primary motivation” for this “round of CG updates [was] to achieve compliance with the
Digital Services Act.”316 The EU’s pressure campaign had worked: TikTok explicitly instituted
new global censorship rules in order to comply with the DSA.
315 TikTok Slide Deck: Digital Services Act, Readiness overview for the European Commission (July 17, 2023), see
Ex. 3.
316 TikTok Community Guidelines Survey, see Ex. 15.
TikTok made changes to its global Community Guidelines in order to comply with the DSA.
TikTok’s new, EU-mandated censorship rules, which took effect in 2024, targeted true
information and conventional political speech. First, TikTok made “marginalizing speech,”
including “coded statements” that “normalize inequitable treatment,” ineligible for the For You
Feed—the main page on TikTok where users discover content.317 This almost certainly captures
political speech, and likely captures humor and satire related to political topics—which the
European Commission regularly targets.318 TikTok also instituted censorship policies for new
categories of so-called “misinformation”—distorting the term to capture true content. In response
to EU pressure, TikTok instituted a policy to censor “misinformation that undermines public
trust,” “media presented out of context” and “misrepresent[ed] authoritative information.”319
There is simply no way to enforce these rules fairly. “Public trust” is an amorphous concept that
is impossible to empirically define.320 Decisions about what “context” must be included
alongside certain information to avoid censorship are entirely subjective. And science itself
consists of debate about the truth and significance of findings within scientific literature.321
In response to Commission pressure, TikTok introduced new policies to censor true information.
317 TikTok Community Guidelines Update Executive Summary (Mar. 20, 2024), see Ex. 8.
318 DSA Censorship Report I, supra note 3, at 28.
319 TikTok Community Guidelines Update Executive Summary (Mar. 20, 2024), see Ex. 8.
320 For example, the political left might argue that certain claims about election integrity “undermine public trust” in
elections. Conversely, the political right might argue that overblown claims about the efficacy of masks and
COVID-19 vaccines “undermine[d] public trust” in public health authorities. These claims are value-laden and
inherently political, meaning they deserve the broadest possible free speech protections.
321 See, e.g., H. Holden Thorp, Public debate is good for science, 371 SCIENCE 213 (2021) (“If we want the public to
understand that science is an honorably self-correcting process, let’s do away once and for all with the idea that
science is a fixed set of facts in a textbook. Instead, let everyone see the noisy, messy deliberations that advance
science and lead to decisions that benefit us all.”).
TikTok knew, after years of engagement with the European Commission, that the
Commission sought censorship of conservative political speech. And after years of censorship
pressure, and working under vague rules, TikTok changed its global content moderation rules to
censor political speech and true information worldwide. It was a major leap forward in the
European campaign for global narrative control. The DSA, within months of its enforcement
date, successfully forced the world’s largest social media platforms to censor conservative
political speech and true information in the United States and around the world.
ii. Bing may have changed its globally applicable terms of service to comply
with the DSA.
Bing, owned by Microsoft, is the world’s second most popular search engine.322 Like
TikTok’s Community Guidelines, Bing’s “terms and conditions” also “apply globally.”323 Bing’s
2025 DSA Risk Assessment, a report required under Article 34 of the DSA and produced to the
Commission, stated that Bing contemplated changes to its “content
moderation systems” and “terms and conditions and their enforcement” as potential DSA
“mitigations.”324 Bing considered changing these items to comply with the DSA’s mandates
regarding “whether, how much, and what types of content are available to users on the
service,”325 undermining the Commission’s claim that the DSA is content-agnostic.326
Bing considered changing its global content moderation rules to censor certain types of content
in order to comply with the DSA.
322 Market share of leading desktop search engines worldwide from January 2015 to March 2025, STATISTA (last
accessed Jan. 29, 2026), https://www.statista.com/statistics/216573/worldwide-market-share-of-search-
engines/?srsltid=AfmBOoqCBE3XL5hEEcXmaOqakFgKEBgj9pp-ZXRlwb3IjOXskyBP0sBm.
323 Bing Systemic Risk Assessment (Aug. 2025), see Ex. 298.
324 Id.
325 Id.
326 See Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Thierry Breton, Comm’r for
Internal Market, European Comm’n (Sept. 10, 2024); Letter from Thierry Breton, Comm’r for Internal Market,
European Comm’n, to Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary (Aug. 21, 2024).
Bing also confirmed to the European Commission that it “continued to scale its
investment in systemic risk detection through enhanced classifier development and risk
mapping.”327 “Classifiers” are certain words or phrases that cause a “defensive search
intervention,” routing users away from certain content.328 In plain English: Bing reported to the
Commission that it expanded the list of words and phrases for which it directs searchers not to the
content they want, but to the content Bing—and the European Commission—think they
need.
iii. YouTube pointed to policies censoring content related to firearms and
election integrity as evidence of its compliance with the Disinformation
Code, and confirmed that the DSA threatens free speech within and
outside of the European Union.
In addition to the wide array of mandatory reports under the DSA, signatories to the
Disinformation Code are required to make additional reports. These are essential to the European
Commission’s censorship goals: by requiring platforms to report their censorship, the
Commission can identify and target platforms that do not censor according to European wishes.
In 2022, YouTube introduced new restrictions on content contesting the outcome of
“certain national elections.”329 In 2024, YouTube introduced a new policy censoring content
related to firearms.330 In both cases, YouTube reported to the European Commission that it made
these policy changes “in line” with its “commitment[s]” under the Disinformation Code,
indicating that European censorship initiatives were a driving factor.331
In September 2025, Alphabet belatedly admitted that the Biden-Harris Administration in
the United States pressured it to censor similar content on YouTube and agreed to re-institute the
accounts of creators who had been banned under its previous censorship policies.332 In the same
statement to the Committee, Alphabet shared its concerns about “the risk that the DSA may pose
to freedom of expression within and outside of the European Union.”333
327 Bing Systemic Risk Assessment (Aug. 2025), see Ex. 298.
328 Id.
329 Code of Practice on Disinformation – Report of Google for the period 1 July 2022 – 30 September 2022,
GOOGLE (Oct. 2022) at 54.
330 Code of Practice on Disinformation – Report of Google for the period 1 January 2024 to 30 June 2024, GOOGLE
(July 2024) at 69.
331 Code of Practice on Disinformation – Report of Google for the period 1 July 2022 – 30 September 2022,
GOOGLE (Oct. 2022) at 53-54; Code of Practice on Disinformation – Report of Google for the period 1 January
2024 to 30 June 2024, GOOGLE (July 2024) at 68-69.
332 Letter from Mr. Daniel Donovan, Counsel for Alphabet, to Rep. Jim Jordan, Chairman, H. Comm. on the
Judiciary (Sep. 23, 2025).
333 Id.
In a September 2025 letter to the Committee, Alphabet confirmed that the DSA may pose a risk
to the freedom of expression “within and outside of the European Union.”
In an aggressive multi-year campaign, the European Commission successfully pressured
the world’s largest social media platforms to change their global content moderation rules,
ushering in a new era of global censorship. The plain text of the Disinformation Code, the Hate
Speech Code, and the DSA—each of which bind major social media platforms—make clear that
platforms should change their global content moderation rules to comply. In meetings with
platforms, the European Commission used the Codes—and, critically, the knowledge of the
coming DSA—to pressure platforms to more aggressively censor content related to the COVID-
19 pandemic and the Russian invasion of Ukraine. This pressure campaign went to the top,
including senior European Commission officials such as European Commission Vice President
Vera Jourova and Commissioner Thierry Breton.
Once the DSA became enforceable, the European Commission’s global censorship
efforts were strengthened significantly. The European Commission explicitly told platforms to
change global content moderation rules to comply with the DSA, and formal requests for
information focused on platform content moderation rules. Ultimately, the pressure campaign
worked: major social media platforms, such as TikTok, changed their global content moderation
rules to censor true information to align with the European Commission’s demands.
IV. THE EUROPEAN COMMISSION’S ATTEMPTS TO CENSOR U.S. SPEECH.
The European Commission’s direct influence on platforms’ global content moderation
rules affected American speech in the United States. But that is not the only way European
regulation harmed American speech. On multiple occasions, the European Commission directly
asked platforms about their plans to censor U.S. content—including content related to the
COVID-19 pandemic and the 2024 U.S. presidential election. The Commission used the DSA to
target U.S. speech, indirectly threatening retaliation if platforms did not censor U.S. content in
accordance with European regulators’ expectations.
A. The European Commission asked platforms to censor U.S. content related to the
COVID-19 pandemic.
Like much of the European Commission’s censorship, the campaign against U.S.-based
content began during the COVID-19 pandemic. In November 2021, DG-Connect—the
department within the European Commission responsible for enforcing the DSA—asked TikTok
how it planned to “fight disinformation about the covid 19 vaccination campaign for children
starting in the US.”334 The European Commission specifically asked TikTok about its plans to
“remove” certain “claims” about the efficacy of the COVID-19 vaccine in children, approvingly
referencing Meta’s then-recent decision to censor debate about this topic on its platforms.335 The
implication was clear: the Commission expected TikTok to censor U.S. debate about the efficacy
and prudence of giving COVID-19 vaccines to young children.
European regulators urged TikTok to censor U.S. claims about COVID-19 vaccines for children.
334 Emails between TikTok staff and European Commission staff (Nov. 5, 2021), see Ex. 58.
335 Id.
At the time of the European Commission’s outreach, this critical scientific and policy
debate was ongoing—TikTok noted in response that “the EU seems to still be developing its
specific policy around vaccines/children.”336 Nevertheless, TikTok responded that debate about
COVID-19 vaccines for children was already censored under its existing global content
moderation rules.337 The European Commission responded favorably, calling TikTok’s
censorship “fantastic.”338
One year later, in November 2022—just one month after the DSA’s passage—European
Commission regulators again pressured platforms to remove U.S.-based content about COVID-
19 vaccines. This time, fact-checkers empowered under the DSA asked platforms to remove an
American documentary film about vaccines.339 Platforms ignored the request at first, and then
the European Commission intervened.340 It called censorship of the film “vital,” directing
platforms to “check the matter[] . . . internally” and provide a response “in writing,” noting
specifically that the film remained available on YouTube, Twitter, and TikTok.341 YouTube
responded, stating that it had “removed” the content “for violations of our policies”342—the very
same policies that the European Commission had been pressing platforms to change. This
sequence of events is deeply troubling. In response to pressure from European regulators,
YouTube—an American company—removed an American documentary from the platform
worldwide. It is an example of how European censorship pressure can result in the global
removal of U.S.-based speech.
B. The European Commission interfered in the U.S. political process by engaging with
platforms about content related to the 2024 U.S. presidential election.
More recently, the European Commission attempted to interfere with the 2024 U.S.
presidential election, pressing platforms on their approach to moderating U.S.-based content
ahead of Election Day in the United States. European Commission Vice President Vera Jourova
implied that platforms should censor U.S.-based speech to comply with the DSA in a May 2024
meeting with TikTok, while Commissioner Thierry Breton said the quiet part out loud in an
August 2024 letter to Elon Musk, the owner of X.
In May 2024, European Commission Vice President for Values and Transparency Vera
Jourova went on a tour of the American west coast, meeting with American social media
platforms in California.343 Ahead of a meeting with TikTok CEO Shou Chew and Head of Trust
and Safety Adam Presser, Jourova’s staff told TikTok that she was “interested” in discussing
“US election preparations” with them.344 Needless to say, social media companies’ U.S. content
336 Id.
337 Id.
338 Id.
339 Emails between European Commission staff and Code of Practice on Disinformation Signatories (Dec. 8, 2022),
see Ex. 96.
340 Id.
341 Id.
342 Id.
343 Vera Jourova (@VeraJourova), X (May 29, 2024, 3:30 AM),
https://x.com/VeraJourova/status/1795719387678380478.
344 Emails between TikTok staff and European Commission staff (May 28, 2024), see Ex. 27.
moderation ahead of a U.S. election is none of the European Commission’s business. And the
European Commission’s track record suggests what their goal was: they wanted additional
censorship of U.S.-based content favorable to President Trump.345
European Commission Vice President Vera Jourova asked to discuss “US election
preparations” with TikTok ahead of the 2024 U.S. presidential election.
A few months later, in August 2024, Commissioner for Internal Market Thierry Breton
again meddled in U.S. politics—this time, much more publicly. In a letter to owner Elon Musk,
Breton threatened X with regulatory retaliation under the DSA for hosting a live interview with
345 See Letter from Mr. Thierry Breton, Comm’r for Internal Market, European Comm’n, to Mr. Elon Musk, Owner,
X Corp. (Aug. 12, 2024).
President Trump in the United States.346 Just a few hours before the interview, Breton warned X
that “spillovers” of U.S. speech into the EU could spur the Commission to adopt retaliatory
“measures” against X under the DSA.347 Breton warned that he would be “extremely vigilant to
any evidence” that President Trump’s interview spilled over into the EU and informed Musk that
the Commission “[would] not hesitate to make full use of [its] toolbox” to silence this core
American political speech.348
The global campaign to censor speech in an extraterritorial manner has escalated since
then. For example, in September 2025, Graham Linehan, an award-winning comedy writer,
advocate for protection of women-only spaces, and Irish citizen, was arrested by armed police
upon his arrival at London’s Heathrow Airport from the United States for three tweets he posted
several months prior.349 Linehan’s possessions were “confiscated” and he was transported to a
prison cell before being released on a single bail condition: he could no longer post on X.350 It
was a clear attempt by British authorities to silence a political opponent who resides in the
United States.
In October 2025, British authorities declined to press charges against Linehan for the
posts.351 The London Metropolitan Police Chief stated that Britain’s aggressive online
censorship and hate speech laws “left officers ‘in an impossible position . . . policing toxic
culture war debates,’” all but admitting that the arrest was political in nature.352
For several years, the European Commission has caused censorship of U.S. content by
pressuring social media platforms to change their global content moderation policies. But it did
not stop there: the evidence shows that on multiple occasions, European Commission leadership
has directly pressured platforms to censor specific American content on topics including
COVID-19 and the 2024 U.S. presidential election.
This is an unprecedented incursion on American speech and political discourse by a
foreign actor, and it is entirely unacceptable. The facts point to one conclusion: the European
Commission is trying to make an end-run around the First Amendment and censor U.S. speech
that does not align with its preferred narratives.
346 Id.
347 Id.
348 Id.
349 Graham Linehan, I just got arrested again, THE GLINNER UPDATE (Sep. 2, 2025); Helen Bushby, Graham Linehan arrested at Heathrow over his X posts, BBC (Sep. 2, 2025).
350 Graham Linehan, I just got arrested again, THE GLINNER UPDATE (Sep. 2, 2025).
351 Brian Melley, TV writer Graham Linehan won’t face charges for transgender post that sparked UK debate, AP
(Oct. 20, 2025).
352 Id.
V. THE EUROPEAN COMMISSION WEAPONIZES ITS CENSORSHIP TOOLS TO SILENCE CONSERVATIVE AND "ANTI-ESTABLISHMENT" POLITICAL SPEECH.
The DSA requires platforms to assess and mitigate the risk to “civic discourse and
electoral processes” posed by certain content.353 After all, it is during periods of intense public
debate and spirited political discourse that narrative control is most important to the ruling class.
Documents produced to the Committee confirm this to be true. The EU Internet Forum—a
separate regulatory initiative led by the Directorate-General for Migration and Home Affairs
(DG-Home) since 2015—urged platforms to silence conventional conservative discourse by
equating it with Nazi propaganda. Since the DSA came into effect, the pressure has been more
direct. Minutes from meetings between the European Commission and platforms make clear that
platforms must follow the Commission’s Election Guidelines, which include a requirement that
platforms “establish[] measures to reduce the prominence of disinformation” before elections,354
to comply with the DSA. Documents also confirm that the European Commission and EU
Member State national regulators engage with platforms before nearly every major European
election to solicit information on how each platform is censoring so-called misinformation—at
least nine times since 2023. Internal platform documents indicate that EU pressure may have led
platforms to censor content communicating conservative views on transgenderism and mass
migration—all while these topics were at the heart of national electoral debates.
A. The EU Internet Forum encourages platforms to censor legal and non-violative
political speech.
The EU Internet Forum (EUIF) was created by DG-Home in 2015 to “address[] the
misuse of the internet for terrorist purposes.”355 Since 2015, however, the EUIF has morphed
from a targeted initiative to stop online terrorist recruitment to a broad effort encouraging
platforms to censor legal and non-violative political speech. EUIF now advises platforms on how
to best censor “borderline” content, which the EU defines as “content that is not illegal but may
be harmful and exploited by extremist actors.”356 In practice, this focus on “borderline content”
is an obsession with alleged “violent right-wing extremis[m],” defined to include broad swaths of
conventional conservative opinion.357 The EUIF took these actions despite its own study finding
that “Violent Left-Wing TVE [terrorist and violent extremist] content had a significantly higher
Findability score” than other types of TVE content, including right-wing TVE content.358
353 Digital Services Act, supra note 26, Art. 34.
354 DSA Election Guidelines, supra note 45, Art. 3.2.1.d.ii.
355 European Union Internet Forum, EUROPEAN COMM'N (July 25, 2025), https://home-affairs.ec.europa.eu/networks/european-union-internet-forum_en.
356 Radicalization Awareness Network, Malign Use of Algorithmic Amplification of Terrorist and Violent Extremist
Content: Risks and Countermeasures in Place (2021) (emphasis added), see Ex. 286.
357 See EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic Amplification to Spread
Terrorist, Violent Extremist and Borderline Content, Ex. 37, 40.
358 EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic Amplification to Spread Terrorist,
Violent Extremist and Borderline Content, see Ex. 37; see also TrustLab Slide Deck: Study on the Role and Effects
of the Use of Algorithmic Amplification to spread Terrorist, Violent Extremist and Borderline Content across
leading Social Media sites in Europe, Ex. 39.
i. The EUIF classifies political discourse on topics like immigration and
gender as “violent right-wing extremism.”
The EUIF systematically misclassifies legitimate political discourse as “violent right-
wing extremism” to pressure platforms into removing this content. At the heart of this operation
is the EUIF “Handbook of Borderline Content”—a 72-page guide for how social media
platforms should censor non-violative political speech.359 Categories of potential borderline
content to be censored under the handbook include:
- "Populist rhetoric";360
- "Anti-government/anti-EU" content;361
- "Anti-elite" content;362
- "Political satire";363
- "Anti-migrants and Islamophobic content";364
- "Anti-refugee/immigrant sentiment";365
- "Anti-LGBTIQ . . . content";366 and
- "Meme subculture."367
These issues represent the dominant topics of European—indeed, global—political life today.
Yet the Commission's handbook described banning "dehumanizing" speech related to these topics, and content that otherwise advances "harmful stereotypes," as a "best practice."368 Moreover,
the Commission’s handbook contained platform “policy recommendations” from the biased, left-
wing pseudoscience group Center for Countering Digital Hate (CCDH).369 While some of the
359 EU Internet Forum: The Handbook of Borderline Content in Relation to Violent Extremism, see Ex. 38.
360 Id.
361 Id.
362 Id.
363 Id.
364 Id.
365 Id.
366 Id.
367 Id.
368 Id.
369 Id.; see Rep. Jim Jordan (@Jim_Jordan), X (Sep. 5, 2023, 6:17 PM),
https://x.com/Jim_Jordan/status/1699184930331267539 (detailing CCDH’s partnership with the Biden White House
language cited in the handbook as “borderline” content to be censored is objectionable, that is
beside the point: it is never acceptable for government to put its thumb on the scale to censor
disfavored categories of lawful speech.
The EUIF regularly targeted legitimate political expression. In a 2023 report, EUIF
detailed what it considered “violent right-wing extremism” necessitating online censorship. This
included the notion that “refugees do not belong”370—despite repeated incidents of violence by
so-called refugees who arrived in Europe under permissive mass migration policies.371 Yet, the
EU called this statement a “white supremacy idea.”372 Similarly, the concept that European
nations should change their asylum laws to stop the flow of unvetted migrants was described as
“borderline anti-immigrant and anti-refugee rhetoric.”373 Opposition to mass migration is a
legitimate opinion about an important public policy topic. Censorship of this speech is
incompatible with any robust conception of free expression.
The EUIF used this post as an example of “borderline” content that platforms should censor.
to censor disfavored content); Paul D. Thacker and Matt Taibbi, Election Exclusive: British Advisors to Kamala Harris Hope to "Kill Musk's Twitter", THE DISINFORMATION CHRONICLE (Oct. 22, 2024); Transcribed Interview of Imran Ahmed, CEO, Center for Countering Digital Hate, H. Comm. on the Judiciary (June 25, 2024).
370 EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic Amplification to Spread Terrorist,
Violent Extremist and Borderline Content, see Ex. 37, 40.
371 See, e.g., Newcastle grooming gang jailed for raping 13-year-old girl, BBC (Mar. 1, 2024).
372 EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic Amplification to Spread Terrorist,
Violent Extremist and Borderline Content, see Ex. 37, 40.
373 EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic Amplification to Spread Terrorist,
Violent Extremist and Borderline Content, see Ex. 40.
The EUIF even encouraged censorship of political speech across borders—including that
of U.S. conservatives. In a March 2022 report on “Violent Extremism and Terrorism Online in
2021,” the EUIF listed U.S. Representative Paul Gosar (R-AZ), former U.S. Representative
Marjorie Taylor Greene (R-GA), and former White House official Steve Bannon as “far-right”
figures, placing them alongside white supremacists and implying that they should be censored.374
Indeed, this EU “terrorism” report lumped together Islamic jihadists, white supremacists, and
U.S. Republicans as the world’s greatest internet threats.375
The report also denigrated new social media platforms that emerged in the wake of Big
Tech’s decision to systematically censor Republicans in 2020 and 2021.376 The EUIF discussed
the U.S. social media platform Truth Social in this “terrorism” report because Truth Social
“encourages open, free, and honest global conversation”—something the EU says will “attract[]
a variety of types of right-wing extremist users” and make Truth Social a “right-wing oriented
social media platform.”377 This report—and specifically its treatment of Truth Social—offers
critical insight about how the European Commission views the “risks” that must be
"mitigat[ed]" under the DSA.378 In the European Commission's view, a platform that embraces
free speech is ipso facto a hotbed for online terrorism and in violation of the DSA.
The European Commission denigrated Truth Social, writing that it was likely to be a hotbed for
extremism and terrorism because it “encourages open, free, and honest global conversation.”
Later in 2022, EUIF held a “Workshop on Algorithmic Amplification and Borderline
Content,” discussing how platforms should censor legal borderline content. The “main
takeaways” from this meeting address the European Commission’s definition of and censorship
expectations relating to borderline content.379 Borderline content, according to EUIF, consists of
“a combination of disinformation/conspiracy theories and hate speech,” specifically mentioning
“anti-establishment/anti-institutions,” “anti-trans,” “anti-migrants,” and “anti-COVID measures”
speech as major categories.380 EUIF stressed that “positive interventions”—meaning active
374 RAN Policy Support, Violent Extremism and Terrorism Online in 2021: The Year in Review (2021), see Ex. 287.
375 Id.
376 See STAFF OF THE SELECT SUBCOMM. ON THE WEAPONIZATION OF THE FED. GOV'T OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., THE WEAPONIZATION OF THE FEDERAL GOVERNMENT (Comm. Print Dec. 20, 2024).
377 RAN Policy Support, Violent Extremism and Terrorism Online in 2021: The Year in Review (2021), see Ex. 287.
378 Digital Services Act, supra note 26, Art. 34-35.
379 Email from European Commission staff to EU Internet Forum participants (Oct. 11, 2022), see Ex. 32.
380 Id.
censorship measures—“need to be adopted in order to prevent the spread of” this lawful, non-
violative political content, specifically calling for “downranking” and “demoneti[zation]” of it.381
During the meeting, Google and YouTube shared that they were “working on definitions
and thresholds regarding the use of borderline content to ensure de-ranking by its internal
algorithmic systems,” in line with the Commission’s demands.382 Just one month before the
DSA’s passage, platforms were attuned to the European Commission’s messaging on what steps
they needed to take under the DSA. This workshop sent a clear message.
The European Commission directed platforms to censor legal, non-violative content during a
2022 workshop.
This September 2022 EUIF workshop also featured presentations by misinformation
pseudoscientists and national regulators. A presentation by the Institute for Strategic Dialogue—
a think tank funded during the Biden-Harris Administration by the U.S. Department of
Homeland Security and the State Department383—warned that platforms may need to censor
additional mainstream conservative content, stating that “engaging with content featuring Jordan
Peterson and Ben Shapiro served as a gateway into recommendations for a slew of anti-feminist,
381 Id. (emphasis omitted).
382 Id.
383 Partnerships and Funders, INSTITUTE FOR STRATEGIC DIALOGUE, https://www.isdglobal.org/partnerships-and-funders/ (last accessed Jan. 29, 2026).
misogynistic, and Manosphere content.”384 It is a characteristic ploy for misinformation
pseudoscientists: by lumping conventional conservative opinion with allegedly hateful content,
they coerce platforms to censor the whole lot. To summarize: the Biden-Harris Administration
funded an NGO that was then enlisted by the European Commission to coerce U.S. social media
platforms to censor two of the most popular conservative media figures in the United States,
including an American author.
The Institute for Strategic Dialogue argued that platforms should be wary of users who engage
with Jordan Peterson and Ben Shapiro.
ii. EUIF pressures platforms to change their content moderation rules to
censor this content.
Similar to DG-Connect, DG-Home’s EU Internet Forum targets legal, non-violative
“borderline” content, primarily on political topics including migration and transgenderism. In
addition to targeted censorship pressure, the EUIF—like every EU censorship initiative—
encourages platforms to change their globally applicable content moderation rules. As early as
2021, the Commission invited Meta to present at an EUIF meeting on its "initiatives to address
borderline content,”385 indicating to other platforms present that they should take similar steps.
These initiatives likely involved “downranking” and “demoneti[zation]” of borderline content,
which the Commission explicitly told platforms were necessary.386
A year later, in December 2022, another EUIF meeting included a structured discussion
of how platforms were “addressing borderline content.”387 Characteristically, the meeting
readout indicates that only “right wing extremist” content was discussed388—even though
EUIF’s own data found that left-wing extremism was more prevalent on social media.389 The
meeting was part of a larger event at which Home Commissioner Ylva Johansson “expressed
concerns for the increased presence of borderline content online and the challenges it brings
384 Institute for Strategic Dialogue Slide Deck: Algorithmic Amplification, Borderline Content and Manipulative
Techniques (Sep. 29, 2022), see Ex. 31.
385 Agenda for EU Internet Forum Ministerial Meeting (Dec. 8, 2021), see Ex. 28.
386 Email from European Commission staff to EU Internet Forum participants (Oct. 11, 2022), see Ex. 32.
387 Agenda for EU Internet Forum Ministerial Meeting (Dec. 7, 2022), see Ex. 33.
388 Email from European Commission staff to EU Internet Forum participants (Jan. 9, 2023), see Ex. 34.
389 EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic Amplification to Spread Terrorist,
Violent Extremist and Borderline Content, see Ex. 37.
about when it comes to content moderation.”390 She specifically invoked the DSA, passed less
than two months before the meeting, and warned that “voluntary cooperation” with the
Commission “is crucial and must continue.”391 The subtext of Johansson’s speech was clear: to
comply with the DSA, platforms needed to do more to silence “borderline” content, which is by
definition legal and non-violative of existing content moderation rules. There is only one way to
do so: change content moderation rules to censor more legal speech. Once again, the campaign to
make platforms change their terms of service reached the highest levels of the European
Commission.
Home Commissioner Ylva Johansson pressured platforms to censor more “borderline” content.
These efforts continued after the DSA came into force. In September 2024, DG-Home
asked TikTok to join an EUIF initiative to develop “a set of principles for companies to address
algorithmic amplification, including of borderline content”—in other words, a set of censorship
requirements enforceable on platforms under the DSA.392 In the following weeks, EUIF’s efforts
expanded.
For example, in October 2024, EUIF asked platforms for information about their steps to
prevent the spread of “borderline” content and how they were complying with Hate Speech and
Disinformation Codes.393 The European Commission told platforms that their responses would
inform a censorship blueprint being assembled by the Institute for Strategic Dialogue,394 which
the European Commission could then adopt as its own. Unsurprisingly, the European
Commission’s questions focused heavily on platforms’ implementation of their content
moderation policies.395 Once again, the European Commission was monitoring platforms’
content moderation rules, urging changes to censor more legal content—all while the regulatory
sword of the DSA hung over their head.
390 Email from European Commission staff to EU Internet Forum participants (Jan. 9, 2023), see Ex. 34; see Agenda
for EU Internet Forum Ministerial Meeting (Dec. 7, 2022), Ex. 33.
391 Id.
392 Emails between TikTok staff and European Commission staff (Sep. 2, 2024), see Ex. 35.
393 Emails between TikTok staff and European Commission staff (Oct. 15, 2024), see Ex. 36.
394 Id.
395 TikTok response to EU Internet Forum Questionnaire – Algorithmic Amplification, see Ex. 41.
EUIF asked platforms how they had changed their content moderation rules to censor additional
borderline content and encouraged them to censor more.
Since 2021, the EUIF has emerged as another key institution in the European censorship
architecture. The EUIF specifically targets legal, non-violative political speech, lumping in
American and European conservatives with jihadists and white supremacists and telling
platforms to censor them all. It also encourages platforms to change their global content
moderation rules to silence content disfavored by European Commission bureaucrats.
B. The European Commission and EU Member State regulators pressure platforms to
censor conservative and “anti-establishment” political speech during election
periods.
In addition to the EUIF’s programmatic work to censor conventional political speech, the
European Commission takes specific steps to encourage political censorship ahead of major
elections—precisely when free speech is most important. The European Commission’s DSA
Election Guidelines outline mandatory censorship steps for platforms during election periods,
including taking “measures to reduce the prominence of disinformation.”396 And the European
Commission closely monitors platforms’ performance. Ahead of nearly every major European
election since 2023—at least twelve in total—the Commission has met with platforms to
communicate its censorship expectations and evaluate platforms’ compliance. Since the moment
396 DSA Election Guidelines, supra note 45.
the DSA came into force, the Commission has worked with EU Member State regulators to
censor conservative and “anti-establishment” political speech during election periods. These
requirements appear to have led platforms to adopt more restrictive censorship rules for their
global content moderation policies, potentially affecting elections outside the EU, including the
2024 U.S. presidential election.
i. The European Commission’s DSA Election Guidelines require political
censorship ahead of European elections, with global effects.
At the core of the Commission’s election interference are its DSA Election Guidelines.
The Guidelines state that before elections, platforms should:
- "Updat[e] and refin[e] policies, practices, and algorithms" to comply with EU censorship demands;
- Comply with "best practices" outlined in the Disinformation Code, the Hate Speech Code, and EUIF documents;
- "Establish[] measures to reduce the prominence of disinformation";
- "Adapt their terms and conditions . . . to significantly decrease the reach and impact of generative AI content that depicts disinformation or misinformation";
- "Label" posts deemed to be "disinformation" by government-approved, left-wing fact-checkers;
- "Establish[] measures to limit the amplification of deceptive . . . content generated by AI";
- "Develop[] and apply[] inoculation measures that pre-emptively build resilience against possible and expected disinformation narratives";397 and
- Take additional steps to stop "gendered disinformation."398
Perhaps the greatest risk associated with government regulation of speech is the specter
of government using that power to silence opponents and maintain political control. Every single
requirement listed above allows the European Commission to do just that. It is impossible to
define terms like “disinformation” in a politically neutral way—and as the evidence shows, the
European Commission does not. In effect, the DSA Election Guidelines require platforms to
397 U.S. agencies used this tactic before the 2020 presidential election to cast a true story about Biden family
influence peddling as Russian disinformation. As a result, Big Tech censored the story in the weeks preceding the
election. See STAFF OF THE H. COMM. ON THE JUDICIARY AND THE SELECT SUBCOMM. ON THE WEAPONIZATION OF THE FED. GOV'T OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., ELECTION INTERFERENCE: HOW THE FBI "PREBUNKED" A TRUE STORY ABOUT THE BIDEN FAMILY'S CORRUPTION IN ADVANCE OF THE 2020 PRESIDENTIAL ELECTION (Comm. Print Oct. 30, 2024).
398 DSA Election Guidelines, supra note 45.
censor content disfavored by Europe’s ruling class every time voters have the opportunity to
make their voices heard. Worse, these Orwellian and undemocratic censorship edicts likely have
global effects.
On their own terms, the Election Guidelines are not mandatory. But even before they
were released, senior European Commission officials told platforms that they could face sanction
under the DSA for failing to comply with them. In a meeting with platforms in March 2024,
weeks before the Guidelines were officially unveiled, Prabhat Agarwal, the head of the
Commission’s DSA enforcement unit, told platforms that the Guidelines “are the EC’s opinion
on what are good mitigation measures to comply with the DSA.”399 He described the Guidelines
as a floor for DSA compliance, telling platforms that if they deviated from the Guidelines, they
would need to “have alternative measures that are equal or better”400—meaning censorship
measures that were at least as restrictive as the ones outlined in the Guidelines. This explicit
warning only added to the structural pressure platforms already faced: the DSA’s vague language
and massive potential penalties make any potential safe harbor a practical necessity.
The European Commission’s top DSA enforcer told platforms that compliance with the Election
Guidelines was effectively mandatory.
European Commission efforts to censor speech during election periods, including the
Election Guidelines, may have had global effects. In the same March 2024 meeting, Meta told
the European Commission that it has learned best practices from “more than 200 elections
around the world since 2016,” and that “for every election,” Meta evaluates its “standard
mechanisms,” which “include policies, tools, and processes that [it] document[s] in [its] EU
Code of Practice on Disinformation reports”—in other words, the censorship measures Meta
takes to comply with EU demands.401 Meta later noted to the European Commission that it
changed “processes” in response to the Commission’s Election Guidelines, “making additional
adjustments and improvements to [them] in light of the Guidelines and direct discussions
undertaken at EC hosted roundtables.”402 These changes included “mitigation measures to
399 Internal Meta readout of Roundtable on DSA Elections Guidelines (Mar. 1, 2024), see Ex. 243; Internal emails
among Meta staff (Feb. 26, 2024), see Ex. 241.
400 Internal Meta readout of Roundtable on DSA Elections Guidelines (Mar. 1, 2024), see Ex. 243.
401 Id.
402 Email from Meta staff to European Commission staff (July 10, 2024), see Ex. 166.
tackle . . . risk areas” including misinformation, in addition to existing measures targeting
“hostile speech.”403 Given the global nature of Meta’s Community Standards, and its self-
described “standard mechanisms” for election, the conclusion is troubling: Meta and other large
social media companies with scope of the DSA may have had to enact changes that affect non-
EU elections, including U.S. elections, because of the EU’s political censorship requirements.
Communications between Meta and the European Commission show that Meta may have made
global changes to its election content moderation policies to comply with EU censorship
demands.
At this March 2024 meeting, Agarwal, the head of the European Commission’s DSA
enforcement unit, also emphasized the DSA’s requirement that platforms work with
misinformation pseudoscientists to create effective censorship regimes. Meta’s internal readout
of the meeting noted the European Commission’s “strong views about the DSA legal
requirement to involve external stakeholders in the evaluation/development of [platform]
403 Id.
measures.”404 The Committee’s oversight has previously shown that these so-called researchers
are uniformly left-wing and pro-censorship.405
The European Commission emphasized that platforms must partner with misinformation
pseudoscientists to censor more content.
Finally, Agarwal engaged in a bit of AI alarmism. He warned that platforms must be
ready to censor “a non-watermarked AI-generated image of a president or political leader [that]
circulates during an election.”406 This has echoes of the European Commission’s previous effort
to target “memes” and satire,407 and evinces a key point about the European Commission
regulators who seek to control the global internet: they think their citizens are stupid, unable to
decide for themselves what to believe and what not to.
ii. TikTok censored political speech ahead of the 2023 Slovak election—the
first European national election conducted after the DSA became
enforceable.
A useful case study of the European Commission’s efforts to influence online discourse
ahead of elections is the 2023 parliamentary election in Slovakia, the first European election to
take place after the DSA came into force. Communications between TikTok and the European
Commission, as well as internal TikTok documents, show how the European Commission
pressured the platform to censor legitimate political discourse ahead of the election.
Platform preparation for the September 30, 2023, election was well underway by July
2023, when the European Commission organized a presentation to platforms by the Slovak
digital regulator, the Slovak Council for Media Services.408 Setting the tone for the coming
censorship campaign, the Slovak regulator denigrated its own people, telling platforms that the
“population tends to incline to conspiracy theories and false narratives.”409 The regulator’s
presentation also referenced “requests connected to upcoming elections” it made to Meta,
Google, and TikTok in June 2023.410
404 Internal Meta readout of Roundtable on DSA Elections Guidelines (Mar. 1, 2024), see Ex. 243.
405 See DSA Censorship Report I, supra note 3, at 28-29.
406 Internal Meta readout of Roundtable on DSA Elections Guidelines (Mar. 1, 2024), see Ex. 243.
407 See DSA Censorship Report I, supra note 3, at 28.
408 Slovak Council for Media Services Slide Deck: Snap Elections in Slovakia (July 2023), see Ex. 214.
409 Id.
410 Id.
The Slovak Council for Media Services denigrated its own people in a presentation to platforms
ahead of the 2023 election.
The European Commission’s censorship campaign picked up in September 2023 as the
election approached. On September 5, the Commission met with a group of platforms, asking
them “what actions they [were] planning and what measures they [were] putting in place” ahead
of the election.411 The clear implication: platforms had to put censorship “measures” in place
ahead of the election to comply with the DSA, which had become enforceable just one week
before this meeting. The next week, on September 12, the Commission met with TikTok
specifically, asking for “technically detailed explanations on your risk assessment, mitigation
measures, and enforcement capabilities for [its] terms and conditions regarding this election.”412
If TikTok failed to moderate content consistent with the Commission’s expectations, the
Commission would know—and, with the DSA in effect, it would have the power to exact a
massive fine in response.
411 Emails from European Commission staff to members of the Working Group on Elections (Sep. 1, 2023), see Ex.
220.
412 Letter from the European Commission to TikTok (Sep. 5, 2023), see Ex. 221; see also Letter from TikTok to the
European Commission (Sep. 2023), Ex. 218.
The European Commission directed TikTok to present information about its pre-election
censorship measures in a September 12 meeting.
This pressure campaign worked. TikTok’s internal “Election Moderation Guidelines” for
the 2023 Slovak election indicate that TikTok censored political speech about gender issues.413
Examples of “hate speech” that TikTok censored ahead of the election include:
- “There are only two genders”;
- “Children cannot be trans”;
- “We need to stop the sexualization of young people/children”;
- “I think that LGBTI ideology, gender ideology, transgender ideology are a big threat to Slovakia, just like corruption”; and
- “Targeted misgendering.”414
TikTok’s post-election “Summary Report” similarly classified as “degrading speech”
comments on “the absurdity of the existence of more than two genders and the forced inclusive
language,” like the term “pregnant people.”415 Also “degrading” were posts about “the need to
stop gender ideology from schools.”416 The content moderation guide noted that claims like these
“are common in the Slovak political discussions”—unsurprising, as transgenderism and related issues have been major topics of Slovak political debate over the last five years.417
Yet, as part of its efforts to comply with the DSA, TikTok censored anodyne claims questioning
left-wing transgender ideology ahead of a major national election.
TikTok classified comments noting “the absurdity” of terms like “pregnant people” as
“degrading speech.”
In the week leading up to the Slovak election, the European Commission shifted its focus
from TikTok’s comprehensive censorship measures to specific accounts. Four days ahead of the
election, the Commission sent TikTok a spreadsheet with lists of “problematic accounts on
Slovak TikTok,” implying that they should be censored.418 The spreadsheet contained at least 63
413 TikTok internal Content Moderation Guidelines for 2023 Slovak Election (Sep. 22, 2023), see Ex. 224.
414 Id.
415 TikTok internal Slovak Election Summary Report 2023, see Ex. 211.
416 Id.
417 TikTok internal Content Moderation Guidelines for 2023 Slovak Election (Sep. 22, 2023), see Ex. 224.
418 List of problematic accounts on Slovak TikTok, see Ex. 198; see Email from TikTok staff to European
Commission staff (Sep. 26, 2023), Ex. 136; Slovak Interior Ministry, Analysis of harmful content on Slovak
language TikTok (Sep. 2023), Ex. 130.
accounts, with follower counts ranging from 1,000 to 120,000.419 It noted that many of the
accounts contained “Slovak political content” and in some cases explicitly noted that the flagged
content complied with TikTok’s global Community Guidelines.420 The European Commission
also listed the “top 5 problematic accounts” and ten “examples of problematic content/posts.”421
While some of the content flagged was genuinely objectionable, the European Commission
requested censorship of the following accounts on the basis of their political speech:
- An account with content that “intensifies the distrust in institutions”;
- An “[a]ccount focusing on Slovak domestic politics . . . despite most of the content being aggressive, it most probably complies with the Community standards”;
- A “well-known and popular Slovak account” where “most of the content is non-problematic and focused on entertainment,” but “has been sharing disinformation about immigrants, Covid-19, and supported Vladimir Putin.” The spreadsheet notes that the account operator, Adrian Figo, “lately announced plans for his own political party”;
- An account that “shares conspiracies and political videos, but also humour, which is often political and subconciously [sic] delivers a message”;
- An account that “tackles Slovak politics, often in a misleading way”;
- An account where “videos try to discredit representatives of previous government, which would not be a problem per se, but often works with conspiration [sic] narratives, such as invasion of immigrants, or tries to discredit institutions”;
- An account that “supports Communist party of Slovakia”; and
- An account with “a post, which links vaccination to deaths of celebrities. It does not directly claim that vaccination caused death, however, it posts pictures of dead celebrities, then shows the word ‘vaccinated,’ combined with display of official government campaign to increase vaccination rates.”422
TikTok’s post-election summary report noted that it banned 19 of these accounts in direct
response to the Commission’s request—five of them for “spreading hate.”423 Sixteen accounts
flagged by the European Commission had “none [sic] or very low violations,” including
“satirical accounts focused on politics.”424 Other accounts were placed on TikTok’s “watchlist,”
419 List of problematic accounts on Slovak TikTok, see Ex. 198.
420 Id.
421 Id.
422 Id.
423 TikTok internal Slovak Election Summary Report 2023, see Ex. 211.
424 Id.
while some accounts had already been censored before the Commission’s intervention.425 From
these facts, it is clear that political content on TikTok was censored ahead of the Slovak election
at the European Commission’s request.
Altogether, the scope of DSA-mandated censorship ahead of the 2023 Slovak election is
deeply concerning. Mindful of the DSA’s requirements that platforms “mitigate” so-called “hate
speech,”426 TikTok systematically censored conservative views on gender issues. Then, in the
days before the election, the European Commission requested additional censorship of popular
accounts expressing views on migration, the COVID-19 pandemic, and other major topics in
Slovak domestic politics—accounts that the European Commission conceded were engaged in
political debate ahead of the election. The European Commission’s campaign was nothing less
than an effort at narrative control. And it would soon be replicated across Europe.
iii. TikTok censored similar political speech ahead of the 2023 Polish
parliamentary election.
Evidence shows that TikTok regularly instituted aggressive censorship policies ahead of other elections, just as it did in Slovakia. TikTok’s internal content moderation guidance for the
2023 Polish parliamentary election was quite similar to the Slovak one.427 In some cases, it even
went further—the Polish censorship guide included the claim that “the government is trying to
demobilize voters by using covid lockdowns” as a “conspiracy theory” to be “control[led].”428
TikTok’s content moderation guide for the Polish elections called for censorship of political
speech related to the COVID-19 pandemic.
425 Id.
426 Digital Services Act, supra note 26, Art. 35.
427 TikTok internal Content Moderation Guidelines for 2023 Polish Election (Sep. 10, 2023), see Ex. 222.
428 Id.
iv. European Commission regulators have pressured platforms to censor
content ahead of major European elections since the DSA’s enactment.
The Slovak elections were just the opening act. Between 2023 and 2025, the Commission
engaged with platforms and pressured them to aggressively censor content ahead of national
elections in Moldova, the Netherlands, France, Ireland, and Romania. Unsurprisingly, the
Commission was even more aggressive ahead of the June 2024 European Union parliamentary
elections—where a shift in power could have swept out the architects of the EU’s global
censorship campaign.
- The Netherlands (2023 and 2025)
The Netherlands had parliamentary elections in 2023 and 2025. Both times, the European
Commission encouraged platforms to censor additional content. Ahead of the 2023 Dutch
election, the European Commission hosted a meeting with TikTok “to discuss [its] risk
assessment and mitigation measures for the Dutch national elections,” including “measures to
mitigate the risk of mis/disinformation for these particular elections.”429 This meeting offered the
European Commission a forum to put censorship pressure on the platform. Before this election,
the European Commission also made the Dutch Interior Ministry a “trusted flagger” entitled to
make priority censorship requests under the DSA.430 This is an obvious conflict of interest:
ahead of an election where Dutch voters could have voted the Interior Ministry’s leadership out
of power, the European Commission specifically empowered the Interior Ministry to make
special censorship demands.
The European Commission summoned TikTok to a meeting about its censorship measures ahead
of the 2023 Dutch election.
The Dutch competition regulator, the Authority for Consumers and Markets (ACM), held
a similar meeting ahead of the 2025 Dutch parliamentary election. Six weeks before Election
Day, ACM held a “roundtable on elections in the context of the Digital Services Act” with
European Commission regulators, companies including Alphabet, Meta, Microsoft, TikTok, and
X, and censorious NGOs.431 ACM and the European Commission asked platforms about “how
content is prioritized” in algorithms, existing steps to censor “harmful content,” and “what
structural improvements”—meaning additional censorship measures—were “needed before the
elections[.]”432 ACM and the European Commission clearly expected platforms to take
significant censorship steps ahead of the election.
429 Emails between TikTok staff and European Commission staff (Nov. 6, 2023), see Ex. 230.
430 Non-Paper: TikTok’s approach to election preparedness across the EU (Nov. 8, 2023), see Ex. 231; see Digital
Services Act, supra note 26, Art. 22.
431 Agenda for Roundtable on Elections in the Context of the Digital Services Act (Sep. 15, 2025), see Ex. 277;
Email from Dutch regulators to platforms (Sep. 3, 2025), see Ex. 276.
432 Agenda for Roundtable on Elections in the Context of the Digital Services Act (Sep. 15, 2025), see Ex. 277.
The European Commission and Dutch authorities pressured platforms to take additional
censorship steps ahead of the 2025 Dutch election.
- European Parliament (2024)
Elections for the European Parliament, the legislature of the European Union, took place
from June 6 to 9, 2024. In this case, the European Commission’s pre-election censorship
campaign was particularly problematic because of an inherent conflict of interest: the European
Parliament elects the President of the European Commission and confirms the European
Commissioners.
In the month before the election, European Commission regulators summoned platforms
for at least two meetings about election-specific content moderation measures. “At the outset” of
these “DSA roundtables,” the European Commission warned platforms that it was “actively
monitoring” them and would not hesitate to “take enforcement actions” if platforms did not
sufficiently censor content.433 With that warning in mind, platforms presented to the European
Commission “the specific actions and changes that they have made in relation to election
readiness and that they are implementing in compliance with the DSA election guidelines.”434
Under threat of regulatory retaliation, the world’s largest social media platforms and search
engines made “specific” censorship “changes” for the EU elections. They aligned their content
moderation rules with the censorious DSA Election Guidelines to suppress content opposing the
EU’s ruling regime—which was on the ballot. Moreover, because platforms’ election content moderation rules generally apply worldwide, these EU-mandated censorship efforts could have resulted in censorship of American speech ahead of the 2024 presidential election.435
The European Commission threatened platforms with regulatory reprisal if they did not take
additional censorship steps ahead of the 2024 EU election.
433 Minutes of Election Readiness DSA Roundtables (June 4, 2024), see Ex. 245.
434 Id.
435 See Internal Meta readout of Roundtable on DSA Elections Guidelines (Mar. 1, 2024), Ex. 243.
European Commission Vice President Vera Jourova also met with major tech platforms,
including TikTok, in California less than two weeks before the elections. In her meeting with
TikTok, she commended the censorship efforts it had taken under the Disinformation and Hate
Speech Codes and “encouraged” TikTok to support the integration of these Codes into the formal
DSA framework.436 She also shared “great concerns” about “fringe parties” posting “incredible
videos” on TikTok—a clear indication that she sought to use the DSA to silence political
speech.437 Yet Jourova disavowed specific knowledge of the European Commission’s DSA
investigations, stating that she was focused on her own “narrative” rather than the facts and
“details” actually in the evidentiary record of her investigations.438 And she previewed her
priorities for the rest of 2024, stating that she wanted to begin “asking ourselves whether we are
really filling the online space with positive information” without defining what “positive
information” was or how government could fairly arbitrate such a standard.439
European Commission Vice President Jourova commended TikTok on its censorship initiatives
while admitting that she leads DSA investigations based on “the narrative,” not the facts.
436 Readout of meeting between TikTok staff and European Commission Vice President Vera Jourova (May 28,
2024), see Ex. 10.
437 Id.
438 Id. (emphasis omitted).
439 Id.
After the election, the European Commission graded platforms on their censorship
performance. The Commission required platforms to submit a “detailed post-election report with
quantitative data particularly around election products, content/entity actions and other
meaningful performance metrics” for the period preceding the EU election.440 These reports
allowed the Commission to see how platforms’ additional censorship steps worked in practice to
limit the visibility of disfavored content. TikTok, for example, reported to the Commission that it
“detect[ed] and remove[d] . . . misinformation narratives . . . around migration, climate change,
security and defence and LGBTQ rights.”441 Indeed, TikTok censored over 45,000 pieces of
alleged “misinformation,” including clear political speech, during the EU election period under
stringent content moderation policies adopted under threat of retaliation from the Commission.442
TikTok’s post-election “confidential report” stated that it removed more than 45,000 alleged
“misinformation” posts under pressure from the Commission ahead of the EU election.
In a post-election meeting with platforms, the European Commission also asked what “future
improvements you plan to employ for the various elections in Member States taking place this
year,” implying that it expected additional censorship actions for future European national
elections.443
Altogether, this was an unprecedented effort to interfere with European citizens’ right to
make free and informed decisions about the future of the EU. These voting decisions directly
affected the future of the EU’s decade-long global censorship campaign. It is a tale as old as
time: give the state the power to censor speech, and it uses that power to silence dissent and
strengthen its own grip on power. And in this case, the effects may not have been limited to the
EU.
440 Readout from the third European Commission roundtable on parliamentary elections (July 10, 2024), see Ex.
250; see TikTok 2024 European Parliament Elections Confidential Report (Sep. 24, 2024), Ex. 253.
441 TikTok 2024 European Parliament Elections Confidential Report (Sep. 24, 2024), see Ex. 253.
442 Id.
443 Invitation from the European Commission to a roundtable on election readiness (July 3, 2024), see Ex. 249.
- France (2024)
France held legislative elections in June and July 2024. Once again, the European
Commission hosted a meeting with platforms shortly before the elections.444 This time, the
meeting was a “Q&A” between platforms and the European Commission “on specific electoral
issues.”445 At the European Commission’s request, “Google, Meta, Microsoft, TikTok, and X”
presented on their “preparatory work for the French election,” including their approach to
censoring “disinformation.”446
The European Commission summoned platforms to defend their censorship measures ahead of
the 2024 French election.
444 Agenda for a European Commission meeting on the French parliamentary elections (June 21, 2024), see Ex. 247.
445 Id.
446 Id.
- Moldova (2024)
One month before Moldova’s 2024 presidential election, the European Commission’s
“EU Support Hub” for Moldova hosted a two-day summit with platforms on “addressing
disinformation risks through digital services regulation,” with speakers including the Prime
Minister of Moldova.447 Discussion included “best practices” for censoring alleged
disinformation, with a specific focus on the DSA.448 The presidential election included a
candidate from the same party as Moldova’s Prime Minister,449 raising significant conflict of
interest concerns. Moreover, Moldova is not yet a Member State of the European Union, raising further questions about why a DSA discussion took place in Moldova at all when the Commission claims that the DSA is not extraterritorial and does not apply to Moldova.450
The European Commission hosted a “summit” on disinformation with platforms ahead of the
2024 Moldovan presidential election.
447 Agenda for the 11th Meeting of the EU Support Hub for International Security and Border Management in
Moldova on “Countering Foreign Information Manipulation and Interference” (Sep. 18, 2024), see Ex. 251.
448 Id.
449 See Jakub Pienkowski, Party of Action and Solidarity Gains Full Power: The Opening Record of the Pro-European Government in Moldova, POLISH INSTITUTE OF INT’L AFFAIRS (Sep. 13, 2021).
450 See generally The Republic of Moldova, CENTRE FOR MEDIA PLURALISM AND MEDIA FREEDOM,
https://cmpf.eui.eu/country/moldova/ (last accessed Jan. 31, 2026) (“Moldova is not subject to the Digital Services
Act (DSA).”).
- Ireland (2024 and 2025)
Most major technology platforms have their European headquarters in Dublin, making
the outcome of Irish elections particularly important to the European Commission’s tech agenda.
For the same reason, Ireland’s media regulator, the Coimisiun na Mean, is one of the most
powerful in the world. Both the Commission and the Irish regulator engaged with platforms
ahead of Ireland’s 2024 parliamentary elections and 2025 presidential election, and Irish officials
have noted how they “work closely with the Commission in the enforcement of the DSA.”451
Two weeks before the 2024 election, Coimisiun na Mean hosted a “DSA Election
Roundtable” alongside Commission regulators.452 Ahead of the meeting, the Irish regulator sent
platforms a list of questions they should be prepared to answer, including several about
platforms’ DSA risk assessment, cooperation with left-wing NGOs and biased fact-checkers, and
platforms’ censorship “processes [and] procedures.”453
The Irish media regulator asked platforms about their censorship “processes [and] procedures”
ahead of the 2024 election.
451 Readout of “Protecting The 2024 Elections: From Alarm to Action” (Mar. 8, 2024), see Ex. 244.
452 Emails between Meta staff and Irish regulators (Nov. 7, 2024), see Ex. 256.
453 Coimisiun na Mean Questionnaire on DSA Risk Assessment for Irish Elections, see Ex. 281.
After the meeting, the Irish regulator followed up and asked Meta additional questions
about its “media literacy initiatives,” noting that such initiatives were listed as a best practice
under the not-so-voluntary DSA Election Guidelines.454 Each of these interactions created
additional censorship pressure on platforms. Platforms, after a decade of the European
Commission’s harassment, knew what it wanted—more censorship. They knew that the
European Commission and the Irish media regulator would ask about their censorship measures
during these pre-election meetings. And they knew that the European Commission could—and
would—retaliate against them if they failed to take what the Commission considered adequate censorship measures.
Similarly, ahead of the 2025 Irish presidential election, the Irish media regulator hosted a
“Digital Services Act Election Roundtable” with the European Commission and platforms.
During the meeting, the European Commission warned platforms that the DSA Election
Guidelines required “measures to be taken” ahead of the election, including “reinforcing internal
processes” regarding content moderation.455 During the roundtable portion of the event,
regulators asked platforms specifically “what measures [they had] put in place.”456 Meta
responded that it had updated its “election risk assessment” and “mitigations,” meaning that it
put in place additional censorship steps—though it did not specify exactly what steps those
were.457 Google emphasized its use of AI tools to detect misinformation, while Microsoft stated
that it removed misinformation that violated its policies and “deranked” it (i.e., reduced the content’s visibility) if the content did not violate its policies.
European Commission regulators reminded platforms of their censorship obligations ahead of
the 2025 Irish presidential election.
454 Emails between Meta staff and Irish regulators (Nov. 18, 2024), see Ex. 257.
455 Readout of Coimisiun na Mean Irish Presidential Election Roundtable (Sep. 24, 2025), see Ex. 279.
456 Id.
457 Id.
458 Id.
- Rapid Response Systems
Under the auspices of the Disinformation Code and the DSA Election Guidelines, the
Commission has activated a censorship apparatus known as a “rapid response system” ahead of
several recent European elections.459 Under these “rapid response systems,” European
Commission-approved fact-checkers are given the ability to make priority censorship requests in
the weeks before and after major elections.460 These so-called fact-checkers are invariably left-
wing and pro-censorship—anything but politically neutral.461 Moreover, the requirement that
these fact-checkers be approved by the European Commission creates a clear structural incentive
for the participants to censor Euroskeptic opinion and content that undermines the Commission’s
preferred political narratives. The European Commission has activated rapid response systems
ahead of the 2024 French legislative election,462 the 2024 Moldovan presidential election,463 the
2024-2025 Romanian presidential election,464 and the 2025 German legislative election.465
v. Allegations of pervasive Russian interference in the 2024 Romanian
presidential election, which was annulled after a populist candidate won
the first round of voting, are undermined by internal platform documents.
The first round of the 2024 Romanian presidential election took place on November 24,
2024.466 Independent populist candidate Calin Georgescu won an upset victory, advancing to a
runoff against centrist candidate Elena Lasconi.467 However, in between the first and second
rounds of voting, the Constitutional Court of Romania annulled the election after Romania’s
intelligence service alleged that Russia boosted Georgescu’s campaign on social media.468
Subsequent public reporting has called this narrative into question, finding that another
Romanian political party may have been behind the alleged Russian social media campaign.469
459 See DSA Election Guidelines, supra note 45, § 3.7.
460 See Code of Practice Signatories implement the Code’s commitment for a Rapid Response System ahead of EP
elections, DISINFORMATION CODE TRANSPARENCY CENTER, https://disinfocode.eu/eu-elections-2024/ (last accessed
Jan. 26, 2025).
461 See DSA Censorship Report I, supra note 3, at 29.
462 Emails from European Commission staff to members of the Code of Practice Task Force (June 24, 2024), see Ex.
248.
463 Emails between European Commission staff and TikTok staff (Oct. 23, 2024), see Ex. 255; Emails from
European Commission staff to platforms (Sep. 24, 2024), see Ex. 252.
464 Emails between European Commission staff and platforms (Nov. 29, 2024), see Ex. 263.
465 Emails between European Commission staff and TikTok staff (Feb. 3, 2025), see Ex. 271.
466 Stephen McGrath, Romanian court orders recount of the 1st round of the presidential vote, won by a far-right
outsider, AP (Nov. 28, 2024).
467 Thomas Grove & Alan Cullison, Romania Scraps Election After Russian Influence Allegations, WALL ST. J.
(Dec. 6, 2024).
468 Id.
469 Rowan Ings, The TikTokers accused of triggering an election scandal, BBC (Apr. 30, 2025) (“Authorities still
haven’t provided any concrete evidence of Russian interference in the election . . . . the Romanian Tax Authority
revealed that the [alleged Russian] campaign was paid for by the centre-right National Liberal Party (PNL).”).
Nonetheless, the Romanian election authority barred Georgescu from running in the do-over
election in May 2025.470
Internal TikTok documents and communications with the European Commission and
Romanian authorities further undermine the narrative of Russian interference. TikTok’s internal
intelligence teams consistently assessed that Russia did not conduct a coordinated influence
operation to boost Georgescu’s campaign and repeatedly shared that assessment with European
Commission and Romanian authorities.471 Internal documents also show that ahead of the
annulled election, Romanian regulators empowered under the DSA worked to silence content
supporting populist and nationalist candidates, including through global content removal
orders.472 After the election, as the yet-unsubstantiated allegations of Russian interference began
to circulate, the Commission acted quickly, opening a formal DSA investigation into TikTok’s
moderation practices for political speech.473
Ahead of the 2024 election, Romanian authorities repeatedly made content takedown
requests outside of the formal DSA process, using expansive interpretations of their own power
to mandate removals of political content. As TikTok later told the European Commission, the
platform was “wary of the very informal approach” to takedown requests “adopted by the
[Romanian Elections Authority] in the context of the Romanian elections, in particular the
potential for political influence on the process and/or the unjustified removal of legal content
(such as political speech).”474 TikTok noted that the legal arguments accompanying these
removal demands—when there were any at all—“sought to convey a very broad interpretation”
of the election authority’s power.475 For example, Romanian authorities asked TikTok “to
remove content on the basis that it was ‘disrespectful and insults the PSD party’”476—a left-wing
party that was part of the ruling coalition in Romania’s parliament at the time. Between the first
round of the election and its annulment, the Romanian orders were even more aggressive:
regulators told TikTok that “all materials containing Calin Georgescu images must be
removed.”477 These actions—silencing critics of one candidate and supporters of another—are
profoundly anti-democratic. TikTok agreed, refusing to remove private citizens’ pro-Georgescu
posts on free speech grounds.478
470 Sarah Rainsford & Laura Gozzi, Final ruling bars far-right Georgescu from Romanian vote, BBC (Mar. 11,
2025).
471 See TikTok Response to Commission RFI (Dec. 7, 2024), Ex. 266; TikTok Response to Commission RFI (Dec.
13, 2024), Ex. 268.
472 See Emails between Romanian regulators, European Commission staff, and TikTok staff (Nov. 28, 2024), Ex.
264.
473 See Supervision of the designated very large online platforms and search engines under the DSA, EUROPEAN COMM’N, https://digital-strategy.ec.europa.eu/en/policies/list-designated-vlops-and-vloses#ecl-inpage-tiktok (last
visited Jan. 29, 2026).
474 TikTok Response to Commission RFI (Dec. 13, 2024), see Ex. 268.
475 Id.
476 Id. (emphasis in original).
477 Letter from TikTok to the European Commission (Nov. 29, 2024), see Ex. 265 (emphasis omitted).
478 Id.
TikTok raised concerns about censorship of political content by Romanian regulators ahead of
the 2024 presidential election.
These aggressive takedown requests even included a global takedown order for certain
pro-Georgescu content.479 In response to a Romanian court order, TikTok geo-blocked videos
that were alleged to be out of compliance with campaign finance transparency requirements.480
Subsequently, the Romanian elections regulator stated that “the [Romanian court] decision”
mandating removal of the content “is mandatory not only in Romania.”481 TikTok appears to
479 Emails between Romanian regulators, European Commission staff, and TikTok staff (Nov. 28, 2024), see Ex.
264.
480 Id.
481 Id.
have stood firm and refused to remove content globally for non-compliance with local Romanian
law—much to the chagrin of Romanian regulators.482 After the election, the Romanian
Intelligence Service complained that while TikTok “block[ed] visual access to” certain posts
“from the territory of Romania, they still remaining [sic] visible in other state [sic] and being
[sic] possible to be distributed.”483 Global takedowns violate the sovereignty of the United States
and every other country on earth, allowing foreign judges and regulators to censor content where
they have no jurisdiction to do so.
Romanian authorities claimed the power to issue global takedown orders.
On December 6, the Romanian Constitutional Court annulled the first round of the
presidential election and canceled the runoff vote based on the Romanian Intelligence Service’s
allegation that Russia conducted a TikTok campaign to artificially boost support for
Georgescu.484 TikTok’s internal threat intelligence team did not concur in this assessment,
repeatedly informing Romanian authorities and the Commission that it lacked evidence to
support their allegations.485 In submissions to the European Commission, TikTok stated that it
detected three coordinated influence operations (CIOs) on the platform during the Romanian
election period—none of which emanated from Russia.486 Only one of these CIOs sought to
promote Georgescu’s campaign.487 It operated from Romania and amassed fewer than 2,000
followers.488
482 Romanian Information Service: Note No. 2 to the Romanian Supreme Council for National Defense (2024), see
Ex. 236.
483 Id.
484 Thomas Grove & Alan Cullison, Romania Scraps Election After Russian Influence Allegations, WALL ST. J.
(Dec. 6, 2024); see also Romanian Ministry of Internal Affairs: Information Note (2024), Ex. 238; Romanian
Information Service: Note No. 1 to the Romanian Supreme Council for National Defense (2024), Ex. 237; Romanian
Information Service: Note No. 2 to the Romanian Supreme Council for National Defense (2024), Ex. 236; External
Information Service Note: Analysis of national security risks generated by the actions of state and non-state cyber
actors on IT&C infrastructures, support for the electoral process (Nov. 28, 2024), Ex. 260.
485 See TikTok Response to Commission RFI (Dec. 13, 2024), Ex. 268; TikTok Response to Commission RFI (Dec.
7, 2024), Ex. 266; TikTok slide deck: Romanian Elections Platform Integrity Briefing (Nov. 28, 2024), Ex. 259.
486 TikTok Response to Commission RFI (Dec. 13, 2024), see Ex. 268; TikTok Response to Commission RFI (Dec.
7, 2024), see Ex. 266.
487 TikTok Response to Commission RFI (Dec. 13, 2024), see Ex. 268.
488 Id.
On November 28, four days after the election and as allegations of Russian influence
began to emerge, TikTok briefed the Romanian media regulator on its election integrity
measures.489 TikTok noted “suspicious activity around potentially undisclosed paid political
promotion videos” related to Georgescu, but did not tie this to Russia.490 Subsequent media
reports indicated that another Romanian political party orchestrated the TikTok campaign in
question.491
On December 7, TikTok made a similar statement to the European Commission, writing
that “TikTok has not found, nor has been presented with, any evidence of a coordinated network
of 25,000 accounts associated with Mr. Georgescu’s campaign”492—the key allegation by the
Romanian authorities.493
TikTok informed the European Commission that it had “not found, nor been presented with”
evidence to support Romanian authorities’ key allegation of Russian interference.
Nonetheless, the European Commission ignored TikTok’s findings and used the
opportunity to suppress legitimate political speech online. On November 29, DG-Connect
convened a roundtable on Romanian political content moderation with platforms, Romanian
489 TikTok slide deck: Romanian Elections Platform Integrity Briefing (Nov. 28, 2024), Ex. 259.
490 Id.
491 Rowan Ings, The TikTokers accused of triggering an election scandal, BBC (Apr. 30, 2025) (“The Romanian
Tax Authority revealed that the [alleged Russian] campaign was paid for by the centre-right National Liberal Party
(PNL).”).
492 TikTok Response to Commission RFI (Dec. 7, 2024), see Ex. 266.
493 See Romanian Information Service: Note No. 1 to the Romanian Supreme Council for National Defense (2024),
Ex. 237.
authorities, and European Commission regulators.494 As allegations of Russian interference
intensified, the European Commission also demanded information about TikTok’s political
content moderation practices, asking about “changes” to TikTok’s “processes, controls, and
systems for the monitoring and detection of any systemic risks.”495 The European Commission
used this still-unproven narrative to pressure TikTok to engage in more aggressive political
censorship. In response, TikTok informed the Commission that it would censor content with the
terms “coup” and “war”—clear political speech related to Romania’s perceived perversion of the
democratic process—“for the next 60 days to mitigate the risk of harmful narratives.”496 But that
was not enough. On December 17, the European Commission opened a formal DSA
investigation into TikTok for failing to aggressively censor content before and after the
Romanian election.497
The European Commission pressured TikTok to censor additional political content in response
to unproven allegations of Russian interference in the 2024 Romanian election.
By late December 2024, media reports citing evidence from Romania’s tax authority
found that the alleged Russian interference campaign had, in fact, been funded by another
Romanian political party.498 Yet the European Commission and Romanian authorities continued
their aggressive censorship campaign. In a February 2025 meeting, DG-Connect summoned
TikTok’s product team for a meeting on its “deceptive behavior policies and enforcement” and
“potential[ly] ineffective” DSA “mitigation” measures.499 The European Commission’s desire to
meet with TikTok’s internal product team, rather than the government affairs and compliance
staff whose job it was to manage TikTok’s relationship with the Commission, indicates that the
European Commission sought deeper influence over the platform’s internal moderation
processes.
494 Discussion Questions for Election Roundtable on Romanian Elections (Nov. 29, 2024), see Ex. 262; Invitation to
Election Roundtable on Romanian Elections (Nov. 28, 2024), see Ex. 258.
495 TikTok Response to Commission RFI (Dec. 7, 2024), see Ex. 266.
496 TikTok Report on Romanian content (Dec. 12, 2024), see Ex. 267.
497 Supervision of the designated very large online platforms and search engines under the DSA, EUROPEAN COMM’N,
https://digital-strategy.ec.europa.eu/en/policies/list-designated-vlops-and-vloses#ecl-inpage-tiktok (last visited Jan. 29, 2026).
498 See Denis Cenusa, Romanian liberals orchestrated Georgescu campaign funding, investigation reveals, BNE
INTELLINEWS (Dec. 22, 2024).
499 Emails between European Commission staff and TikTok staff (Feb. 12, 2025), see Ex. 272.
The European Commission continued harassing TikTok even after claims of Russian interference
in the Romanian election were undermined.
The same month, Romanian authorities met with TikTok to discuss its censorship
“measures” and “compliance” with the censorious DSA Election Guidelines ahead of the
rescheduled May election.500 One month later, in March, Romanian regulators held a roundtable
with major platforms and misinformation pseudoscientists, focusing on “mitigation measures
envisaged or already put in place by [platforms], in line with the recommendations included in
the” DSA Election Guidelines.501 The European Commission and Romanian authorities were
committed to maintaining aggressive censorship measures ahead of the rescheduled elections,
even though their previous allegations of Russian influence had not been substantiated.
Throughout the Romanian electoral process, NGOs empowered by the European
Commission to make priority censorship requests—either as DSA Trusted Flaggers or through
the Commission’s Rapid Response System—made politically biased content removal demands.
The Bulgarian-Romanian Observatory of Digital Media, which is funded by the EU,502 sent
TikTok spreadsheets containing hundreds of censorship requests in the days after the first round
of the initial election.503 While some of the content may have been genuinely objectionable,
much of the flagged content appears to be standard pro-Georgescu and anti-progressive political
speech, including political content related to Georgescu’s positions on environmental issues and
500 Draft agenda for meeting between TikTok and Romanian regulators (Feb. 27, 2025), see Ex. 274.
501 Internal email among Meta staff (Feb. 24, 2025), see Ex. 273; Agenda for Roundtable – 2025 Romanian
Presidential Elections (Mar. 3, 2025), see Ex. 275.
502 See Fact Checking, BULGARIAN-ROMANIAN OBSERVATORY OF DIGITAL MEDIA,
https://brodhub.eu/en/fact-checking/ (last visited Jan. 29, 2026).
503 Spreadsheet of content about the Romanian election flagged for TikTok, see Ex. 282; Spreadsheet of content
about the Romanian election flagged for TikTok, see Ex. 283; Spreadsheet of content about the Romanian election
flagged for TikTok, see Ex. 284; Spreadsheet of content about the Romanian election flagged for TikTok, see Ex.
285.
Romania’s membership in the Schengen Area, the EU’s internal system of open borders.504
Similarly, Romanian fact-checker Funky Citizens flagged at least 334 TikTok videos for removal
during November and December 2024.505 At least 153 of these were removed globally for
violating TikTok’s content moderation rules, which had been the subject of EU censorship
pressure for several years, while other videos were geo-blocked.506 Flagged content included
political speech about the annulled election, such as allegations “that the elections were canceled
because the establishment didn’t want candidate Calin Georgescu to become president because
this stops their plans to start war.”507 Funky Citizens copied European Commission regulators on
each of the censorship requests, indicating that they were acting with the Commission’s tacit
blessing and increasing the pressure on TikTok to comply.508
So-called fact-checkers empowered by the European Commission urged platforms to censor
political speech about the annulment of the Romanian election.
504 Spreadsheet of content about the Romanian election flagged for TikTok, see Ex. 284.
505 Emails among TikTok, the European Commission, and third parties (Dec. 20, 2024), see Ex. 185; Emails among
TikTok, the European Commission, and third parties (Dec. 20, 2024), see Ex. 186; Emails among TikTok, the
European Commission, and third parties (Dec. 20, 2024), see Ex. 270; Emails among TikTok, the European
Commission, and third parties (Dec. 19, 2024), see Ex. 183; Emails among TikTok, the European Commission, and
third parties (Dec. 19, 2024), see Ex. 184; Emails among TikTok, the European Commission, and third parties (Dec.
12, 2024), see Ex. 181.
506 Id.
507 Emails among TikTok, the European Commission, and third parties (Dec. 20, 2024), see Ex. 185.
508 Emails among TikTok, the European Commission, and third parties (Dec. 20, 2024), see Ex. 185; Emails among
TikTok, the European Commission, and third parties (Dec. 20, 2024), see Ex. 186; Emails among TikTok, the
European Commission, and third parties (Dec. 20, 2024), see Ex. 270; Emails among TikTok, the European
Commission, and third parties (Dec. 19, 2024), see Ex. 183; Emails among TikTok, the European Commission, and
third parties (Dec. 19, 2024), see Ex. 184; Emails among TikTok, the European Commission, and third parties (Dec.
12, 2024), see Ex. 181.
The European Commission has repeatedly and aggressively used its DSA powers to
silence conservative speech—especially ahead of major European elections. The EU Internet
Forum, initially created to stop online terrorist recruitment, now labels conventional conservative
political speech as “violent right-wing extremism” and pressures platforms to censor it, going so
far as to give them a handbook on how to do so. The EUIF has undertaken all of this even though
it admits that the content it targets violates neither European law nor platform rules.
Since the DSA came into force, political censorship has only intensified. The European
Commission has issued de facto binding political censorship rules for European elections and
pressured platforms to aggressively censor political speech ahead of at least nine major European
elections since 2023. This culminated in Romania in 2024, where the presidential election was
annulled based on allegations of Russian social media interference that are disputed by media
reports and, here for the first time, internal platform documents. And because platforms institute
a common set of election practices worldwide, EU dictates may have caused censorship of U.S.
political speech ahead of the 2024 presidential election.
VI. OPPORTUNITIES FOR REFORM EXIST, BUT THE EUROPEAN COMMISSION CONTINUES
TO USE THE DSA AS A HEAVY-HANDED CENSORSHIP TOOL.
More than a decade into its censorship project and more than three years after the DSA’s
passage, the European Commission has had ample opportunity to stop these censorship efforts
and recommit itself to fundamental free speech principles. But at every juncture, Europe has
instead sought even stricter control of political discourse worldwide. It is not too late—
opportunities exist for critical reforms to protect free expression in Europe and around the globe.
But every indicator shows the European Commission moving in the opposite direction,
continuing its efforts to control online debate within and outside of the European Union.
A. The European Commission fined X €120 million for defending free speech and open
discourse online.
After Elon Musk’s purchase of the platform then called Twitter, it became the first major
social media platform to recommit itself to free expression. The platform, now called X,
discarded biased, left-wing fact-checkers in favor of a system called Community Notes, in which
users can append disclaimers to misleading or inaccurate posts.509 For this reason, X left the
European Commission’s Disinformation Code, which requires platforms to use these third-party
fact-checkers.510 X reinstated users previously banned for their political speech, including
509 See Vishwam Sankaran, Twitter launches Community Notes feature that lets people add context to tweets, THE
INDEPENDENT (Dec. 12, 2022). Meta followed in X’s footsteps with a similar change in 2025. Kate Conger, Meta
Turns to Community Notes, Mirroring X, N.Y. TIMES (Jan. 7, 2025).
510 Francesca Gillett, Twitter Pulls out of Voluntary EU Disinformation Code, BBC (May 27, 2023); Disinformation
Code, supra note 124, § VII.
President Trump,511 and refused to comply with foreign censorship orders, including global
content removal orders, in Brazil and Australia.512
Naturally, these free-speech actions collided with the EU’s global censorship efforts.
From the very beginning, the Commission has targeted X for its defense of free expression. After
X left the allegedly voluntary Disinformation Code in May 2023, then-Commissioner Thierry
Breton threatened the platform with retaliation under the DSA, warning X “you can run but you
can’t hide.”513 Once the DSA became enforceable in August 2023, Breton followed through on
his threat. In October 2023, less than two months after the DSA’s obligations became legally
binding, European Commission regulators opened an investigation into X’s use of
Community Notes instead of fact-checkers.514 Since then, the European Commission has opened
multiple additional investigations of X,515 and in August 2024, Breton threatened X with
retaliation under the DSA for hosting a U.S.-based interview with President Trump ahead of the
2024 U.S. presidential election.516
The European Commission’s campaign against X culminated in December 2025 with a
€120 million (approximately $140 million) fine—nearly six percent of X’s global revenue.517 On
its own terms, the decision is ridiculous, punishing X for, among other things, “misappropriating
the historical meaning” of blue checkmarks on Twitter by changing its business model and
offering them to premium X subscribers.518 The European Commission penalized X for at best
minor violations like having an “ad repository”—a storehouse of ads previously hosted on the
site—that produces results in spreadsheets rather than directly on the website.519 And the
Commission asserted expansive extraterritorial jurisdiction, claiming that the DSA could require
X, an American company, to hand American data over to researchers around the world.520 The
proffered offenses are so minor that there is only one conclusion: this is all pretextual. X,
because of its commitment to free expression, has been in the European Commission’s crosshairs
from the very beginning. The Commission’s December 2025 decision is the culmination of this
campaign, and a clear warning to every platform: resistance to the Commission’s censorship
efforts will be met with severe regulatory retaliation. And the threat to X remains: the
511 Shannon Bond, Elon Musk allows Donald Trump back on Twitter, NPR (Nov. 19, 2022).
512 See STAFF OF THE H. COMM. ON THE JUDICIARY AND THE SELECT SUBCOMM. ON THE WEAPONIZATION OF THE
FED. GOV’T OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., THE ATTACK ON FREE SPEECH ABROAD AND THE
BIDEN ADMINISTRATION’S SILENCE: THE CASE OF BRAZIL (Comm. Print Apr. 17, 2024); Letter from Rep. Jim
Jordan, Chairman, H. Comm. on the Judiciary, to Ms. Julie Inman Grant, Australian eSafety Commissioner (Nov.
18, 2025).
513 Thierry Breton (@ThierryBreton), X (May 26, 2023, 4:30 PM),
https://x.com/ThierryBreton/status/1662194595755704321?lang=en.
514 Press Release, European Comm’n, The Commission sends request for information to X under the Digital Services
Act (Oct. 11, 2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4953.
515 Supervision of the designated very large online platforms and search engines under the DSA, EUROPEAN COMM’N,
https://digital-strategy.ec.europa.eu/en/policies/list-designated-vlops-and-vloses#ecl-inpage-tiktok (last visited Jan. 29, 2026).
516 Letter from Mr. Thierry Breton, Comm’r for Internal Mkt., European Comm’n, to Mr. Elon Musk, Owner, X
(Aug. 12, 2024).
517 X Decision, supra note 61; see also House Judiciary GOP (@JudiciaryGOP), X (Jan. 28, 2026, 4:09 PM),
https://x.com/JudiciaryGOP/status/2016619751183724789.
518 X Decision, supra note 61, at 27.
519 Id. at 57.
520 Id. at 93-94.
Commission’s decision states that X faces a ban in the EU if X fails to give in to the
Commission’s censorship demands within 90 days.521
i. The European Commission fined X €45 million for innovating the way
blue checkmarks are awarded.
First, the Commission fined X €45 million for changing the way blue checkmarks are
awarded on the platform.522 Before Musk’s purchase of the platform, blue checkmarks were
awarded to verified accounts deemed notable by Twitter staff.523 X changed this policy in late
2022 and early 2023 to great public fanfare, introducing a regime by which blue checkmarks
were awarded to users who pay for a premium experience on X.524 The EU claims that this
change “deceives” X users and “impair[s] their ability to make free and informed decisions” by
“misappropriating the historical meaning of the verification checkmarks” and “misappropriating
the meaning of cross-industry visual standards.”525 The Commission also took issue with X’s
algorithmic amplification of paying users, claiming that this “deceiv[es] the recipients of the
service about the significance of their content.”526
The European Commission accused X of violating the DSA by “misappropriating” the meaning
of a blue checkmark.
The European Commission’s allegation that the new blue checkmark system is
“dece[ptive]” is profoundly unpersuasive.527 As X noted to the Commission, “the average user of
X is aware of the meaning of” blue checkmarks “due to extensive media coverage on the
changed policy, public announcements about this change . . . and the explanatory webpages
where the provider of X elaborates on how the new policy differs from the historical verification
program of Twitter.”528 The Commission’s allegation of deception assumes, quite simply, that
the average citizen is ignorant and stupid. The Commission’s own example proves the point. The
decision uses a blue-checkmark X account purporting to be “Donald Duck”—the fictional
521 Id. at 165.
522 Id. at 25-51.
523 Id. at 25.
524 See Caitlin O’Kane, Twitter is officially ending its old verification process on April 1. To get a blue check mark,
you’ll have to pay., CBS NEWS (Mar. 24, 2023); Michael Dobuski, Twitter begins phasing out legacy ‘blue check
marks’ in latest platform change, ABC NEWS (Apr. 1, 2023).
525 X Decision, supra note 61, at 27, 33.
526 Id. at 34-36.
527 Id. at 27.
528 Id. at 42.
Disney character—as an example of X’s allegedly deceptive practices.529 But the Commission’s
example proves the opposite point. For one to be deceived by the blue-check Donald Duck
account—to think that the account had been verified under the pre-2023 rules—he would have to
assume that (1) this fictional duck had come to life, and (2) it was a real X user. No rational
person would look at the blue-check Donald Duck account and think that was the case. Instead,
they would recognize that the standards by which blue checkmarks are applied had changed and
continue using X with that knowledge.
The European Commission’s decision fining X gave this ‘Donald Duck’ account as an example
of X’s allegedly deceptive blue-checkmark policy.
The European Commission’s allegation that algorithmic amplification of paying users
“deceiv[es] the recipients of the service about the significance of their content” is similarly
529 Id. at 29.
weak.530 The number of reposts, likes, and views for any piece of content on X is prominently
displayed just below the body of the post. In other words, every single piece of content on X is
displayed alongside an objective measurement of its “significance.”531 X is also open about the
fact that paying users’ posts receive an algorithmic boost.532 In that context, the notion that X is
deceiving users about the significance of content is laughable.
The European Commission’s arguments are so weak that they seem clearly pretextual,
particularly in light of the Commission’s history of targeting X. But even if one takes them
seriously, they reveal something deeply troubling about the DSA and European regulators. By
the Commission’s own understanding, the blue checkmark-related fines are a penalty for
attempted innovation. The argument that X is out of compliance with the DSA for
“misappropriating the historical meaning of the verification checkmarks” and “misappropriating
the meaning of cross-industry visual standards”533 amounts to a claim that X is out of
compliance with the DSA because it dares to do things differently than other social media
companies.
ii. The European Commission fined X €35 million for bogus violations
related to its ad repository.
Article 39 of the DSA requires platforms to keep a “repository” of all ads hosted on the
platform within the EU over the last year. X took good faith steps to build an effective ad
repository that protected sensitive user data while complying with the major requirements of
Article 39—yet it was still fined €35 million. The Commission fined X for, among other
reasons, wrongly interpreting a vague provision related to the searchability of the ad repository
and for producing data in a spreadsheet, rather than directly within X’s website. These
allegations once again indicate that the Commission’s fine was a targeted, pretextual response to
X’s defense of free expression.
Under Article 39, platforms’ ad repositories must display seven different pieces of
information about each ad and users must be able to make “multicriteria queries” (e.g. search for
specific ads that fulfill multiple criteria input by the user).534 X’s ad repository allows these
“multicriteria queries”—users can create custom searches based on where and when the ad was
displayed and who the advertiser was.535 But the Commission, citing no clear textual
justification, said this was not enough—X had to allow “multicriteria queries” using all seven
pieces of information.536
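The dispute over “multicriteria queries” can be made concrete with a small sketch. The Python snippet below is illustrative only: the record fields (`advertiser`, `country`, display dates) are hypothetical stand-ins, since Article 39 lists disclosure items rather than a data schema. The point is simply that a multicriteria query filters ad records on every criterion the user supplies at once.

```python
# Illustrative sketch of a "multicriteria query" over an ad repository.
# Field names are hypothetical, not taken from the DSA or X's systems.
from dataclasses import dataclass
from datetime import date

@dataclass
class AdRecord:
    advertiser: str
    country: str        # EU member state where the ad was displayed
    shown_from: date
    shown_to: date

def multicriteria_query(repo, advertiser=None, country=None, active_on=None):
    """Return ads matching every criterion the caller supplies."""
    results = []
    for ad in repo:
        if advertiser is not None and ad.advertiser != advertiser:
            continue
        if country is not None and ad.country != country:
            continue
        if active_on is not None and not (ad.shown_from <= active_on <= ad.shown_to):
            continue
        results.append(ad)
    return results

repo = [
    AdRecord("Acme GmbH", "DE", date(2024, 11, 1), date(2024, 11, 30)),
    AdRecord("Acme GmbH", "RO", date(2024, 11, 20), date(2024, 12, 5)),
]
# Combining two criteria (advertiser AND country) narrows the result set.
hits = multicriteria_query(repo, advertiser="Acme GmbH", country="RO")
```

Under this reading, X supported queries combining some criteria; the Commission’s position amounted to demanding that every disclosed field be usable as a filter.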
Even more ridiculous, the Commission found X in violation of the DSA for producing
the results of ad repository searches in a spreadsheet (a .csv file), rather than on its website
directly.537 The Commission argued that this, too, “reduces the searchability” of X’s ad
530 Id. at 35.
531 Id.
532 See About X Premium, X HELP CENTER, https://help.x.com/en/using-x/x-premium (last accessed Jan. 29, 2026).
533 X Decision, supra note 61, at 27, 33.
534 Digital Services Act, supra note 26, Art. 39.
535 X Decision, supra note 61, at 56-57.
536 Id.
537 Id. at 57.
repository because it requires users to download “third-party software” and therefore could
frustrate attempts to research advertisements on X.538 The Commission also argued that
producing results in a spreadsheet made X’s ad repository a “search tool” rather than a
“searchable tool,” and therefore brought it out of compliance with Article 39.539 These
arguments are weak. Researchers are capable of using spreadsheet programs, as manipulating
large data sets in spreadsheet software is an integral part of quantitative research in almost any
field. The Commission’s other argument—that X illegally provides a “search tool” rather than a
“searchable tool”540—is equally unpersuasive. It is unclear how these two things differ, how X’s
ad repository is a search tool but not a searchable tool, or how X would have known the
difference between these two things. Yet it was a key reason for X’s €35 million ad repository
fine.
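The spreadsheet point is easy to demonstrate. A .csv export of the kind at issue can be loaded and analyzed with nothing beyond a language’s standard library; in this hedged sketch, the column names and figures are invented for illustration and do not reflect X’s actual export format.

```python
# Reading and aggregating a hypothetical .csv ad-repository export
# using only the Python standard library.
import csv
import io

# Stand-in for a downloaded export file; columns are invented.
export = io.StringIO(
    "advertiser,country,impressions\n"
    "Acme GmbH,RO,120000\n"
    "Acme GmbH,DE,95000\n"
)

rows = list(csv.DictReader(export))
# A typical research step: filter and aggregate, no special software needed.
ro_impressions = sum(int(r["impressions"]) for r in rows if r["country"] == "RO")
```

A few lines suffice to filter, sort, or total an exported dataset, which is why the report treats the “third-party software” objection as weak.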
The Commission fined X for having an ad repository that allowed some “multicriteria queries”
but not others and for producing data in an external spreadsheet.
iii. The Commission fined X €40 million for refusing to enforce the DSA’s
researcher access provisions extraterritorially.
Article 40 of the DSA requires platforms to give misinformation pseudoscientists “access
without undue delay to data.”541 In practice, platforms must grant access to their back-end
systems, allowing these so-called researchers to scrape data from the platforms en masse using
an application programming interface (API).542 Article 40 sets out conditions outlining who is
eligible for this coveted back-end access: researchers must be “independent from commercial
interests,” and the data must be “for the sole purpose of conducting research” related to the EU.
538 Id.
539 Id. at 75.
540 Id.
541 Digital Services Act, supra note 26, Art. 40.
542 See Victor Alamercery, Data Access under the EU Digital Services Act, EUROPEAN COMM’N (July 9, 2025).
X assessed applications for API access in accordance with the limitations in Article 40—and it
was penalized for it. Moreover, the Commission claimed the power to require extraterritorial
action: under its interpretation of the DSA, X, an American company, must give American data
to researchers around the world.
First, the Commission faulted X for enforcing the statutory requirement that researchers
are entitled to access platforms’ proprietary data “for the sole purpose of conducting research
that contributes to the detection, identification and understanding of systemic risks in the
[European] Union.”543 X, reading the plain text of Article 40 in conjunction with the
presumption against extraterritorial application of statutes, generally denied data access requests
for which the scope of the proposed research went beyond the EU.544 The Commission said that
X was required to grant these requests under the DSA, even when the requested data had no clear
EU nexus, because “it is justified for a research project to also consider wider geographical
contexts.”545 The Commission acknowledged that the requirement for data to have an EU nexus
“is found in Article 40(12),” but still found X in violation of the DSA for taking an
“unnecessarily narrow” interpretation of the provision.546 The Commission’s broader reading
would require X to provide researchers with data having nothing to do with the EU—a clear
extraterritorial application of the DSA.
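X’s reading of Article 40 can be illustrated with a minimal sketch. The `Application` fields and the two checks below are simplified assumptions, not the DSA’s actual legal test; the point is only that, on X’s interpretation, an applicant is screened out when its proposed research lacks independence or extends beyond the EU.

```python
# Minimal, hypothetical sketch of screening a researcher data-access
# application against the two Article 40 conditions discussed above.
from dataclasses import dataclass

@dataclass
class Application:
    independent_of_commercial_interests: bool
    research_scope: set   # geographic scope of the proposed research

def eligible_under_article_40(app: Application) -> bool:
    # Condition 1: independence from commercial interests.
    if not app.independent_of_commercial_interests:
        return False
    # Condition 2, as X read it: the research must concern systemic risks
    # in the Union -- requests whose scope went beyond the EU were denied.
    return bool(app.research_scope) and app.research_scope <= {"EU"}
```

The Commission’s broader reading would, in effect, make the second check pass even for scopes with no EU nexus, which is the extraterritorial result the report criticizes.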
The European Commission fined X for enforcing Article 40’s plain-text requirements and failing
to hand over non-EU data.
543 Digital Services Act, supra note 26, Art. 40.
544 X Decision, supra note 61, at 91-93.
545 Id. at 133.
546 Id.
The European Commission also fined X for rejecting data access requests from researchers
“established outside the [European] Union.”547 On the Commission’s reading, the DSA requires
platforms to grant back-end data access to researchers around the world, no matter where they
are located.548 Paired with the expansion of the data access requirement to non-EU data, this
means that an American company could be required to give non-EU researchers access to
American data—all because of a European law. It is precisely the type of chilling, extraterritorial
application of a foreign censorship law that the Committee has warned about for more than a
year. And it sets a dangerous precedent: under the DSA, the Commission claims the power to
regulate beyond its borders.
The European Commission applies DSA data access requirements to researchers around the
world.
The Commission’s other reasons for finding X in violation of Article 40 were meritless.
For example, X enforced Article 40’s requirement that researchers are “independent of
commercial interests” by requiring prospective researchers to provide “information about the
organisation’s board members, membership, shareholders, grant recipients” and “‘indirect
funding.’”549 This information is plainly relevant to the determination that a researcher’s
organization is disentangled from corporations that stand to benefit from their research. Yet the
Commission found X in violation of the DSA for collecting simple, useful facts to enforce the
DSA as it was written.550
The European Commission faulted X for enforcing the requirement that researchers are
“independent of commercial interests.”
Similarly, the European Commission found X in violation because of the default level of data
access it granted to successful research applicants. X, upon a successful research application,
granted researchers access to the API for 1 million tweets monthly, for a term of six months.551
Researchers were able to request longer or more robust access if their research required it.552
547 Id. at 39.
548 See id.
549 Id. at 95; see Digital Services Act, supra note 26, Art. 40.
550 X Decision, supra note 61, at 95.
551 Id. at 99-100.
552 Id. at 115.
Somehow, this was not enough for the Commission—even as it admitted that other platforms
“have even more restrictive quotas in place.”553
Finally, the European Commission’s fine of X sheds light on its definition of “systemic
risk” under the DSA. X argued that in the absence of guidance from the Commission, it did not
consider “misinformation . . . in and of itself” to be a “systemic risk in the EU.”554 The
Commission disagreed with X, apparently arguing that all misinformation constitutes a
“systemic risk” under DSA Article 34.555 This means that alleged misinformation must be
“mitigated” under Article 35, effectively requiring social media platforms to broadly censor
information deemed false by EU bureaucrats.
iv. X alleges that the European Commission engaged in significant
investigative misconduct.
In addition to its comprehensive rebuttal of the European Commission’s pretextual
allegations, X alleges that the Commission made significant procedural errors that inhibited its
right of defense.556 According to X, the Commission gave X’s lawyers only five days for an in
camera review of more than 3,800 confidential documents substantiating the Commission’s
allegations.557 During these five days, X’s lawyers were given computers that did not work,
faced “rigid entry and exit rules” for the review room, and remained under the constant watch of
Commission staff, effectively barring counsel from “freely discussing” their defense of X.558 The
Commission also was empowered to review counsel’s notes upon their departure from the read
room.559 Finally, X alleges that the Commission improperly used notes from informal interviews
and meetings, rather than sworn testimony, to substantiate its allegations.560
X alleged that the Commission systematically denied it the right to defend itself against the
Commission’s allegations.
553 Id. at 145.
554 Id. at 130.
555 Id.
556 Id. at 153-163.
557 Id. at 157, 159.
558 Id. at 153.
559 Id. at 160.
560 Id. at 154.
v. The European Commission is threatening to ban X if it does not give in to
censorship demands.
The European Commission’s €120 million fine is near the statutory maximum of six
percent of a platform’s global revenue.561 But that is not even the most severe potential
punishment: the European Commission is threatening to ban X in the EU if it does not comply
with a list of censorship demands, including:
- “More prominently and directly providing clear and non-misleading information about the meaning of the” blue checkmark (even though X operated a banner doing this in the EU for four months in 2025);
- Ending the algorithmic amplification of paying users;
- Providing ad repository search results on X’s website, rather than in an external spreadsheet;
- Granting data access requests to researchers located outside the EU;
- Giving more expansive data access to researchers; and
- Changing its terms of service to allow “qualified researchers” to scrape data from X’s public website without prior approval.562
Banning one of the most popular social media platforms in the world would be a grave
assault on free expression. Yet the European Commission is threatening to do exactly that.563
The European Commission is threatening to ban X in the EU if it doesn’t comply with censorship
demands.
561 Id. at 168-179.
562 Id. at 163, 165-168.
563 Id. at 165.
The European Commission’s fine and threatened ban of X is a seminal moment in the
European campaign to control global online discourse. Since before the DSA’s enactment, the
Commission has targeted X for its pro-speech policies and advocacy. The Commission’s
decision is neither an impartial assessment of the facts nor a dispassionate reading of the law—it
is a 183-page pretext for fining X because it hosts speech that is politically inconvenient to the
European ruling class.
The weakness of the European Commission’s arguments makes this apparent.
Repeatedly, the Commission distorts the text of the DSA to manufacture violations that it can
hang on X. It ignores plain facts and simple law in a campaign to criminalize X’s embrace of
fundamental free speech principles. Along the way, it provides important insights into the
European Commission’s interpretation of the DSA. In many ways, it is consistent with the
European Commission’s regulatory approach in recent years: it assumes that EU citizens are
stupid, unable to discern and make free decisions about what is true and what is not. It penalizes
innovation, even explaining that it is fining X because the company tried to operate the platform
differently than Twitter’s previous leadership or other social media companies. And perhaps
most concerning, it shows that the EU seeks to enforce the DSA beyond its borders—a grave
threat to free speech in the United States.
B. The European Commission’s ongoing initiatives indicate that it remains committed
to censorship.
Since the DSA’s passage, the Commission, led by President Ursula von der Leyen, has
maintained its censorship efforts. On top of the already-labyrinthine European censorship
architecture, von der Leyen and her allies have proposed additional censorship laws and
initiatives creating even stricter requirements for removal of alleged misinformation and hate
speech. While she has pledged her second term as President to “simplification” of Europe’s
bloated regulatory regime,564 she has refused to extend that deregulatory agenda to online
speech.
i. The EU Democracy Shield will expand the European Commission’s
censorship powers and possibly end anonymity on social media.
President von der Leyen has made the so-called “EU Democracy Shield” a centerpiece of
her second presidential term. Unveiled in November 2025, the Democracy Shield proposes new
initiatives to censor disfavored online content.565 Where Europe should be correcting its mistakes
and reintroducing fundamental free speech principles, it is doubling down and expanding the
Commission’s capacity to silence its political opponents.
The Shield promises several new censorship initiatives by 2027. First, it states that “the
Commission will prepare a DSA incidents and crisis protocol.”566 This protocol, like the DSA
Election Guidelines, will impose additional DSA censorship requirements for times of “crisis.” The
564 See Jennifer Rankin, EU launches ‘simplification’ agenda in effort to keep up with US and China, THE GUARDIAN (Jan. 29, 2025).
565 Democracy Shield Proposal, supra note 66.
566 Id. at § 2.2.
Commission promised that this censorship protocol will “ensure swift reactions”—meaning
censorship—“to large-scale and potentially transnational” events.567 Vaguely defined, this
protocol will effectively be a censorship switch that EU authorities can flip on whenever they
sense that they are losing control of online narratives. And even if it is voluntary in name, it will
be binding in practice, as prior “voluntary” EU regulatory arrangements, like the Disinformation
Code, have shown.
Similarly, the Democracy Shield promises an expansion of the Code of Conduct on
Disinformation. The plan states that the European Commission will “explore possible further
measures with the Code’s signatories,” zeroing in specifically on “labelling of AI-generated . . .
content” and “user verification tools,” including the “EU Digital Identity Wallets.”568 Of course,
the Disinformation Code’s commands are voluntary in name only—something the Democracy
Shield makes clear. The plan states that “the Commission will assess [platforms’] levels of
commitment . . . and . . . implementation” of the Code as part of its DSA supervision.569 The
upshot: the Commission is seeking to institute de facto binding obligations for platforms to
verify users’ identity, possibly ending anonymity on social media. Regulators seeking to harass
users for disfavored posts could simply compel platforms to produce anonymous users’ verified
identities. It would be perhaps the greatest threat to free speech yet.
The plan also promises to create two new EU censorship hubs: the European Center for
Democratic Resilience and the European Network of Fact-Checkers. The Center for Democratic
Resilience will “support operational cooperation and capacity building” for governments to
respond to alleged disinformation campaigns570—meaning it will coordinate rapid and large-
scale cross-border censorship. The European Network of Fact-Checkers will be yet another
coordination forum for left-wing, pro-censorship NGOs.571 Without any apparent irony, the
Commission states that the Fact-Checker Network will be “independent” but also “set up with
the Commission’s support.”572 Of course, Commission funding renders any level of
independence impossible—Network members will know that the Commission can turn off the
funding spigots at any time they cease to be politically useful. According to the Democracy
Shield plan, the Network will “create and maintain an independent repository for fact-checking
to consolidate fact-checks”—in other words, a database for the Commission’s party line.573 The
Network’s “repository,” operating as an arm of the Commission, will signal to platforms what
content can stay up and what must come down immediately, in addition to the fact-checkers’
pre-existing DSA power to make priority censorship requests.
ii. The EU Equality Strategy recently called for legislation that would define
“hate speech” across the EU.
The European Commission’s recently released five-year “LGBTIQ+ Equality Strategy”
is equally censorious. The proposal would “harmonize” EU Member-State definitions of “hate
567 Id.
568 Id.
569 Id.
570 Id. at § 2.1.
571 Id. at § 2.2.
572 Id.
573 Id. (emphasis omitted).
offences committed online,” criminalizing speech across the EU.574 Harmonization would
require every EU Member State to use the EU’s aggressive and political definition for illegal
“hate speech,” which includes conventional political discourse and “memes.”575 Under the DSA,
platforms must censor “illegal hate speech,”576 meaning that this push would require more
aggressive digital censorship across Europe—and likely worldwide.
Even more telling, the Commission seeks to bypass the regular democratic process to do
it. To initiate the “harmoniz[ation]” process, “hate speech” must first be added to the list of “EU
crimes” under the EU’s governing treaty.577 This step, which would require a vote by EU
legislative bodies, has repeatedly failed.578 Now, the Commission is forging ahead on its own,
arguing that “hate speech” is already a crime under the treaty because it falls under the umbrella
of existing offenses like “terrorism” and “computer crime.”579
iii. EU President von der Leyen’s regulatory “simplification” package did
not include meaningful DSA reforms.
President von der Leyen has pledged her second term to regulatory “simplification.”580
This “simplification” does not appear to include the EU’s byzantine digital speech regulations.
Von der Leyen recently released a “digital omnibus regulation proposal” to “optimize the
application of the digital rulebook.”581 Yet her proposal would leave Europe’s digital censorship
architecture, including the DSA, untouched.582 The Commission appears to remain as committed
as ever to global internet censorship.
iv. Other EU laws regulating tech companies provide additional ways for the
Commission to apply pressure on platforms.
The European Commission also imposes a severe regulatory burden on technology
companies through its Digital Markets Act (DMA), an EU competition law. The DMA, the
DSA’s sister legislation, imposes strict requirements on the design of internet services for large
platforms known as “gatekeepers.”583 The DMA’s qualitative standards for designating
gatekeepers can easily be abused, and the Commission has used them to target American
574 Union of Equality: LGBTIQ+ Equality Strategy 2026-2030, EUROPEAN COMM’N, COM(2025) 725 final at 6.
575 DSA Censorship Report I, supra note 3, at 27.
576 Digital Services Act, supra note 26, Art. 35.
577 Union of Equality: LGBTIQ+ Equality Strategy 2026-2030, EUROPEAN COMM’N, COM(2025) 725 final at 6
(emphasis omitted).
578 Id.
579 Id.; see TREATY ON THE FUNCTIONING OF THE EUROPEAN UNION, Art. 83(1).
580 See Simplification and Implementation, European Comm’n, https://commission.europa.eu/law/law-making-process/better-regulation/simplification-and-implementation_en (last accessed Jan. 9, 2026).
581 Digital Omnibus Regulation Proposal, EUROPEAN COMM’N (Nov. 19, 2025).
582 Mark MacCarthy & Kenneth Propp, The European Union Changes Course on Digital Legislation, LAWFARE
(Dec. 15, 2025) (“Conspicuously, none of the changes would affect the Digital Markets Act (DMA) or Digital
Services Act (DSA).”).
583 The Digital Markets Act: Ensuring Fair and Open Digital Markets, EUROPEAN COMM’N,
https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-markets-act-ensuring-fair-and-open-digital-markets_en (last visited Jan. 29, 2026).
companies. For example, Apple and Meta were fined a collective €700 million in 2025 for
alleged non-compliance with the DMA.584
Most concerningly, senior European officials have explicitly referenced the DMA and the
requirements on gatekeepers when describing how the European Commission can require
platforms to take certain measures to combat election misinformation.585
C. The European Commission seeks to export its censorship measures to other
countries.
In many ways, the Commission is leading a global effort for strict digital censorship laws.
Copycat bills have emerged in Australia, South Korea, and elsewhere around the world as
censorious foreign officials have taken to U.S. universities to plan their global censorship
regime.
Perhaps the most notable foreign attempt to imitate the Digital Services Act has been the
United Kingdom’s (UK) Online Safety Act (OSA). Passed in 2023, just a year after the DSA, the
OSA paves the way for Ofcom, the UK’s relevant regulatory authority, to regulate how social
media platforms “should deal with [so-called] disinformation and misinformation.”586 The
Committee’s oversight has previously shown that British regulators sought to censor legitimate
political speech criticizing the government, including “narratives” about a “two-tier” system of
justice in the UK, during large-scale riots in August 2024.587 British regulators have also used
the OSA to threaten American platforms with regulatory retaliation if they do not “embed
584 Press Release, European Comm’n, Commission finds Apple and Meta in breach of the Digital Markets Act
(Apr. 22, 2025), https://ec.europa.eu/commission/presscorner/detail/en/ip_25_1085.
585 Readout of “Protecting The 2024 Elections: From Alarm to Action” (March 8, 2024), see Ex. 244.
586 Online Safety Act 2023, c. 50, Ch. 7 § 152 (UK).
587 Rep. Jim Jordan (@Jim_Jordan), X (July 29, 2025, 9:30 PM),
https://x.com/Jim_Jordan/status/1950368307372020086.
[Callout: A senior European Commission official referred to the DMA when discussing tools at the European Commission’s disposal ahead of elections.]
[British] standards” on topics like “hate” speech into their content moderation policies.588 Now,
Prime Minister Keir Starmer, like European regulators, is threatening to take X offline in the UK
using the OSA’s authorities.589
The global copycat campaign has not been confined to Europe, either. In Brazil, a
proposed “fake news bill” explicitly drew inspiration from the DSA, with “twenty-five citations”
to the European law.590 While the “fake news bill” has not yet been adopted, Brazil’s Supreme
Court has led an aggressive, yearslong internet censorship campaign, issuing global removal
orders and even banning X for multiple months in 2024.591 In India, analysts have also drawn
comparisons between the proposed Digital India Act, which would subject platforms to “new
regulations with a heavy focus on fact-checking to prevent misinformation,”592 and the DSA.593
Beyond Brazil and India, the hallmarks of the DSA can be found in new digital laws in Australia,
Malaysia, and South Korea.594 Countries like South Korea and Brazil are also seeking to imitate
the DSA’s sister competition legislation, the Digital Markets Act, which imposes onerous,
innovation-killing requirements on American tech companies.595
In fact, the world’s censors have even gathered on American soil to compare notes. On
September 24, 2025, Stanford University held a roundtable event titled “Compliance and
Enforcement in a Rapidly Evolving Landscape.”596 This roundtable brought together foreign
officials who have architected the burgeoning global censorship regime and directly targeted
American speech.597 The keynote speaker at this event was Julie Inman-Grant, the Australian
eSafety Commissioner who has explicitly argued that governments have the authority to demand
and enforce global takedowns of content.598 Other attendees and panelists included officials from
some of the entities with the worst track records of extraterritorial censorship, including the
588 Rep. Jim Jordan (@Jim_Jordan), X (July 28, 2025, 10:58 AM),
https://x.com/Jim_Jordan/status/1949846809238446237.
589 Matthew Field et al., Musk’s X could be banned in Britain over AI chatbot row, THE TELEGRAPH (Jan. 8, 2026).
590 Thales Bueno & Renan Canaan, The Brussels Effect in Brazil: Analysing the impact of the EU Digital Services
Act on the discussion surrounding the Fake News Bill, 48 TELECOMMS. POLICY 102757 (2024).
591 STAFF OF THE H. COMM. ON THE JUDICIARY AND THE SELECT SUBCOMM. ON THE WEAPONIZATION OF THE FED.
GOV’T OF THE H. COMM. ON THE JUDICIARY, 118TH CONG., THE ATTACK ON FREE SPEECH ABROAD AND THE BIDEN
ADMINISTRATION’S SILENCE: THE CASE OF BRAZIL (Comm. Print Apr. 17, 2024); Julia Dias Carneiro & Juana
Summers, Brazil’s Supreme Court bans X, NPR (Sept. 2, 2024).
592 Kyle Chin & Kaushik Sen, What is the Digital India Act? India’s Newest Digital Law, UPGUARD (Jan. 7, 2025).
593 Aahil Sheikh, Transparency Must be a Cornerstone of the Digital India Act, TECH POLICY PRESS (Apr. 23,
2024).
594 Richard Sharpe et al., The Global Content Regulation Landscape—Developments in the EU, UK, U.S., and
Beyond, KING & SPALDING (July 10, 2025).
595 See Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Mr. Alexandre Rebelo Ferreira,
Secretariat for Economic Reforms, Brazil Ministry of Finance (Dec. 30, 2025); Letter from Rep. Jim Jordan,
Chairman, H. Comm. on the Judiciary, to Mr. Han Ki-jeong, Chairman, Korea Fair Trade Commission (July 24,
2025).
596 Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Dr. Jeff Hancock, Director, Stanford
Cyber Policy Center (Oct. 22, 2025); see Teddy Ganea et al., Stanford’s Cyber Policy Center Coordinates
International Internet Censorship, THE STANFORD REV. (Oct. 29, 2025).
597 Id.
598 Id.; see Tom Crowley, ‘Silly’ to demand global takedowns: Dutton weighs in on eSafety case, AUSTRALIAN
BROADCASTING CORP. (Apr. 25, 2024).
UK, the EU, and Brazil.599 These international coordination forums, where global censors share
legislative ideas and best censorship practices, represent an acute threat to American speech.
The European censorship threat shows no signs of abating. In December 2025, the
European Commission took its most aggressive censorship step to date, fining X nearly six
percent of its worldwide revenue in obvious retaliation for its protection of free speech around
the globe. The European Commission’s new legislative and regulatory proposals likewise
indicate that it is doubling down on its agenda of global online thought control. President von der
Leyen’s so-called Democracy Shield would add new layers on top of the existing European
censorship architecture, potentially ending anonymity on social media. Likewise, the European
Commission is trying to circumvent democratic processes to strong-arm every EU country into
adopting its expansive definition of “hate speech,” which would trigger additional censorship
obligations pursuant to the DSA. And Europe is trying to export these wrongheaded laws and
initiatives—sometimes even here in the United States.
VII. CONCLUSION
The Committee will continue its investigation into foreign censorship laws, regulations,
and judicial orders because of the risk they pose to American speech. The European
Commission’s extraterritorial actions under the Digital Services Act infringe on American
sovereignty and directly harm free speech in the United States. The
Committee will continue to conduct oversight to inform legislative solutions that defend against
and effectively counter this existential risk to a fundamental American right: the right to free
expression.
599 Letter from Rep. Jim Jordan, Chairman, H. Comm. on the Judiciary, to Dr. Jeff Hancock, Director, Stanford
Cyber Policy Center (Oct. 22, 2025); see Teddy Ganea et al., Stanford’s Cyber Policy Center Coordinates
International Internet Censorship, THE STANFORD REV. (Oct. 29, 2025).
APPENDIX
Table of Contents
Section I: Internal Platform Exhibits 161
Exhibit 1: TikTok’s September and October 2021 Report, EU Code of Practice on
Disinformation/COVID-19 (Nov. 1, 2021) 162
Exhibit 2: Internal emails among Google staff (June 22, 2023) 184
Exhibit 3: TikTok Slide Deck: Digital Services Act, Readiness overview for the European
Commission (July 17, 2023) 190
Exhibit 4: Letter from TikTok to European Commission (Aug. 11, 2023) 370
Exhibit 5: TikTok DSA Risk Assessment Guidelines (Aug. 25, 2023) 373
Exhibit 6: TikTok Response to Commission RFI (Nov. 4, 2023) 395
Exhibit 7: TikTok Response to Commission RFI (Nov. 17, 2023) 435
Exhibit 8: TikTok Community Guidelines Update Executive Summary (Mar. 20, 2024)
457
Exhibit 9: Letter from TikTok to European Commission (Apr. 5, 2024) 468
Exhibit 10: Readout of meeting between TikTok staff and European Commission Vice
President Vera Jourova (May 28, 2024) 495
Exhibit 11: Disinformation Code Subscription Document for TikTok (Jan. 17, 2025) 499
Exhibit 12: TikTok White Paper: The Digital Services Act 537
Exhibit 13: TikTok Suggested Update Tracking Spreadsheet 544
Exhibit 14: TikTok Suggested Update Tracking Spreadsheet 546
Exhibit 15: TikTok Community Guidelines Survey 548
Exhibit 16: Policy Accuracy and Transparency & Community Guidelines Accessibility
552
Exhibit 17: TikTok Community Guideline Update Slide Deck 554
Section II: External Platform Exhibits 563
Exhibit 18: Emails between Spotify staff and European Commission staff (June 7, 2020)
564
Exhibit 19: Readout of meeting between TikTok and European Commission Vice President
Vera Jourova (Apr. 20, 2021) 572
Exhibit 20: Readout of meeting between TikTok staff and Cabinet of Commissioner Thierry
Breton (Sep. 30, 2021) 574
Exhibit 21: Emails between TikTok staff and European Commission staff (Mar. 14, 2022)
576
Exhibit 22: Readout of meeting between Spotify and European Commission Vice President
Vera Jourova (Sep. 7, 2022) 579
Exhibit 23: Emails between TikTok staff and European Commission staff (Nov. 9, 2022)
583
Exhibit 24: Emails between TikTok staff and European Commission staff (Jan. 4, 2023)
594
Exhibit 25: Readout of meeting between TikTok and European Commission Vice President
Vera Jourova (Jan. 10, 2023) 613
Exhibit 26: Email from YouTube staff to European Commission staff (Mar. 19, 2024)
616
Exhibit 27: Emails between TikTok staff and European Commission staff (May 28, 2024)
618
Section III: EU Internet Forum Exhibits 647
Exhibit 28: Agenda for EU Internet Forum Ministerial Meeting (Dec. 8, 2021) 648
Exhibit 29: Emails between TikTok staff and European Commission staff (Mar. 24, 2022)
651
Exhibit 30: European Commission Slide Deck: Digital Services Act & Algorithmic
amplification (Sep. 29, 2022) 655
Exhibit 31: Institute for Strategic Dialogue Slide Deck: Algorithmic Amplification,
Borderline Content and Manipulative Techniques (Sep. 29, 2022) 672
Exhibit 32: Email from European Commission staff to EU Internet Forum participants (Oct.
11, 2022) 683
Exhibit 33: Agenda for EU Internet Forum Ministerial Meeting (Dec. 7, 2022) 688
Exhibit 34: Email from European Commission staff to EU Internet Forum participants (Jan.
9, 2023) 691
Exhibit 35: Emails between TikTok staff and European Commission staff (Sep. 2, 2024)
697
Exhibit 36: Emails between TikTok staff and European Commission staff (Oct. 15, 2024)
702
Exhibit 37: EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic
Amplification to Spread Terrorist, Violent Extremist and Borderline Content 710
Exhibit 38: EU Internet Forum: The Handbook of Borderline Content in Relation to Violent
Extremism 830
Exhibit 39: TrustLab Slide Deck: Study on the Role and Effects of the Use of Algorithmic
Amplification to Spread Terrorist, Violent Extremist and Borderline Content across leading
Social Media sites in Europe 903
Exhibit 40: EU Internet Forum: Study on the Role and Effects of the Use of Algorithmic
Amplification to Spread Terrorist, Violent Extremist and Borderline Content 943
Exhibit 41: TikTok response to EU Internet Forum Questionnaire – Algorithmic
Amplification 1063
Section IV: Hate Speech Code Exhibits 1070
Exhibit 42: European Commission Slide Deck: The ninth EU High level group on
combatting racism, xenophobia and other forms of intolerance (July 7, 2021) 1071
Exhibit 43: Draft Code of Conduct (+) on Countering Illegal Hate Speech Online (Mar. 18,
2023) 1202
Exhibit 44: European Commission Question and answers on the Code revision (Mar. 27,
2023) 1207
Exhibit 45: Draft Code of Conduct (+) on Countering Illegal Hate Speech Online 1209
Section V: Disinformation Code Exhibits 1214
Exhibit 46: Email from European Commission staff to TikTok (Apr. 1, 2020) 1215
Exhibit 47: Letter from European Commission to TikTok (July 22, 2020) 1217
Exhibit 48: Emails between TikTok staff and European Commission staff (Oct. 30, 2020)
1222
Exhibit 49: TikTok Input to European Commission Request on Covid-19 Vaccination
Disinformation (Nov. 4, 2020) 1226
Exhibit 50: Emails between TikTok staff and European Commission staff (Nov. 9, 2020)
1231
Exhibit 51: Readout of meeting between TikTok and Staff to European Commission Vice
President Vera Jourova (Nov. 6, 2020) 1237
Exhibit 52: Emails between TikTok staff and European Commission staff (Jan. 19, 2021)
1239
Exhibit 53: Readout of meeting between European Commission Vice President Vera
Jourova and multiple platforms (Feb. 22, 2021) 1248
Exhibit 54: Emails between TikTok staff and European Commission staff (Apr. 13, 2021)
1251
Exhibit 55: Readout of meeting between TikTok and European Commission Vice President
Vera Jourova (Apr. 20, 2021) 1275
Exhibit 56: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Oct. 14, 2021) 1278
Exhibit 57: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Oct. 14, 2021) 1282
Exhibit 58: Emails between TikTok staff and European Commission staff (Nov. 5, 2021)
1286
Exhibit 59: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Dec. 8, 2021) 1290
Exhibit 60: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Jan. 27, 2022) 1295
Exhibit 61: New Code of Practice on Disinformation: Minutes of the 4th General Assembly
of the Signatories (Jan. 28, 2022) 1297
Exhibit 62: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 7, 2022) 1300
Exhibit 63: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 16, 2022) 1304
Exhibit 64: Letter from Commissioner Thierry Breton and Vice President Vera Jourova to
TikTok (Mar. 3, 2022) 1308
Exhibit 65: Emails between TikTok staff and European Commission staff (Mar. 7, 2022)
1311
Exhibit 66: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Mar. 10, 2022) 1315
Exhibit 67: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Mar. 30, 2022) 1318
Exhibit 68: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Apr. 4, 2022) 1320
Exhibit 69: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Apr. 19, 2022) 1324
Exhibit 70: New Code of Practice on Disinformation: Minutes of the negotiation session
(Apr. 19, 2022) 1328
Exhibit 71: Meeting Agenda from European Commission staff to Code of Practice on
Disinformation Signatories (Apr. 27, 2022) 1331
Exhibit 72: Meeting Agenda from European Commission staff to Code of Practice on
Disinformation Signatories (May 4, 2022) 1333
Exhibit 73: Meeting Agenda from European Commission staff to Code of Practice on
Disinformation Signatories (May 11, 2022) 1335
Exhibit 74: Meeting Agenda from European Commission staff to Code of Practice on
Disinformation Signatories (May 18, 2022) 1337
Exhibit 75: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 24, 2022) 1339
Exhibit 76: Meeting Agenda from European Commission staff to Code of Practice on
Disinformation Signatories (June 1, 2022) 1343
Exhibit 77: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 3, 2022) 1345
Exhibit 78: Meeting Agenda from European Commission staff to Code of Practice on
Disinformation Signatories (June 23, 2022) 1349
Exhibit 79: Email from European Commission staff to Code of Practice on Disinformation
Signatories (June 27, 2022) 1351
Exhibit 80: Agenda: Second Meeting of the Code of Practice’s Permanent Task-Force (July
5, 2022) 1354
Exhibit 81: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Aug. 10, 2022) 1356
Exhibit 82: Meeting invitation from European Commission Staff to Code of Practice on
Disinformation Signatories (Aug. 24, 2022) 1358
Exhibit 83: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Sep. 1, 2022) 1361
Exhibit 84: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 2, 2022) 1363
Exhibit 85: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 14, 2022) 1366
Exhibit 86: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 19, 2022) 1370
Exhibit 87: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Sep. 20, 2022) 1373
Exhibit 88: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Oct. 4, 2022) 1375
Exhibit 89: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 19, 2022) 1378
Exhibit 90: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Nov. 1, 2022) 1381
Exhibit 91: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Nov. 15, 2022) 1384
Exhibit 92: Crisis Response Subgroup: Written input on planned actions to reduce Ukraine
related disinformation in Central and Eastern Europe (Nov. 28, 2022) 1388
Exhibit 93: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Nov. 30, 2022) 1417
Exhibit 94: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Dec. 2, 2022) 1420
Exhibit 95: Agenda: Fourth Meeting of the Code of Practice’s Permanent Task-Force (Dec.
6, 2022) 1423
Exhibit 96: Emails between European Commission staff and Code of Practice on
Disinformation Signatories (Dec. 8, 2022) 1426
Exhibit 97: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Dec. 8, 2022) 1432
Exhibit 98: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Dec. 9, 2022) 1435
Exhibit 99: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Dec. 13, 2022) 1440
Exhibit 100: Emails between TikTok staff and European Commission staff (Dec. 14, 2022)
1443
Exhibit 101: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Dec. 14, 2022) 1448
Exhibit 102: Letter from Commissioner Thierry Breton and Vice President Vera Jourova to
TikTok (Dec. 22, 2022) 1452
Exhibit 103: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Jan. 11, 2023) 1455
Exhibit 104: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Jan. 30, 2023) 1458
Exhibit 105: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 13, 2023) 1461
Exhibit 106: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 30, 2023) 1464
Exhibit 107: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 8, 2023) 1467
Exhibit 108: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 27, 2023) 1470
Exhibit 109: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 16, 2023) 1473
Exhibit 110: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 22, 2023) 1476
Exhibit 111: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Mar. 8, 2023) 1479
Exhibit 112: Disinformation Code Subscription Document for Signatory ActiveFence (Mar.
22, 2023) 1482
Exhibit 113: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Mar. 24, 2023) 1488
Exhibit 114: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Mar. 27, 2023) 1491
Exhibit 115: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Mar. 30, 2023) 1496
Exhibit 116: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Apr. 19, 2023) 1499
Exhibit 117: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Apr. 26, 2023) 1502
Exhibit 118: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 10, 2023) 1505
Exhibit 119: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 5, 2023) 1508
Exhibit 120: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 16, 2023) 1511
Exhibit 121: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 16, 2023) 1514
Exhibit 122: Agenda: Fifth plenary meeting of the Code of Practice’s permanent Task-Force
(June 5, 2023) 1517
Exhibit 123: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 6, 2023) 1520
Exhibit 124: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 13, 2023) 1524
Exhibit 125: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 23, 2023) 1527
Exhibit 126: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 28, 2023) 1531
Exhibit 127: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (July 3, 2023) 1534
Exhibit 128: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (July 13, 2023) 1537
Exhibit 129: Email from European Commission staff to Code of Practice on Disinformation
Signatories (July 25, 2023) 1540
Exhibit 130: Slovak Interior Ministry, Analysis of harmful content on Slovak language
TikTok (Sep. 2023) 1542
Exhibit 131: Emails among European Commission staff and Code of Practice on
Disinformation Signatories (July 20, 2023) 1545
Exhibit 132: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Sep. 6, 2023) 1551
Exhibit 133: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 6, 2023) 1554
Exhibit 134: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 2, 2023) 1560
Exhibit 135: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 22, 2023) 1563
Exhibit 136: Email from TikTok staff to European Commission staff (Sep. 26, 2023) 1566
Exhibit 137: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 2, 2023) 1568
Exhibit 138: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 3, 2023) 1571
Exhibit 139: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 3, 2023) 1575
Exhibit 140: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 9, 2023) 1578
Exhibit 141: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 13, 2023) 1581
Exhibit 142: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 18, 2023) 1584
Exhibit 143: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 27, 2023) 1587
Exhibit 144: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Nov. 6, 2023) 1590
Exhibit 145: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Nov. 6, 2023) 1593
Exhibit 146: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Nov. 17, 2023) 1596
Exhibit 147: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Nov. 29, 2023) 1599
Exhibit 148: Emails between European Commission staff and Code of Practice on
Disinformation Signatories (Dec. 5, 2023) 1603
Exhibit 149: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Dec. 18, 2023) 1609
Exhibit 150: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Jan. 24, 2024) 1611
Exhibit 151: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Jan. 24, 2024) 1615
Exhibit 152: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 22, 2024) 1619
Exhibit 153: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Feb. 22, 2024) 1621
Exhibit 154: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Mar. 1, 2024) 1623
Exhibit 155: Email from Google staff to European Commission staff (Mar. 19, 2024) 1626
Exhibit 156: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Apr. 17, 2024) 1628
Exhibit 157: Emails between European Commission staff and Code of Practice on
Disinformation Signatories (May 2, 2024) 1631
Exhibit 158: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 15, 2024) 1634
Exhibit 159: Emails between European Commission staff and Code of Practice on
Disinformation Signatories (May 23, 2024) 1637
Exhibit 160: Emails between European Commission staff and TikTok staff (May 23, 2024)
1652
Exhibit 161: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (May 23, 2024) 1662
Exhibit 162: Emails between European Commission staff and some Code of Practice on
Disinformation Signatories (May 29, 2024) 1672
Exhibit 163: Emails between European Commission staff and some Code of Practice on
Disinformation Signatories (May 30, 2024) 1692
Exhibit 164: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 6, 2024) 1711
Exhibit 165: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 12, 2024) 1714
Exhibit 166: Email from Meta staff to European Commission staff (July 10, 2024) 1717
Exhibit 167: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (July 10, 2024) 1722
Exhibit 168: Email from European Commission staff to Code of Practice on Disinformation
Signatories (July 17, 2024) 1725
Exhibit 169: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Aug. 7, 2024) 1728
Exhibit 170: Email from European Commission staff to Code of Practice on Disinformation
Signatories (Sep. 4, 2024) 1731
Exhibit 171: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 4, 2024) 1733
Exhibit 172: Emails between European Commission staff and Google staff (Sep. 9, 2024)
1736
Exhibit 173: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Sep. 16, 2024) 1738
Exhibit 174: Summary of the Roundtable on Auditing Commitments of the Code of Practice
on Disinformation (Sep. 30, 2024) 1745
Exhibit 175: Internal emails among Google staff (Oct. 2, 2024) 1751
Exhibit 176: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Oct. 2, 2024) 1756
Exhibit 177: Internal emails among Google staff (Oct. 7, 2024) 1759
Exhibit 178: Emails between Google staff and European third parties (Oct. 8, 2024) 1768
Exhibit 179: Event agenda: Disinformation: how new communication technologies are
influencing elections and the information landscape (Oct. 10, 2024) 1776
Exhibit 180: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Dec. 13, 2024) 1779
Exhibit 181: Emails among TikTok, the European Commission, and third parties (Dec. 12,
2024) 1783
Exhibit 182: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Dec. 16, 2024) 1786
Exhibit 183: Emails among TikTok, the European Commission, and third parties (Dec. 19,
2024) 1791
Exhibit 184: Emails among TikTok, the European Commission, and third parties (Dec. 19,
2024) 1793
Exhibit 185: Emails among TikTok, the European Commission, and third parties (Dec. 20,
2024) 1796
Exhibit 186: Emails among TikTok, the European Commission, and third parties (Dec. 20,
2024) 1799
Exhibit 187: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (Dec. 16, 2024) 1802
Exhibit 188: New SLI on demonetisation efforts, capturing the financial value (Euros) of
actions taken 1810
Exhibit 189: Agenda: First Meeting of the Permanent Task-Force Crisis Response Subgroup
(Aug. 10, 2022) 1815
Exhibit 190: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Sep.
21, 2022) 1817
Exhibit 191: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Sep.
7, 2022) 1819
Exhibit 192: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup
(Apr. 19, 2023) 1821
Exhibit 193: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Feb.
22, 2023) 1823
Exhibit 194: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Feb.
8, 2023) 1825
Exhibit 195: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Jan.
11, 2023) 1827
Exhibit 196: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup
(Dec. 14, 2022) 1829
Exhibit 197: Draft EU Code of Practice on Disinformation 1831
Exhibit 198: List of problematic accounts on Slovak TikTok 1837
Exhibit 199: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup
(Mar. 8, 2023) 1880
Exhibit 200: Draft form for Reporting on the service’s response during the
[COVID/Ukraine] crisis 1882
Exhibit 201: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup
(Nov. 30, 2022) 1897
Exhibit 202: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Oct.
19, 2022) 1899
Exhibit 203: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup (Oct.
5, 2022) 1901
Section VI: DSA Workshop Exhibits 1903
Exhibit 204: Agenda for DSA multi-stakeholder workshop on systemic risks (May 7, 2025)
1904
Exhibit 205: Detailed Agenda: DSA Multi-Stakeholder Workshop on Systemic Risks and
their Mitigation (May 7, 2025) 1907
Exhibit 206: Agenda: Meeting of the Permanent Task-Force Crisis Response Subgroup
(Mar. 8, 2023) 1927
Exhibit 207: Protection of Your Personal Data (May 7, 2025) 1932
Exhibit 208: Invitation to the DSA Multi-stakeholder workshop on Systemic Risks (Apr. 25,
2025) 1938
Exhibit 209: Readout of DSA Risk Assessment Roundtable (July 2, 2025) 1944
Exhibit 210: Email thanking participants for attending the DSA Multi-Stakeholder
Workshop on Systemic Risks (May 15, 2025) 1949
Section VII: Elections Exhibits 1952
Exhibit 211: TikTok internal Slovak Election Summary Report 2023 1953
Exhibit 212: Meeting invitation from European Commission staff to the members of the
Working Group on Elections (Apr. 27, 2023) 1961
Exhibit 213: Meeting agenda from European Commission staff to members of the Working
Group on Elections (June 22, 2023) 1964
Exhibit 214: Slovak Council for Media Services Slide Deck: Snap Elections in Slovakia
(July 2023) 1966
Exhibit 215: Meeting invitation from European Commission staff to the members of the
Working Group on Elections (July 18, 2023) 1975
Exhibit 216: Meeting invitation from European Commission staff to the members of the
Working Group on Elections (Sep. 5, 2023) 1978
Exhibit 217: Meeting invitation from European Commission staff to the members of the
Working Group on Elections (Sep. 15, 2023) 1981
Exhibit 218: Letter from TikTok to the European Commission (Sep. 2023) 1984
Exhibit 219: Emails from European Commission staff to members of the Working Group on
Elections (Sep. 1, 2023) 1989
Exhibit 220: Emails from European Commission staff to members of the Working Group on
Elections (Sep. 1, 2023) 1993
Exhibit 221: Letter from the European Commission to TikTok (Sep. 5, 2023) 1997
Exhibit 222: TikTok internal Content Moderation Guidelines for 2023 Polish Election (Sep.
10, 2023) 2001
Exhibit 223: Meeting invitation from European Commission staff to the members of the
Working Group on Elections (Sep. 20, 2023) 2020
Exhibit 224: TikTok internal Content Moderation Guidelines for 2023 Slovak Election (Sep.
22, 2023) 2022
Exhibit 225: Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Sep. 25, 2023) 2069
Exhibit 226: Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Oct. 6, 2023) 2072
Exhibit 227: Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Oct. 13, 2023) 2075
Exhibit 228: Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Oct. 23, 2023) 2078
Exhibit 229: Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Oct. 27, 2023) 2081
Exhibit 230: Emails between TikTok staff and European Commission staff (Nov. 6, 2023)
2084
Exhibit 231: Non-Paper: TikTok’s approach to election preparedness across the EU (Nov.
8, 2023) 2089
Exhibit 232: Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Nov. 14, 2023) 2100
Exhibit 233: Meeting invitation from European Commission staff to the members of the
Working Group on Elections (Nov. 30, 2023) 2103
Exhibit 234: Meeting invitation from European Commission staff to the Crisis/Elections
Steering Committee (Nov. 30, 2023) 2106
Exhibit 235: Meeting invitation from European Commission staff to the members of the
Working Group on Elections (Dec. 20, 2023) 2109
Exhibit 236: Romanian Information Service: Note No. 2 to the Romanian Supreme Council
for National Defense (2024) 2112
Exhibit 237: Romanian Information Service: Note No. 1 to the Romanian Supreme Council
for National Defense (2024) 2116
Exhibit 238: Romanian Ministry of Internal Affairs: Information Note (2024) 2122
Exhibit 239: Emails between European Commission staff and Google staff regarding the
Brussels to the Bay Event (Jan. 31, 2024) 2126
Exhibit 240: Emails between Google staff and European Commission staff (Feb. 1, 2024)
2130
Exhibit 241: Internal emails among Meta staff (Feb. 26, 2024) 2135
Exhibit 242: Protecting the 2024 Elections: From Alarm to Action (Feb. 29, 2024) 2138
Exhibit 243: Internal Meta readout of Roundtable on DSA Elections Guidelines (Mar. 1,
2024) 2141
Exhibit 244: Readout of “Protecting The 2024 Elections: From Alarm to Action” (Mar. 8,
2024) 2147
Exhibit 245: Minutes of Election Readiness DSA Roundtables (June 4, 2024) 2157
Exhibit 246: Meeting invitation from European Commission staff to Code of Practice on
Disinformation Signatories (June 21, 2024) 2160
Exhibit 247: Agenda for a European Commission meeting on the French parliamentary
elections (June 21, 2024) 2165
Exhibit 248: Emails from European Commission staff to members of the Code of Practice
Task Force (June 24, 2024) 2168
Exhibit 249: Invitation from the European Commission to a roundtable on election readiness
(July 3, 2024) 2175
Exhibit 250: Readout from the third European Commission roundtable on parliamentary
elections (July 10, 2024) 2178
Exhibit 251: Agenda for the 11th Meeting of the EU Support Hub for International Security
and Border Management in Moldova on “Countering Foreign Information Manipulation and
Interference” (Sep. 18, 2024) 2181
Exhibit 252: Emails from European Commission staff to platforms (Sep. 24, 2024) 2188
Exhibit 253: TikTok 2024 European Parliament Elections Confidential Report (Sep. 24,
2024) 2191
Exhibit 254: Agenda: Eighth Plenary Meeting of the Code of Practice’s Permanent Task-
Force (Oct. 1, 2024) 2206
Exhibit 255: Emails between European Commission staff and TikTok staff (Oct. 23, 2024)
2210
Exhibit 256: Emails between Meta staff and Irish regulators (Nov. 7, 2024) 2213
Exhibit 257: Emails between Meta staff and Irish regulators (Nov. 18, 2024) 2218
Exhibit 258: Invitation to Election Roundtable on Romanian Elections (Nov. 28, 2024)
2224
Exhibit 259: TikTok slide deck: Romanian Elections Platform Integrity Briefing (Nov. 28,
2024) 2226
Exhibit 260: External Information Service Note: Analysis of national security risks
generated by the actions of state and non-state cyber actors on IT&C infrastructures, support
for the electoral process (Nov. 28, 2024) 2236
Exhibit 261: Emails from the European Commission to TikTok and Romanian regulators
(Nov. 28, 2024) 2240
Exhibit 262: Discussion Questions for Election Roundtable on Romanian Elections (Nov.
29, 2024) 2247
Exhibit 263: Emails between European Commission staff and platforms (Nov. 29, 2024)
2249
Exhibit 264: Emails between Romanian regulators, European Commission staff, and TikTok
staff (Nov. 28, 2024) 2254
Exhibit 265: Letter from TikTok to the European Commission (Nov. 29, 2024) 2264
Exhibit 266: TikTok Response to Commission RFI (Dec. 7, 2024) 2278
Exhibit 267: TikTok Report on Romanian content (Dec. 12, 2024) 2287
Exhibit 268: TikTok Response to Commission RFI (Dec. 13, 2024) 2289
Exhibit 269: Emails among TikTok, the European Commission, and third parties (Dec. 17,
2024) 2324
Exhibit 270: Emails among TikTok, the European Commission, and third parties (Dec. 20,
2024) 2332
Exhibit 271: Emails between European Commission staff and TikTok staff (Feb. 3, 2025)
2335
Exhibit 272: Emails between European Commission staff and TikTok staff (Feb. 12, 2025)
2338
Exhibit 273: Internal email among Meta staff (Feb. 24, 2025) 2348
Exhibit 274: Draft agenda for meeting between TikTok and Romanian regulators (Feb. 27,
2025) 2350
Exhibit 275: Agenda for Roundtable – 2025 Romanian Presidential Elections (Mar. 3, 2025)
2352
Exhibit 276: Email from Dutch regulators to platforms (Sep. 3, 2025) 2354
Exhibit 277: Agenda for Roundtable on Elections in the Context of the Digital Services Act
(Sep. 15, 2025) 2358
Exhibit 278: Readout of NL Elections Roundtable (Sep. 15, 2025) 2360
Exhibit 279: Readout of Coimisiun na Mean Irish Presidential Election Roundtable (Sep. 24,
2025) 2364
Exhibit 280: Readout of DSA Presidential Election Roundtable (Sep. 24, 2025) 2377
Exhibit 281: Coimisiun na Mean Questionnaire on DSA Risk Assessment for Irish Elections
2381
Exhibit 282: Spreadsheet of content about the Romanian election flagged for TikTok 2384
Exhibit 283: Spreadsheet of content about the Romanian election flagged for TikTok 2400
Exhibit 284: Spreadsheet of content about the Romanian election flagged for TikTok 2404
Exhibit 285: Spreadsheet of content about the Romanian election flagged for TikTok 2408
Section VIII: Other Exhibits 2415
Exhibit 286: Radicalization Awareness Network, Malign Use of Algorithmic Amplification
of Terrorist and Violent Extremist Content: Risks and Countermeasures in Place (2021)
2416
Exhibit 287: RAN Policy Support, Violent Extremism and Terrorism Online in 2021: The
Year in Review (2021) 2454
Exhibit 288: Rumble Inc.’s Response to an Order to Produce Records from British
Columbia’s Office of Human Rights (Aug. 31, 2022) 2504
Exhibit 289: Annex: Information on the Risk Assessment Reports (Aug. 11, 2023) 2515
Exhibit 290: Commission RFI to TikTok (Oct. 19, 2023) 2519
Exhibit 291: Commission RFI to TikTok (Mar. 14, 2024) 2541
Exhibit 292: Emails from Global Network Initiative staff regarding a panel discussion on
EU and Brazilian technology regulation (May 27, 2024) 2554
Exhibit 293: Event Concept Note: Algorithmic Risk Assessments, Audits, and Fundamental
Rights (May 30, 2024) 2558
Exhibit 294: TikTok’s DSA Assessment Report 2024 (Aug. 28, 2024) 2561
Exhibit 295: TikTok Response to Commission RFI (Oct. 2, 2024) 2683
Exhibit 296: Commission RFI to TikTok (Oct. 2, 2024) 2732
Exhibit 297: TikTok Response to Commission RFI: Schedule of Annexes (Nov. 29, 2024)
2748
Exhibit 298: Bing Systemic Risk Assessment (Aug. 2025) 2759
Exhibit 299: Emails between Stanford University staff and European Commission staff
(Aug. 23, 2025) 2892
Exhibit 300: Overview of Stanford Dialogue Event: Compliance and Enforcement in a
Rapidly Evolving Landscape (Sep. 24, 2025) 2896
Exhibit 301: List of participants in DSA Stakeholder Event (May 12, 2023) 2901
Exhibit 302: Commission Decision of 5.12.2025 pursuant to Articles 73(1), 73(3) and 74(1)
of Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October
2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital
Services Act); Case DSA.100101, DSA.100102 and DSA.100103 – X (formerly Twitter),
C(2025) 8630 final 2904
Addendum 4 (19/02/26)
Insights into the EU censorship complex (Part 3): Romania's presidential elections were manipulated, but not by Russia! At the end of 2024, shortly after the Romanian elections, the elected president Calin #Georgescu was summarily removed and the election annulled on the grounds of alleged Russian interference, with active support from the EU. Georgescu himself declared at the time: "The EU is now a dictatorship. Tyranny reigns in #Romania!" The US Judiciary Committee has examined closely what actually happened. Its finding: "new, non-public documents cast doubt on the allegations of Russian interference. (…) TikTok informed the European Commission that it had found 'no evidence' of a coordinated Russian campaign in support of the winning candidate Calin Georgescu." The bombshell: in fact, "the supposed Russian TikTok campaign was financed by a different Romanian party."

