In the run-up to the midterm elections, the largest platforms changed the rules of the conversation in ways that hit conservative publishers the hardest. Meta said it had deployed “AI-driven integrity systems.” YouTube rolled out an automated “election misinformation filter” that flags political videos for “limited visibility.” X introduced algorithmic throttling for what it calls “borderline content.” To people in conservative media, the pattern is familiar: liberal-leaning executives steer moderation, and conservative topics lose reach at the exact moment voters are paying attention.
How We Got Here
After January 6, 2021, the platforms suspended thousands of accounts they said spread election lies and removed posts that glorified the attack. CNN reported that the companies later “pivoted from many of the commitments, policies and tools” they had once embraced. Researchers said trust and safety teams were cut and monitoring access was restricted. The result is fewer outside checks, more automated rules, and a large gray area where legal political speech can quietly sink.
David Karpf, a media scholar, described the business logic behind the swing. “The platforms only ever took this as seriously as they felt like they needed to,” he said. He added that if you want “serious trust and safety,” it must be demanded through regulation or because it helps “the bottom line in the near term.”
Conservatives point to specific changes that matter during election season. Meta’s new AI can downrank posts without public explanation. YouTube’s election filter places videos in a limited-visibility bucket where fewer people will see them. X says it reduces distribution for borderline content. The temporary suspensions of pro-life accounts such as LifeNews are part of the same story. So is the retirement of CrowdTangle, which once helped election officials see voter suppression in real time. A Columbia Journalism Review analysis found the replacement “had fewer features and was less accessible.” When a platform can both police speech and hide the evidence of how speech travels, conservatives see censorship by design.
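To see why a quiet multiplier matters, consider a deliberately toy sketch. Nothing below is any platform’s actual code; the audience size, base reach, and 0.5 penalty are invented for illustration. The point is that a single opaque factor, applied before a post ever reaches a feed, cuts impressions without leaving a visible trace.

```python
import random

# Purely hypothetical illustration -- not any platform's real ranking code.
# An opaque "visibility" multiplier scales the chance that a post surfaces
# in each user's feed.

random.seed(42)  # reproducible toy run

def impressions(n_users: int, base_reach: float, visibility: float) -> int:
    """Count users whose feeds surface the post.

    base_reach : probability the post surfaces with no penalty (invented).
    visibility : multiplier applied by a hypothetical integrity classifier
                 (1.0 = untouched, 0.5 = "limited visibility").
    """
    p = base_reach * visibility
    return sum(random.random() < p for _ in range(n_users))

audience = 1_000_000  # invented audience size
normal = impressions(audience, base_reach=0.08, visibility=1.0)
limited = impressions(audience, base_reach=0.08, visibility=0.5)

print(f"untouched post: {normal:,} impressions")
print(f"'limited' post: {limited:,} impressions")
print(f"reach lost:     {1 - limited / normal:.0%}")
```

On these made-up numbers the penalized post loses roughly half its reach, but the publisher sees only a drop in engagement. Without access to the multiplier itself, the decline is indistinguishable from an audience losing interest.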
Doctorow’s Chokepoint Thesis
Writer Cory Doctorow calls Big Tech a set of “chokepoints.” Centralized platforms have “one throat to choke.” If governments or powerful actors want moderation changed, they pressure a few executives. He argues that decentralized systems like Mastodon and Bluesky reduce that power because users can move to other servers and keep speaking. He warns that policies such as age verification are “tailor made” for Big Tech and hard for federated networks to implement, which further concentrates power. His message to anyone worried about deplatforming is blunt. Build and use open protocols, not giant platforms. In his words, there should be “protocols, not platforms,” so there is no single switch that can mute an entire movement.
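The “protocols, not platforms” idea can be made concrete with a small sketch. The snippet below uses WebFinger (RFC 7033), the open discovery standard underlying Mastodon and the rest of the fediverse, to resolve an account straight from its home server. The handle is a placeholder, and the code is a minimal illustration rather than a production client.

```python
import json
import urllib.parse
import urllib.request

def resolve(handle: str) -> dict:
    """Resolve a user@domain handle to its WebFinger document."""
    user, domain = handle.lstrip("@").split("@")
    query = urllib.parse.urlencode({"resource": f"acct:{user}@{domain}"})
    url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Placeholder handle -- substitute any real fediverse account.
doc = resolve("@someuser@mastodon.social")

# The "self" link points at the ActivityPub actor document, which any
# compliant server or client can fetch and follow.
actor = next(
    (link["href"] for link in doc.get("links", []) if link.get("rel") == "self"),
    None,
)
print("actor document:", actor)
```

Because any server that speaks the protocol can answer this lookup, there is no central directory to pressure. An account banned from one server can move to another domain, and followers simply resolve the new address.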
Newsrooms and activists on the right see the same pattern across companies. Meta “upgraded” moderation and reach dipped for sensitive topics. YouTube flagged election-related videos and restricted their visibility. X throttled content that does not break rules but triggers internal labels. At the same time, researchers lost access to platform data. X put its firehose behind “outrageously expensive” fees, while Meta took down CrowdTangle. These moves make it harder to show when suppression occurs, even as it becomes easier to apply quiet penalties.
Robby Starbuck, a conservative influencer who sued Meta over false claims he says were distributed by its AI, warned about the stakes. “You can very easily imagine a scenario where you shift an election by a couple percentage points,” he said, especially if young users believe “everything AI tells them.”
What Has Been Tried in the States and the Courts
Republican-led states passed laws to curb viewpoint discrimination. Texas’s House Bill 20 lets users sue large platforms if they believe they were banned for their political views. Ryan Baasch, a Texas litigator and now a White House economic aide, defended the law. “These social media platforms control the modern-day public square, but they abusively suppress speech in that square,” he told the Fifth Circuit.
The laws face tough First Amendment questions. Attorney Jenin Younes explained the bind. “The Supreme Court said that the social media platforms have, sort of, First Amendment rights as speakers, and so they have the right to censor,” she said. She personally favors “more speech” over removals, but added that “the companies are entitled to do what they want.”
Republican officials also challenged what they call government jawboning. In Murthy v. Missouri, they argued that federal officials pressured platforms to remove content about Covid-19 and elections. The Supreme Court turned the challengers away on standing grounds, signaling skepticism that the First Amendment bars the White House from warning companies about threats, and in the separate NetChoice cases it sent the Texas and Florida laws back to the lower courts, leaving them blocked for now. Neither ruling resolved the larger dispute, and that uncertainty keeps the fight alive as campaigns heat up.
House and Senate Republicans increased the political cost of moderation they view as biased. Chairman Jim Jordan sent subpoenas to Big Tech and pushed for documents on content decisions. Former Twitter staff were grilled over limiting the New York Post’s Hunter Biden story in 2020. In the Senate, Chairman Ted Cruz held a hearing titled “Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans.” He said his report showed that CISA “pressured Big Tech companies to censor Americans that held views different than the Biden administration.” His warning was stark. “No free society can thrive when government censors lawful speech and sets itself up as the arbiter of truth.”
Personnel Signals and the FTC’s Role
Personnel choices tell their own story. President Trump was expected to nominate Ryan Baasch to the Federal Trade Commission. Texas Attorney General Ken Paxton praised Baasch for victories against “Big Tech censorship.” Inside the FTC, Commissioner Melissa Holyoak said addressing unfair censorship is a priority. “Big Tech has become part of the public modern square,” she said. The agency collected nearly 3,000 public comments on platform censorship to guide possible enforcement and inform Congress.
Holyoak framed two core questions: Are platforms misleading users about how their rules are applied? Does dominance let them degrade product quality through skewed moderation? “Ensuring that massive social media platforms allow both the access and the opportunity for people to be able to share their ideas is critical to our country,” she said.
Pushback From Pro-Moderation and Antitrust Skeptics
Not everyone agrees that federal action should police moderation. The International Center for Law and Economics and the Cato Institute warned that using antitrust to punish content rules would politicize competition policy. Cato’s Jennifer Huddleston asked whether the push is “actually about concerns related to market behavior” or “about animosity towards tech companies.” She argued that expanding antitrust beyond competition could let any administration intervene in many markets. She also warned that breakups could make censorship worse if smaller firms have fewer resources or adopt narrower policies to avoid risk. If users dislike Meta’s rules, she said, they can move to X or Truth Social, or to Bluesky or Threads.
Outside monitors are now weaker just as the rules grow more opaque. X’s paywall for data shut out many researchers. Meta removed CrowdTangle and offered a limited successor. Renée DiResta described the pressure on academics who study misinformation. “The investigations have led to threats and sustained harassment for researchers who find themselves the focus of congressional attention,” she wrote. When watchdogs cannot see, quiet throttling becomes harder to challenge and easier to deny.
What This Means Before the Midterms
In practice, AI filters, limited-visibility labels, and throttling reduce impressions for conservative publishers when it counts most. With fewer researcher tools and opaque systems, proof is harder to produce even as the effect grows. Lawsuits and hearings have raised the cost of moderation conservatives view as biased. Nominations and agency inquiries show a federal focus on fairness. Doctorow’s thesis offers a strategic exit: if conservatives want speech that cannot be quietly muted, they should build audiences where speech routes around chokepoints, not through them.
From a conservative publishing perspective, Big Tech’s quiet war looks coordinated in effect if not always in intent. Rules are vague, algorithms are hidden, access is restricted, and penalties arrive just as voters tune in. The answer blends near-term pressure for transparency and restraint with a long-term migration to decentralized networks. That choice can decide whether a message reaches millions or fades into limited visibility when the country is choosing its leaders.