September 4, 2023
On Easter Monday I was laid off from my job at Parler, the nonpartisan free speech platform I had poured my mind and heart into for over two and a half years. I was told it had been sold off in a hurry on Good Friday and that the new owner, a marketing firm, was “temporarily” shutting it down.
CEO George Farmer saw no future for Parler, which, before its unjust deplatforming in 2021, was a serious contender. He insisted that Twitter, now owned by “free-speech absolutist” Elon Musk, would dominate. I had expressed my doubts about that conclusion at every opportunity, but now I could only hope Musk’s Twitter would deliver—much as an athlete who lost the semi-finals might root for his opponent to later win the championship.
It seemed possible. Out of the gate Musk released the “Twitter Files,” exposing politicians and bureaucrats who unconstitutionally pressured Twitter to censor “wrongthink.” And he tweeted a fair bit of it himself! But over time, some of us sensed something was off. While many creators gushed about how much freer and more engaged Twitter seemed, plenty of us didn’t notice much difference, even after buying a blue check. In fact, replies to my tweets from one blue-check novelist whom I follow are routinely labeled “unavailable,” yet they appear, clear as day, under the “Replies” tab on his profile! Notwithstanding Musk’s repeated promises of transparency about how and why our tweets are throttled, the cause remains a mystery.
Enter X’s new “freedom of speech, not reach” policy, touted by CEO Linda Yaccarino in her first interview since joining the company. Unfortunately, it seems, Musk has passed the baton (her metaphor) to a CEO who describes “free expression” as just one of the company’s “foundational core values.” In conversation with CNBC host Sara Eisen, Yaccarino sounded downright bullish on free speech. But her carefully chosen formulations make clear that her priority is delivering “brand safety” by ensuring that “99.9% of Twitter’s posted impressions are healthy.”
What does this mean in practice? Yaccarino explained:
“X is a much healthier and safer platform than it was a year ago. Since the acquisition, we have built brand safety and content moderation tools that have never existed before at this company. And we introduced a new policy…about hate speech, called ‘freedom of speech, not reach.’ So if you’re going to post something that’s illegal…you’re gone, zero tolerance. But more importantly, if you’re going to post something that is lawful, but it’s awful, then you get labeled [and] you get deamplified, which means it cannot be shared. … So [brands] are protected from the risk of being next to that content.”
Yaccarino said that deamplifying “hateful content…is one of the best examples of how X is committed to encouraging healthy behavior online.” “Staggeringly,” she said, 30% of users who learn that their content has been labeled hateful, and so cannot be shared, decide to take it down.
In response to Eisen’s follow-up about pornography and “conspiracy theories,” Yaccarino suggested that those, too, are labeled and deamplified under this policy. And when Eisen noted that both Kanye West (expected to return to X soon) and Musk frequently post “awful” content, and that each has millions of followers, Yaccarino insisted that X’s Trust and Safety team would nonetheless somehow maintain the “99.9% healthy” statistic. Would they hide West’s or Musk’s tweets from their own followers?
My BitChute colleagues and I regard unfettered civil discourse among people of different backgrounds and viewpoints as our lodestar. We do this because (1) we agree with Nadine Strossen and other scholars that censorship only exacerbates “hate”; and (2) we know that for any human creation to be a “source of truth,” it must allow for collaboration among minds left free to challenge any idea. Yaccarino, by contrast, spoke of civil discourse almost wistfully. Does she realize that X’s current policies are inconsistent with that goal? In her Xtopia, users learn to ignore the AI behind the curtain, which is responsible not only for delivering those “99.9% healthy” feeds but also for hoovering up whatever data might profitably be “licensed.” Xtopians are grateful for the opportunity to “be real,” unaware that “brand safety” means that posting something deemed “lawful but awful” could relegate them to being real in their own personal memory holes, replete with AI-generated fun-house mirrors.
Xtopian “blue-chip” advertisers generously tolerate the fact that X’s gaslit fun-house inhabitants are permitted to think about, post, and upload “lawful but awful” content—the horror!—so long as it never appears next to their brands and is almost never seen by anyone. Yaccarino hopes both cohorts will focus on the “vibrancy” of “real-time communication” among users scrolling, “multiple times a day,” through feeds composed of non-awful, algorithmically arranged content, offered up by creators hoping to “earn a real living” on X.
Can X “encourage healthy behavior online” by sweeping “awful” content—and those who post it—under the digital rug? What’s “lawful but awful” is necessarily subjective—as Yaccarino seemed to acknowledge. Moreover, the only moral and practical way to change long-term behavior is by changing minds. Even transparently imposed penalties won’t suffice; discussion and persuasion are required. And that requires 100% free expression.
Yaccarino should be educating X’s advertisers, not coddling them. By advertising on a platform dedicated to free expression, businesses will not only make money but also help make the world a better place. They’ll help foster the open debates and discussions necessary to expose and refute “awful” ideas, to solve thorny problems, and to share values and flourish.