Today’s post – Censorship, Parler, and Antitrust – by Cory Doctorow of Pluralistic found its way to us through Kyle Rankin of Purism’s article/sales pitch, Parler Tricks. Both discuss some recent deplatforming, especially of the social media application Parler.
As Parler disappears from the Android and iOS app stores and faces being kicked off of Amazon’s (and other) clouds, people who worry about monopolized corporate control over speech are divided over What It Means.
There’s an obvious, trivial point to be made here: Twitter, Apple and Google are private companies. When they remove speech on the basis of its content, it’s censorship, but it’s not government censorship. It doesn’t violate the First Amendment.
And yes, of course it’s censorship. They have made a decision about the type and quality of speech they’ll permit, and they enforce that decision using the economic, legal and technical tools at their disposal.
If I invited you to my house for dinner and said, “Just so you know, no one is allowed to talk about racism at the table,” it would be censorship. If I said “no one is allowed to say racist things at the table,” it would also be censorship.
I censor my daughter when I tell her not to swear. I censor other Twitter users when I hide their replies to my posts. I censor commenters on my blog when I delete their replies.
Dress it up as “content removal” or “moderation” if you’d like, but it’s obviously censorship.
That’s fine. Different social spaces have different rules and norms. I disagree with some censorship and support other censorship. Some speech is illegal (nonconsensual pornography, specific incitements to violence, child sex abuse material) and the government censors it.
Other speech is distasteful or hateful (slurs, insults) and the proprietors of different speech forums censor it. This legal-but-distasteful speech is a mushy, amorphous category.
I’m totally OK with hilarious dunks on the insurrectionists who stormed the Capitol. Tell jokes about Holocaust victims and I’ll throw you out of my house or block you.
And when I do, you can go to your house and tell Holocaust jokes.
I’m not gonna lie. I don’t like the idea of anyone telling Holocaust jokes anywhere. Or rape jokes. Or racist jokes. But I have made my peace with the fact that there are private spaces where that will happen.
I condemn those spaces and their proprietors, but I don’t want them to be outlawed.
Which brings me back to Parler. It’s true that no one violates the First Amendment (let alone CDA 230) (get serious) when Parler is removed from app stores or kicked off a cloud.
But we have a duopoly of mobile platforms, an oligopoly of cloud providers, a small conspiracy of payment processors. Their choices about who may speak are hugely consequential, and a concerted effort by all of them could make some points of view effectively vanish.
This market concentration didn’t occur in a vacuum. These vital sectors of the digital economy became as concentrated as they are due to four decades of shameful, bipartisan neglect of antitrust law.
And while failing to enforce antitrust law doesn’t violate the First Amendment, it can still lead to government-sanctioned incursions on speech.
The remedy for this isn’t forcing the platforms to carry objectionable speech.
The remedy is enforcing antitrust so that the censorship policies of two app stores don’t carry the force of law; and it’s ending the laws (copyright, cybersecurity, etc) that allow these companies to control who can install what on their devices.
I got into a good discussion of this on a private mailing list this morning, then adapted my comments and published them in the public “State of the World 2021” discussion on The WELL.
There are three posts: the first deals with Apple and Google’s insistence that they removed Parler because it lacked an effective hate-speech filter. Given that there is no such thing as an effective hate-speech filter, this is obvious bullshit.
The second addresses the fundamental problems of moderation at scale, where you are entrusting a large number of employees to enforce policies against “hate speech.”
The biggest problem here is that “almost-hate-speech” is emotionally equivalent to “hate speech” for the people it’s directed at. If tech companies specify hate speech, trolls will deploy almost-hate-speech (and goad their targets into crossing the line, then narc them out).
And if tech companies tell moderators to nuke bad speech without defining it, the mods will make stupid, terrible mistakes and users will be thrown into the meat-grinder of the stupid, terrible banhammer appeals process.
The final post asks what Apple and Google should do about Parler.
They should remove it, and tell users, “We removed Parler because we think it is a politically odious attempt to foment violence. Our judgment is subjective and may be wielded against others in future. If you don’t like our judgment, you shouldn’t use our app store.”
I’m 100% OK with that: first, because it is honest; and second, because it invites the question, “How do we switch app stores?”