A case study in libertarian denial

In the days since the 2021 riot at the Capitol Building in Washington, D.C., various Internet platforms have banned right-wing personalities. I have mixed feelings about this: on one hand, those people were annoying and it's nice not to have them around polluting my feeds. On the other hand, it's concerning that people can be disappeared from the Internet. How should I think about this?

I recently re-read something that helped: The Stages of Libertarian Denial, an underrated Bryan Caplan blog post from 2010. In it Caplan classifies different libertarian responses to demands for government intervention. Which stage best applies to censorship by Big Tech?

Stage 1: Is there actually a problem?


Caplan's first stage is "deny the problem exists." Is it really such a bad thing that Twitter, Facebook, et al. have decided not to do business with Donald Trump and his allies?


If you're in the Blue tribe, you might think: no, there's no problem - the presence of far-right content made Internet communities worse. What right-wingers did (or cheered on) at the Capitol Building was beyond the pale, and was a good excuse to clean house.


If you're in the Red tribe, you might think: I don't like that "my" side is being targeted here, but social media companies have freedom of association. If I wouldn't force a baker to make a cake for a gay couple's wedding, I shouldn't want to force Twitter to keep giving Donald Trump a platform.


But I don't think you can really just shrug and say that this is fine. What’s to stop these companies from deciding your views are bad and worthy of banning? Just because Internet companies aren't government actors doesn’t mean they can’t exercise coercive power - I'm a libertarian and this gives me pause. Corporate censorship hardly promotes free minds and free markets.


Stage 2: Is the problem the government's fault?


Caplan's second stage is "blame the government." Are the Internet platforms purging right-wingers because of some government action?


A conspiracy-minded person might say: the tech companies see which way the wind is blowing. The Biden/Harris administration won’t be pleased if Donald Trump and The Proud Boogaloos (or whatever) are organizing online against them. It will be more likely to scrutinize mergers, threaten antitrust action, and impose regulations. Banning right-wingers is a way to curry favor with the new regime.


Maybe that's true, but this hypothesis seems unnecessary. Employees at Twitter, Facebook, and the rest are generally in the Blue and Grey tribes. They don’t have much sympathy for the Red tribe, and purging its members from their platforms has a certain aesthetic appeal. What more is there to explain?


Stage 3: Would government action make things worse?


Caplan's third stage is "Admit that the government didn’t cause the problem, but insist that government action would only make the problem worse." I think this one fits the bill.


Here's an argument in meme format: Tech companies are abusing their power and silencing certain groups! We need to empower politicians like (checks notes) Donald Trump to stop them.


Put another way: don't give a power to Kamala Harris that you wouldn't give to Josh Hawley. The Blue tide will roll out one day and the Red tide will roll in (and vice versa). Imagine that the Obama administration had forced Facebook to ban the radical fringe of the Tea Party. Do you think Donald Trump wouldn't force Instagram to ban BLM?


And the rest


I think we can stop at Stage 3. But for completeness, here are the other stages:

  • Stage 4: Concede that government action wouldn’t make the problem worse, but say that the cure is so expensive that we’re better off just living with the problem.

  • Stage 5: Admit that government action could solve a problem at a low cost, but claim that the libertarian principle is more important.

  • Stage 6: Yield on libertarian principle, but try to minimize the deviation.


I don't think any of these apply. Government action probably would make the problem worse; it probably couldn't solve the problem at low cost; and we don't have to be principled libertarians to fear the abridgment of freedom of association.


There is an argument floating around that seems to make some sense. It goes like this: Section 230 of the Communications Decency Act shields Internet companies from liability for what their users post. Companies that censor certain users’ viewpoints shouldn’t be afforded this privilege.


But I don't think this is terribly convincing. Should knitting websites be open to lawsuits if they don’t allow non-knitting discussions? Would anybody be happy if Twitter only allowed Blue Checks to post content that had been vetted by libel lawyers? Would companies be able to avoid the regulation with well-crafted terms of service? It seems like a technocratic nightmare.


In the end


When I read blog posts and Twitter messages with policy proposals, I often think to myself “Stage 1” or “Stage 4.” And my friends now know to say, “I know you’ll say this is a Stage 5 problem, but I think intervention is justified...” Maybe that’s taking things too far? In any case, I hope you’ll read Caplan’s article and keep it in mind.