Section 230 reform: a solution in search of a problem, which itself causes problems

Some throat clearing before we get into it: thanks to everyone who read and gave me feedback on the first post. I'm thrilled with the response and will try very hard to keep providing something worthy of your time. This post may be the start of a series (no promises, I'm really new to this). I'm fascinated by the long-term trend in trust in institutions and feel like I have a million thoughts about it. I'm also fascinated by the interplay (and power play) between policy and business. Here I'm going to discuss the drop in trust in tech companies and the somewhat related drop in trust in scientists. I argue they are related, and I think you'll agree, but I mostly want to discuss the reason they are related and one proposed solution to the problem: reform of Section 230.

We have a long-term decline in trust in institutions in the United States. Gallup has consistently polled Americans’ trust in fourteen major institutions since 1979, and that trust has steadily eroded over time.

In this chart you can see the impact of some "shock" events on institutional trust. The drop in 1990 coincides with the buildup to the Gulf War and George H.W. Bush breaking his promise of "no new taxes", and the drop in the mid-to-late 2000s came during the global financial crisis. Overall institutional trust never recovered from the financial crisis.

Almost every institution they poll is distrusted, with the exceptions of small business, the military, and, in some years, the police. Big business has long been distrusted, and while tech companies surpass big business in overall trust, in the second chart below you'll see that trust in tech has been falling rapidly in recent years.

American trust in scientists, and particularly medical scientists, has also taken a hit in recent years, for what I assume are obvious, though not necessarily good, reasons. Below is a chart from a Pew survey on institutional trust, which is a bit different from the Gallup survey.

[Chart: Confidence in Medical Scientists, Scientists, and the Military]

The pandemic coincided with the rapid deterioration of trust in tech companies and with the drop in trust in medical scientists and scientists in general, and given the media coverage during that time, it's pretty clear that one of the main drivers of this distrust is the problem of "misinformation" (a term I'll use to cover both misinformation and disinformation, insofar as you distinguish between them). Over the last three years, huge momentum has built behind the cause of solving misinformation, and some of the proposed solutions have massive implications for businesses both inside and outside the tech industry.

Lawmakers have latched onto two competing partisan stories about misinformation and distrust in tech (and, in turn, reasons why tech companies need to be regulated further). Those stories have some neat logical corollaries to distrust in scientists. The first story is that people are tired of tech companies failing to clean up the mess their own scale has caused and allowing misinformation to proliferate all over their platforms. During the pandemic, this misinformation poisoned the minds of some of the less discerning platform users and turned them against the noble scientists who were trying to solve the massive problem posed by the pandemic. The alternative story is that tech companies were too aggressive in policing their platforms, and that their biased leadership banished from these platforms good information that did not conform to their worldview. During the pandemic, scientists played a critical role in pushing this information purge, seeking to drive people into a single-lane solution to a very complex problem, and they are rightfully losing trust because of their complicity in this larger problem of information policing. Somewhat hilariously, to me, the purveyors of both of these seemingly irreconcilable stories have converged on the same solution: reforming Section 230 (and, with it, more and bigger paydays for lawyers).

What this bit of law does, in its current form, is shield platforms from civil liability for the content their users post and for the good-faith moderation decisions they make about that content. This is useful for big tech companies and platforms that want to make their platforms a welcoming environment for advertisers, who can have the peace of mind that their advertising content will not be placed next to objectionable or obscene diatribes. It's also great for small-time operations, like bloggers, who want to host their own blogs and allow users to leave comments, but reserve the right to delete comments they don't like for whatever reason. It's hard to overstate how much Section 230 contributes to our ability to experience the internet the way we do today. However, some lawmakers and activists think that Section 230, while great when it was enacted in 1996, does not work well in the modern technological environment and lets tech companies off the hook for bad behavior far too often.

The arguments those lawmakers and activists make align with the two partisan stories I outlined earlier. On one side, they say the protection of Section 230 frees platforms to allow the amplification and proliferation of misinformation in ways that deeply impact our society, and that, because the companies are unaccountable, they make bad policy choices in confronting it. On the other, they say platforms hide or delete information that does not fit a preconceived worldview that tech leaders and their financiers buy into, and that they should be held accountable for silencing people whose message does not fit that worldview.

I don't think either argument has much merit, and the second can be falsified by a cursory glance at the top link posts on Facebook. However, the first has the benefit of attempting to address misinformation, so it is worthwhile to at least consider whether misinformation is a problem we need to address, and if so, whether reforming Section 230 is a good way to address it. My criteria for whether it needs to be addressed are that the problem must be new, or, if it is not new, it must be substantially worse than it was before.

Misinformation has been around forever in some form, and while new technology allows it to proliferate, that does not make the problem itself new. From an analysis of misinformation:

"Journalists and politicians have become ensnared in a symbiotic web of lies that misleads the public."

That comes from this Harvard Business Review article from 1995. There are plenty of historical examples of concern about the scourge of misinformation being peddled by media who were, perhaps, prodded by the government. So the problem is not new, nor is either partisan framing of it, but the scale might be.

In order to evaluate the scale of the problem, I looked for data that quantified how much misinformation was being accessed as compared to credible information. I struggled to find much data quantifying the actual problem of misinformation, but I found tons of polls covering people's feelings about misinformation. I should note that, given my relatively long career built on finding data on all kinds of topics, this immediately made me think there wasn't a big problem here. Whenever a topic like this gets substantial media traction, there are tons of researchers working every data angle, because there's tons of benefit to finding damning information, and if a convincing smoking gun were out there it would be pretty easy to find.

Finally, I found some promising data from Axios, but I immediately spotted a substantial flaw in it. From Axios:

By the numbers: In 2020, nearly one-fifth (17%) of engagement among the top 100 news sources on social media came from sources that NewsGuard deems generally unreliable, compared to about 8% in 2019.
NewsGuard found that its top rated "unreliable" site, The Daily Wire, saw 2.5 times as many interactions in 2020 as 2019.
Bongino.com increased engagement by more than 1700% this year.

My personal views of their accuracy aside, classifying major conservative news sites like The Daily Wire as unreliable sources indicates a severe bias in the data, and one whose effect on the numbers would be badly exacerbated during the pandemic. I can't imagine putting The Daily Wire in a category separate from, say, Fox News, which is simply part of a normal conservative consumer's information diet. But making these kinds of data assumptions and choices is perfect for an analysis by an organization like NewsGuard, which is positioned to profit greatly from new regulation of online information. So I continued looking for more reliable sources.

The best analysis I found is from Harvard's Kennedy School. They looked not only at the prevalence of misinformation in the average person's media diet (~5%), but also at whether combating misinformation was more effective than improving acceptance of reliable information, in other words, good old persuasion. I encourage you to read it for yourself, but the end result is that improving acceptance of reliable information is more effective than attempting to combat misinformation. It makes sense outside of the technical analysis: people do not like being told they've been duped, but they don't mind learning if the material is presented effectively.

There's no doubt that we have more access to information, and in turn misinformation, than ever before. But, given the lack of convincing data, there's no real reason to think the numerator (misinformation accessed) is growing meaningfully faster than the denominator (all good information and misinformation accessed); if the misinformation share of the average media diet holds near that ~5% figure even as total consumption grows, the absolute volume grows, but the problem is not getting worse relative to everything else we read. So we have a problem that has existed for all of modern time, and arguably time immemorial, no evidence that the problem is getting substantially worse, and evidence that, insofar as the problem exists, it is best solved by persuasion. In my view this is not a problem that needs a regulatory solution. But perhaps you think tech companies have gotten away with too much for too long, and whether misinformation is a problem or not, we should still reform Section 230 to make them more accountable for their content moderation choices.

Well, Meta agrees. They have been lobbying for regulatory reform and new regulation, and running ads about it since misinformation became such a hot topic.

And Meta's eagerness to get more regulated gives away the game here. Mark Zuckerberg's history as a leader of the company is clear: he loves to wrap the company's mission and activity in language about improving the world, but the company's actions under his leadership are always shrewd and calculated. Meta knows that additional regulation improves their competitive position by erecting more barriers to entry for competitors, and it enables them to go after those competitors for violations of the new regulations. They can rely on their massive scale and army of lawyers to get them through the worst red tape the US government can muster. They also stand to reap some PR benefits, since the new regulations would enable them to rely on the regulation, and the case law that would come from it, more than on their own internal policies when making decisions about content on their platforms, giving them the ability to direct users' and critics' complaints to the US government. Other big tech companies would similarly benefit, which is why they have been saying they'd play ball with the government for more than two years now. They know it will stifle innovation, but that's a long-term benefit for them. Not so much for the rest of us, who will have less choice in our tech platforms as the industry concentrates and the companies all uniformly outsource their content moderation decisions to an appointed official who swaps out every four years at most. Also not great for those of us who now have to consider how these regulatory reforms will impact our businesses, which simply want to interact with customers online without additional legal considerations.

I want to be clear: I am not arguing that we should let tech companies completely off the hook, but I think there are more thoughtful ways to regulate the real problems driven by modern tech companies. We might consider improving user privacy rights, which many of the tech companies seem to anticipate is coming anyway. Facebook ran one of the most unethical research projects I've ever seen: in 2012 it conducted a user manipulation experiment, using the news feed to noticeably change user moods and behaviors. The experiment came to light in 2014, and the news barely got any traction. Nowadays, similar methods with more than ten years of technological improvement could potentially be used to optimize human behavior. Those are the kinds of problems that need solutions.

But Section 230 reform is a sledgehammer solution in search of a problem, and its impacts fall far beyond its intended targets.
