Proposals to regulate algorithms are tempting, but many likely would run into the same First Amendment barriers as direct speech regulations. The First Amendment protects hate speech, misinformation and a good deal of other harmful speech. Regulating algorithms would not avoid these problems. Courts have held that laws chilling the distribution of protected speech raise First Amendment concerns.
Some concerns that people raise about algorithms involve the platforms’ collection and use of personal data to target users with harmful content. Congress could address these issues more directly — and without the same constitutional problems as speech restrictions — via a strong national privacy law.
What is the most dangerous bill that has been proposed, and what is the best idea you have seen?
It’s hard to pick just one dangerous bill. I’m concerned about many proposals at the state and federal levels to restrict platforms’ ability to moderate content. Conservatives have argued that platforms have unfairly blocked them from expressing their views. But the First Amendment protects platforms’ ability to exercise this discretion, no matter how unfair it might seem. I’m worried about the Klobuchar-Luján bill, which, during a public-health emergency, would remove platforms’ Section 230 protection for any “health misinformation” that a platform algorithmically promotes in a nonneutral manner. How does the bill define “health misinformation”? It leaves that to guidance issued by the secretary of health and human services. It shouldn’t be difficult to imagine a scenario in which an H.H.S. secretary abuses this remarkable authority to suppress criticism of the administration. This comes far too close to a Ministry of Truth for my comfort.
I’m intrigued by [Stanford Law professor] Nate Persily’s proposal to require platforms to provide outside researchers with access to data. One of the biggest problems with the current debate is the lack of transparency among the large social media companies. The proposal would help to address this and inform the debate. But any such requirement would need to address the very real privacy concerns of providing access to such data. Relatedly, I’ll give a plug for a nonpartisan, expert fact-finding commission that I’ve been proposing for the past few years.
I also like some elements of the PACT Act. The bill contains many reforms, including an exemption to Section 230 if a platform declines to remove content that has been found defamatory in a lawsuit between the subject and the poster. Section 230’s co-author, former congressman Chris Cox, does not think that Section 230 should cover such cases. I agree.
Will tech companies have to rely on other defenses like the First Amendment to protect themselves if Section 230 gets worn down? What does the post-230 world look like?
Many platforms have relied on other defenses in recent years, particularly as judges have increasingly voiced their distaste for Section 230. These defenses often involve more complex judicial inquiries than Section 230, requiring the platforms to engage in costly depositions, document production and other discovery. A trillion-dollar company like Meta could easily afford such expenses (and gee whiz, Meta is calling for Section 230 reforms). But a start-up that wants to be the next Meta probably couldn’t. We don’t know exactly what level of First Amendment protections courts would provide to online platforms, as Section 230’s passage has made it mostly unnecessary for courts to determine that. But the First Amendment precedent, as applied to bookstores and other pre-internet defendants for decades, suggests that even without Section 230, plaintiffs would have a heavy burden to persuade courts to impose liability on platforms.