Everything You’ve Heard About Section 230 Is Wrong

These hallowed 26 words shield internet companies from being held responsible for what people post and share. But the web’s most sacred law is a false idol.

The lie was sort of funny, until it wasn’t. In the weeks after the 2020 election, as Donald Trump’s quest to remain in office met one courtroom defeat after another, his shrinking legal team concocted a baroque conspiracy theory to explain how the presidency had been stolen. At a surreal press conference in late November, Trump’s lawyers, Sidney Powell and Rudy Giuliani—the latter dripping with mysteriously brown-tinged sweat—explained that Dominion Voting Systems was secretly linked to rival voting machine company Smartmatic, both of which, they said, had been created in Venezuela under the direction of Hugo Chávez for the purpose of systematically rigging elections. George Soros and the Clinton Foundation were possibly in on the scheme as well.


This performance was met with widespread derision; on Twitter, Trump’s own recently fired election security czar, Christopher Krebs, called it “the most dangerous 1hr 45 minutes of television in American history.” But among millions of Trump supporters, the allegations of electoral fraud caught fire. Newsmax, One America News Network, and Fox broadcast the claims to their cable TV audiences, with Lou Dobbs referring to Smartmatic as “a company that was founded in 2005 in Venezuela for the specific purpose of fixing elections.” In no time the conspiracy theory was racing through the right-wing precincts of social media, where it erupted into bizarre memes and frenzied calls to action. The journey from farce to tragedy reached its nadir on January 6, when a mob of Trump supporters, urged by the outgoing president to “stop the steal,” violently stormed the US Capitol.

They say a lie gets halfway around the world while the truth is still tying its shoes. In this case, the lie seemed to have circled the globe, gotten the truth in a headlock, and then plowed it backward across the National Mall.

But then something unexpected happened: The truth got a lawyer. In February, Smartmatic filed a defamation lawsuit for more than $2.7 billion against Fox, Giuliani, and Powell. Dominion filed suits of its own seeking more than $1 billion each from Fox News, Giuliani, Powell, and Trump mega-supporter Mike Lindell, the CEO of MyPillow, who had helped spread the vote-fixing claim. Suddenly, with money on the line, the TV networks grew more circumspect. Fox and Newsmax ran awkward disclaimers renouncing their own hosts’ coverage. Fox Business canceled Lou Dobbs Tonight, its highest-rated show, a day after Smartmatic named Dobbs as a defendant. A Newsmax anchor tried to cut off Lindell when the pillow tycoon began veering into Dominion territory during an on-air interview; the host eventually walked off the set in frustration.

America is a liar’s paradise. The First Amendment gives wide berth to hucksters, charlatans, and gaslighters under the wise premise that the government generally shouldn’t get to decide what’s true and what isn’t. But the legal system does impose certain limits on speech. The Smartmatic and Dominion lawsuits showed that there can, in fact, be a significant cost associated with inventing, popularizing, and perhaps profiting off of such a Big Lie.

Not for everyone, though. As some commentators noted, one group was conspicuously absent from the cast of defendants accused of amplifying the voting machine myth: social media companies. Unlike traditional publishers and broadcasters, which can be sued for publishing a defamatory claim, neither Facebook nor YouTube nor Parler nor Gab had to fear any legal jeopardy for their role in helping the lie spread. For that, they have one law to thank: Section 230 of the Communications Decency Act.

Passed in 1996, Section 230 provides that online platforms—or “interactive computer services,” in the legislative argot of the time—generally can’t be held legally responsible for material posted by users. Among the public, this sweeping indemnification remained a pretty obscure fact of life on the internet for the first two decades of the law’s existence. But in the past few years, amid a general fit of panic over today’s platform giants and their possible incompatibility with civilization, democracy, and human flourishing, Section 230 has fallen under a cloud of scrutiny. Strong opinions about it have multiplied—as have threats to repeal it from both sides of the aisle in Washington.

Democrats argue that Section 230 lets companies get away with doing too little moderation; Republicans tend to say it lets them get away with too much. Still, there may be just enough bipartisan overlap for reform legislation to emerge from the gauntlet of Congress. So far, there is no consensus on what that reform should look like. The resolution of this tangled debate could have massive consequences for the internet, not only in the US, but in every country where online discourse takes place on platforms that are subject to American law.

This reckoning has all the makings of a barbarians-at-the-gate moment for the companies that benefit from Section 230’s protections. But not only for them. Over the years, Section 230 has attracted a small but ardent following of people who view it with the kind of idealistic veneration more often reserved for the First Amendment. According to its admirers, Section 230 is the wellspring from which everything good about the modern internet emerged—a protector of free speech, a boon to innovation, and a cornerstone of the American economy. The oft-quoted title of a book by the lawyer Jeff Kosseff captures this line of thinking well. It refers to the law’s main provision as “the 26 words that created the internet.”

Another article of faith among Section 230’s champions? That people who criticize the law have no clue what they’re talking about. Section 230 recently turned 25 years old, and the occasion was celebrated by a virtual event whose sponsors included Twitter, Amazon, and Yelp. Senator Ron Wyden and former congressman Chris Cox, the authors of the statute, fielded questions from the audience, typed into a chat window. The most upvoted question was, “How best can we get folks to properly understand Sec 230? Particularly when it seems that many are either reluctant to realize they don’t understand or, even worse, they don’t want to understand?”

Exhibit A for these Section 230 advocates is the moment in May 2020 when Trump started publicly attacking the law, thrusting it into the national shouting match. Trump’s preferred platform, Twitter, had recently had the temerity to fact-check one of his tweets. Trump’s response took a cue from some other Republican provocateurs, most notably senators Ted Cruz and Josh Hawley, who have popularized a theory that Section 230 gives social media platforms legal cover to discriminate against conservatives. Heading into the November presidential election, hostility toward the law grew into one of Trump’s favorite talking points. “Big Tech, Section 230, right?” he mused to an Ohio crowd in October. “Big Tech is corrupt.”

Trump’s opponent was not much friendlier to the statute. In January 2020, then candidate Joe Biden, in response to a general question about the power of tech platforms, blurted out that “Section 230 should be revoked, immediately should be revoked.” The comment seemed to stem from Biden’s lingering anger over a misleading attack ad against him that Facebook had refused to block.

Neither man’s beef with the law is terribly coherent. Section 230 shields platforms from legal liability, but there isn’t anything unlawful in the first place about a sharp-elbowed attack ad that bends the truth. Ditto for Trump’s complaints: Even if social media platforms did discriminate against conservative viewpoints, it’s perfectly legal to have a partisan bias, as every waking second of American life makes clear. More generally, politicians and pundits often seem to blame Section 230 for whatever they happen to dislike about the internet, whether or not it really applies—or they lash out at the law simply because they know it’s precious to companies they loathe.

So, yes, a lot of people who complain about Section 230 don’t know what they’re talking about. And yet the story told by the pro-230 camp contains its share of mythology as well. Section 230 is not the bogeyman of Trump’s stump speeches, but neither is it the pixie dust making the internet a magical place for free speech and innovation. To understand the law, you have to know not just what it says but how it came to be and how it has been interpreted—and sometimes misinterpreted—by judges during its 25-year existence. Once you do that, the picture that emerges is very different from the one painted by either side of the kill-it-or-keep-it debate.

In fact, Section 230 may be more like Dumbo’s supposedly magic feather: a talisman the internet has been clutching for dear life for 25 years, terrified of finding out whether online discourse could fly without it.

II
The Moderator’s Dilemma

Section 230 is often described as a law about free speech—a sort of First Amendment for cyberspace. But it’s really about a much less glamorous area of law: torts.

Tort law is how the legal system holds people responsible when they wrong someone else. (Tort is French for “a wrong.”) It is part of the common-law tradition stretching back to medieval England, when judges, weighing in on a single dispute at a time, gradually shaped the law of the land. One area of tort doctrine—defamation—is particularly relevant to Section 230. In a defamation case, you sue someone who told a lie that hurt your reputation. Or, more to the point here, someone who published that lie. Under the so-called republication rule, if I falsely claim that you committed a crime, and a newspaper prints that claim, you can sue both the newspaper and me. So news organizations have to be very careful about reporting incendiary accusations. (If the accusation concerns a public figure, American publications can be a little less careful. The Supreme Court ruled in the 1960s that public figures can win a defamation suit only if they can prove the lie was made deliberately or recklessly.)

In the early days of the internet, it wasn’t clear how judges would apply the republication rule to online platforms. The first case to test the waters was Cubby v. CompuServe, decided in 1991 in a federal district court. CompuServe was one of the first major US internet service providers, and it hosted a number of news forums. A company called Cubby Inc. complained that someone had posted lies about it on one of those forums. It wanted to hold CompuServe liable under the republication rule, on the theory that hosting a forum was analogous to publishing a newspaper. But the judge disagreed. CompuServe, he observed, didn’t exercise any editorial control over the forum. It was basically a passive host, more like a distributor than a publisher. “CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so,” he wrote in his opinion.


The Cubby decision was a relief to the nascent internet industry. But if CompuServe avoided liability because it didn’t moderate its forums, did that mean a provider would be held liable if it did moderate its platform?

Four years later, a state judge on Long Island answered that question in the affirmative. This time the defendant was Prodigy, another giant online service provider in the early internet era. An anonymous user on Prodigy’s Money Talk bulletin board had posted that the leaders of an investment banking firm called Stratton Oakmont were a bunch of liars and crooks. Stratton Oakmont sued for $200 million, arguing that Prodigy should be treated as a publisher.

Unlike CompuServe, Prodigy proudly advertised its ability to screen content to preserve a family-friendly environment. Judge Stuart Ain held that fact against the company. He seized on comments in which Prodigy’s head of communications compared the company’s moderation policies to the editorial decisions made by a newspaper. “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice,” he wrote in his opinion. The company could be held liable as a publisher.

It was just one case, in one New York state trial court, but it put the fear of God into the tech industry. Ain’s logic set up the ultimate perverse incentive: The more a platform tried to protect its users from things like harassment or obscenity, the greater its risk of losing a lawsuit became. This situation, sometimes referred to as the moderator’s dilemma, threatened to turn the growing internet into either an ugly free-for-all or a zone of utter blandness. Do nothing and filth will overrun your platform; do something and you could be sued for anything you didn’t block.

To counteract Ain’s decision, a pair of congressmen, Republican Chris Cox and Democrat Ron Wyden, teamed up to find a legislative solution to the moderator’s dilemma. At the time, Congress was working on something called the Communications Decency Act, a censorious law that would criminalize spreading “indecent” material online. Cox and Wyden came up with language that was inserted into the bill, and that became Section 230 of the act. Much of the rest of the decency law would be struck down almost immediately by the Supreme Court on constitutional grounds, but Section 230 survived.

For such a consequential statute, Section 230 is unusually concise. There are two key provisions. The second, subsection (c)(2), says, “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

Translation: Forget the Stratton Oakmont case. A platform can protect its users without putting itself in legal jeopardy.

But it’s the first part of the law, subsection (c)(1), that has proven more consequential. Cox and Wyden understood that the potential volume of content on interactive platforms was so immense that internet companies could never exercise the same level of control as traditional media. Treating internet providers as publishers could make them too cautious, stifling the potential of the internet as a medium for free expression. And so Cox and Wyden decided to establish a baseline degree of legal immunity for platforms, regardless of whether they engaged in content moderation. They did this in the famous 26 words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

One part of Section 230 got rid of the moderator’s dilemma. The other, however, would end up creating dilemmas of its own.

III
Full Immunity

On April 19, 1995, Timothy McVeigh detonated a bomb in front of the Alfred P. Murrah Federal Building in Oklahoma City, killing 168 people in the deadliest single act of domestic terrorism in US history. Six days later, some very strange posts began appearing on an AOL bulletin board, advertising “Naughty Oklahoma T-Shirts” that featured phrases mocking the victims of the bombing. The ads, posted by a user with the handle KEN ZZ03, listed a phone number to call to order the shirts.

That phone number belonged to Kenneth Zeran, a Seattle-based TV producer and artist. Zeran did not post the ads and had no idea who had. (To this day, the identity of the poster remains a mystery.) Before long he was inundated with threatening phone calls from people understandably outraged by the tastelessness of T-shirts with slogans like “Visit Oklahoma … It’s a blast!!!” Things got even worse when a radio host encouraged his listeners to call Zeran and give him a piece of their minds. According to Zeran, AOL didn’t do nearly enough to deal with the problem, despite his repeated requests for help. Eventually, he got a lawyer, and in April 1996—two months after the passage of Section 230—he sued AOL in federal court.

Prodigy and CompuServe had taken their turns on the witness stand. Now the last of the old Big Three online service providers would get its moment, and this time the outcome would cement the future of internet law.

AOL raised the brand-new statute in its defense, arguing that it couldn’t be held responsible for posts by its users. Zeran’s lawyers countered that their case didn’t actually rely on the republication rule. They sued AOL for negligence, and as a distributor, not a publisher. Once AOL was put on notice, they argued, it had a duty to try to block the posts.

Zeran’s case was the first crucial test of how Section 230 would be interpreted by judges. The text said that an interactive computer service couldn’t be treated as the publisher of information provided by someone else. But did that mean it couldn’t be held responsible at all? Or were other forms of liability, like negligent distribution, still on the table?

They were not. In an opinion for the Court of Appeals for the Fourth Circuit, Judge J. Harvie Wilkinson III, a prominent conservative, ruled in favor of AOL. Section 230, he noted, was designed for exactly this type of situation. It might make sense to hold a traditional distributor liable for defamatory material once it’s put on notice, Wilkinson reasoned, but “the sheer number of postings on interactive computer services would create an impossible burden.” The ruling went further than protecting platforms from defamation suits. Section 230, Wilkinson held, “plainly immunizes computer service providers like AOL from liability for information that originates with third parties.” What kind of liability? What kind of information? Any kind, apparently.

Wilkinson’s use of the word “immunity,” which isn’t in the statute itself, was key. A legal immunity allows a defendant to swat away a lawsuit with a minimum of time and money—even if every fact the plaintiff alleges is true.

Because it was the only case interpreting this brand-new law about this brand-new domain called the internet, Wilkinson’s decision assumed the status of a quasi–Supreme Court precedent. Courts around the country immediately began citing Wilkinson’s “immunity” line to dismiss cases brought against internet companies at the earliest stage of litigation. They often did this grudgingly, essentially concluding that the law forced their hand. “While Congress could have made a different policy choice, it opted not to hold interactive computer services liable,” noted one early decision citing Zeran.

Wilkinson’s ruling revealed a paradox at the heart of Section 230. The law was supposed to encourage online service providers to police their platforms without fear. It was, after all, part of a statute called the Communications Decency Act. And yet the first part of the law, the part Wilkinson now interpreted as an immunity, removed a major legal incentive for them to police their platforms at all. With a few exceptions (most notably copyright infringement and child pornography), providers would never be held responsible for material posted by users, no matter how clearly false or harmful. Even if they were put on notice, could easily fix the problem, and simply chose not to.

It must have been hard to see at the time how consequential that position would become. Zeran was decided in 1997. Just 2 percent of the world’s population was online. Over the subsequent decades, the internet would spread into more and more aspects of daily life. Section 230 and its peculiar set of incentives would spread with it.

IV
Bad Samaritans

If there was one moment when Section 230 started revealing its potential to make things really weird, it was in 2003, in a case called Batzel v. Smith. Ellen Batzel was a successful lawyer. Robert Smith was a handyman she’d hired to do some work on her house. They seem to have had some kind of falling out. In 1999, possibly to get revenge, Smith sent an email to Ton Cremers, the Dutch publisher of a listserv called Museum Security Network, claiming that Batzel’s house was full of art that had been stolen from Jews during World War II. He also wrote that Batzel had bragged about being Heinrich Himmler’s granddaughter. Cremers’ interest was piqued. He forwarded Smith’s email to his listserv audience and posted it on the MSN website.

Batzel, understandably, was not amused. The allegations, she said, were a pack of lies. She sued Smith for writing the email, arguing that it had ruined her professional reputation. But she also sued Cremers for publishing it to his audience.

At first glance, these facts don’t look like a Section 230 situation. Recall the law’s underlying theory: The scale of the internet prevents online platforms from moderating every word or image uploaded by users. Cremers, however, wasn’t a tech company sitting atop a tsunami of user-generated content. He was a guy who forwarded and published an email. Batzel wasn’t suing him for failing to take something down; she was suing him for choosing to put something up.

And yet Cremers’ lawyers raised the statute in his defense. They argued that because the email from Smith was technically “information provided by another information content provider,” Cremers couldn’t be sued for spreading it online. The case went up to the Ninth Circuit Court of Appeals. In 2003 the court sided with Cremers. “Because Cremers did no more than select and make minor alterations to Smith’s e-mail, Cremers cannot be considered the content provider of Smith’s email for purposes of §230,” wrote Judge Marsha Berzon, a prominent liberal, for the majority.

How did the court come to that decision? Simple. The judges did what US judges often do: They read the statute extremely literally. Section 230, remember, says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” And it defines an “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” Put those pieces together and it does kind of sound like you can’t get in trouble for publishing something someone else emailed you.

This outcome, however, seems so far from the law’s original intent that even Section 230 coauthor Chris Cox believes Batzel was wrongly decided. Cox says that Section 230 was never supposed to let people on the internet get away with the exact same behavior that would land them in trouble offline. But if Cremers had done what he did using a bunch of envelopes and stamps, instead of email, he wouldn’t have been able to hide behind the statute. As the torts scholar Benjamin Zipursky has observed, in an article criticizing the logic of Batzel, “Anyone wishing to hurt another person by damaging her or his reputation is free to do so without accountability by finding a defamatory statement that someone else has made and broadcasting it to the world over the Internet.”


Nevertheless, the decision in Batzel v. Smith has been cited and followed by courts around the country, perhaps because it seems to follow the straightforward text of Section 230. One particularly galling example involves a website called The Dirty. The Dirty is a gossip site in the style of TMZ—except its subjects are ordinary people, not celebrities. A typical post involves a picture of a woman, often scantily clad, along with her full name and detailed allegations that she’s a cheater, a gold digger, or worse. In 2009, the victim of several such posts, a schoolteacher and Cincinnati Bengals cheerleader, brought a lawsuit against the website’s parent company and founder after posts appeared on The Dirty accusing her of sleeping with the entire football team and spreading sexually transmitted diseases—rumors that she said went viral at the high school where she taught.

The case made it all the way to the federal Court of Appeals for the Sixth Circuit, which held in 2014 that Section 230 protected The Dirty because the posts were submitted by users. The logic is a direct descendant of Batzel; indeed, the ruling cited Batzel eight times. As the opinion itself laid out, The Dirty wasn’t a message board or a social network where users could upload whatever they wanted. It received thousands of submissions each day, and founder Nik Richie or his staff would read them and select 150 to 200 to publish, often with some added commentary. It was, obviously, a publication that relied on outside submissions. And yet, following the literal logic of Batzel, the court held that Section 230 shielded it from liability.

Section 230 was supposed to protect websites that wanted to do the right thing. The very heading of the statute reads, “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” (This is an echo of a common concept in tort law. All 50 states have some kind of Good Samaritan law that protects people who help in an emergency from being sued if things go wrong.) But cases like the lawsuit against The Dirty show how easy it is for the statute’s immunity provision to accomplish precisely the opposite result. As the legal scholars Danielle Citron and Ben Wittes put it, Section 230 has become a law that ensconces protections for “bad Samaritans.”

At least one company relies so heavily on Section 230 that its website has an entire section dedicated to explaining the law. RipoffReport.com is a repository of horror stories about businesses and individuals, like a Yelp that specializes in one-star reviews. It has a policy of never taking a post down, which it says is to preserve its credibility. But the site also encourages businesses marred by bad reports to pay several thousand dollars for its Corporate Advocacy Program. That fee buys those businesses a new, positive post if they pledge to make things right, which the site promises will appear more prominently in Google results than the original review. Though Ripoff Report disputes the characterization, its business model appears to be: Nice reputation you’ve got there; shame if anything were to happen to it. (A recent investigation by The New York Times found a whole ecosystem of websites whose owners profit by selling “removal” services to the people being maligned on them.) Ripoff Report has been sued repeatedly for defamation. But because it doesn’t create the content of the posts itself, courts have consistently held that Section 230 protects it.

The same theory protects the Craigslist of guns. In October 2012, a Wisconsin man who was barred from owning a gun (because his estranged wife had taken out a restraining order on him) found an easy workaround. He went to an online marketplace, Armslist.com, found a private seller, and bought a semiautomatic handgun from a guy in a McDonald’s parking lot. The next day, he went to the salon where his wife worked and shot her to death, along with two of her coworkers, before turning the gun on himself. The wife’s daughter sued Armslist for negligence and wrongful death, among other claims. Her lawyers argued that the company had essentially set itself up to facilitate illegal gun sales. It allowed users to filter their searches to show only private sellers, who don’t have to run background checks. And it allowed anyone to buy or sell guns, taking no steps to screen out people barred from owning one.

None of that mattered to the Wisconsin Supreme Court. Even if the claims in the lawsuit were true—that is, even if it could be proven that Armslist intentionally facilitated illegal gun sales—Section 230 immunity applied. As long as Armslist didn’t help create posts itself, it was in the clear. That doesn’t mean the company would necessarily have lost the case otherwise, or that it will never have to worry about federal criminal prosecution. But it does mean, at a minimum, that thanks to Section 230, the families of people murdered with guns bought on Armslist can’t even force the site’s owners to defend their business practices in court.

The Armslist case is also telling for another reason: It was about commerce, not self-expression. In this respect, it is part of a robust line of Section 230 decisions that invoke the law to protect platforms devoted to business transactions. Free trade, not just free speech. Craigslist has used the law to ward off liability for hosting racially discriminatory housing ads. Companies like StubHub, eBay, Amazon, and Airbnb regularly invoke the statute as a defense against lawsuits and to avoid complying with regulations. They all describe themselves as platforms that host content—ticket offerings, apartment listings, products—created by third parties. These arguments don’t always succeed, but sometimes they do. One court recently ruled that Section 230 protects Amazon from liability for false advertising. Airbnb’s and HomeAway’s efforts to use Section 230 to stop municipalities from regulating them failed in San Francisco, but they worked in Anaheim. “After considering federal communications law, we won’t be enforcing parts of Anaheim’s short-term rental rules,” a city spokesperson said.

V
“Better Than the First Amendment”

You might think, given the facts of some of these Section 230 cases, that the law would have become rather controversial. In fact, within the world of internet lawyers and academics that would usually debate such things, and certainly within the tech industry, Section 230 was for years considered “a kind of sacred cow—an untouchable protection of near-constitutional status,” writes Danielle Citron, a law professor at the University of Virginia.

The Electronic Frontier Foundation, for example, calls Section 230 “the most important law protecting internet speech.” So does Twitter CEO Jack Dorsey.

Citron, who won a MacArthur Fellowship in 2019, recalls giving a talk at a 2008 conference in which she proposed amending Section 230. She was just starting her career as an academic, and a well-known older professor approached her afterward. “Danielle, really happy to meet you, but you basically want to jail communists,” she recalls him saying. “Your challenging Section 230 is like stabbing the First Amendment in the heart.”

Discussions about Section 230 began to creep beyond the esoteric boundaries of internet-law conferences in late 2017, as Congress debated amending the law to carve out an exception for lawsuits based on sex-trafficking allegations. This was also around the time when Ted Cruz, crusading against the alleged scourge of anti-conservative bias in Silicon Valley, started insisting falsely that Section 230 required social media platforms to maintain a “neutral public forum.” The law had entered the zeitgeist, or at least a small corner of it. But as journalists started writing about Section 230, we tended to describe it in the same reverent tones that Citron encountered at the 2008 conference. “Lawmakers Don’t Grasp the Sacred Tech Law They Want to Gut,” read a WIRED headline in 2018.

One man who has had an outsize effect on the way Section 230 is treated in public discussion is Eric Goldman, a professor at Santa Clara University School of Law, where he codirects the High Tech Law Institute. He is also a prolific blogger, keeping seemingly exhaustive tabs on the latest developments in internet law, including rulings involving Section 230. Goldman has been writing about this area for so long that the Stratton Oakmont ruling, which Section 230 was created to overturn, cites a paper he published in 1993 while still in law school.

“He’s had extraordinary influence,” says Mary Anne Franks, a law professor at the University of Miami and the president of the Cyber Civil Rights Initiative (where Citron is vice president). “In part because he’s a very smart person and he’s a scholar. It’s one thing when you’ve got people who are quite obviously tech lobbyists or part of the industry—that will only carry you so far. It really does mean something when you convince people in the scholarly community.” Goldman’s impact has been even greater in the media. He has for years been journalists’ go-to source on all things Section 230. He’s a reporter’s dream: encyclopedically knowledgeable, articulate, personable, and easy to get on the phone.

But Goldman is not only Section 230’s most up-to-speed observer; he may also be its biggest fan. When reporters call him for an expert quote, they get a very particular perspective—one capably summarized in the title of his 2019 paper, “Why Section 230 Is Better Than the First Amendment.” In Goldman’s view, the rise of platforms featuring user-generated content has been an incredible boon both to free speech and to America’s economic prosperity. The #MeToo movement; the more than $2 trillion combined market cap of Facebook and Alphabet; blogs, customer reviews, online marketplaces: We enjoy all of this thanks to Section 230, Goldman argues, and any reduction in the immunity the law provides could cause the entire fortress to crumble. No domain of user-generated content would be safe. If the law were repealed, he recently told the Committee to Protect Journalists, “comments sections for newspapers would easily go.”

Other guardians of 230 sound even more apocalyptic notes when the law comes up for debate. After a group of Democratic senators proposed a bill to limit the law’s protections in early February, Mike Masnick, founder of the venerable policy blog TechDirt, wrote that the changes could force him to shut down not just the comments section but his entire website. Section 230 coauthor Ron Wyden, now a US senator, said the bill would “devastate every part of the open internet.”

The stakes for online discourse, these arguments suggest, simply couldn’t be higher. You may be horrified by situations like the Armslist case or The Dirty, or any number of cases we don’t have room to talk about in which victims of harassment, bullying, and revenge porn have been unable to force internet platforms to take action. But would you be willing to trade everything you love about the online world to try to address those problems by reforming Section 230?

These apocalyptic arguments are only powerful, however, if they’re true. So let’s ask the question: What would the world look like if Section 230 had never been passed?

VI
Alternate History

One thing’s for sure: Today’s social media giants could not exist under the version of libel law applied in the Stratton Oakmont case. That decision, remember, said that a platform assumes the same liability as a publisher if it engages in any moderation whatsoever. But it’s almost unimaginable that one New York trial judge’s ruling, which sparked an immediate backlash, would have become the law of the land. Recall that defamation falls under common law, which is developed by judges over time as they apply precedents to new situations. For torts involving user-generated content, Section 230 aborted that process before it could begin. If the law hadn’t passed, judges in other jurisdictions would have gotten the opportunity to craft more reasonable applications of tort law to the new digital world.

In fact, another New York case against Prodigy that was initiated before Section 230 was passed offers an example of what this judicial path might have looked like. In Lunney v. Prodigy Services Co., a father sued Prodigy after someone opened up fake accounts impersonating his teenage son, then used those accounts to post vulgar comments on a bulletin board and send a threatening email in the son’s name. Prodigy had already deactivated the accounts, but the family wanted monetary damages. Midway through the case, Congress passed Section 230, and Prodigy asked for the new law to be applied retroactively. But the Court of Appeals of New York, the state’s highest court, said this was unnecessary. It had no problem applying common law principles to find that Prodigy wasn’t liable. The court ruled that Prodigy was analogous to a telecom company: Just as you can’t sue AT&T when someone impersonates you over the phone, the teenager’s parents couldn’t hold Prodigy liable for someone spoofing their son over email.

It didn’t rule that a company like Prodigy could never be held liable for something involving user-generated content. But, the court said, “if circumstances could be imagined in which an ISP would be liable for consequences that flow from the opening of false accounts, they do not present themselves here.”

The Lunney ruling is like a peek into an alternate timeline in which courts did what courts are expected to do: apply familiar principles of the law to new situations and changing circumstances. This process would not have been perfect; we’ve already seen how judges can screw things up. But in the long run, there’s no reason to think the legal system couldn’t have adapted tort law to the digital world.

Companies that perform infrastructure-like functions, like today’s Cloudflare or Amazon Web Services, or that provide neutral communication technology, like email, almost certainly wouldn’t have had to worry about what kind of behavior their clients allowed. As in Lunney, traditional standards of liability and causation would have protected them. (You can’t sue Xerox for selling a copier to someone who sends you a blackmail letter; you can’t sue Comcast for providing Wi-Fi to the hacker who drained your bank account.) Meanwhile, the courts gradually would have developed a more richly textured body of law around the legal responsibilities of the platforms that directly host user-generated content.

Maybe, as Lunney suggests, the common law would have developed something similar to the immunity provided by Section 230. But courts also could have come up with rules to take into account the troubling scenarios: bad Samaritan websites that intentionally, rather than passively, host illegal or defamatory content; platforms that refuse to take down libel, threats, or revenge porn, even after being notified. They might have realized that the publisher-distributor binary doesn’t capture social media platforms and might have crafted new standards to fit the new medium. Section 230, with its broad, absolute language, prevented this timeline from unfolding.

This hypothetical scenario isn’t even all that hypothetical. The United States is the only country with a Section 230, but it’s not the only country with both a common law tradition and the internet. Canada, for example, has nothing analogous to Section 230. Its libel law, meanwhile, is more pro-plaintiff, because it doesn’t have the strong protections of the First Amendment. Despite all that, user-generated content is alive and well north of the border. News sites have comments sections; ecommerce sites display user reviews. Neutral providers of hosting or cloud storage are not hauled into court for selling their services to bad guys.

Yes, websites with user-generated content do have to be more careful. Jeff Elgie, the founder of Village Media, a network of local news sites in Canada, told me that the possibility of getting sued was one thing the company had to take into account when building its comments system, which combines AI with human moderation. But it’s hardly the extinction-level threat that Section 230 diehards warn about. (Elgie said that, overall, only around 5 to 10 percent of comments get blocked on Village Media sites, and only a small subset of those are for legal reasons.) It is simply not true that “the internet” relies on Section 230 for its continued existence.

In response to this observation, staunch supporters of Section 230 generally pivot. They concede that other countries have blogs and comments sections but point out that these countries haven’t produced user-generated content juggernauts like Facebook and YouTube. (Set aside China, which has a totally different legal system, a closed internet, and private companies that are more obedient to the state.) Section 230 might not be responsible for the internet’s literal existence, they say, but it is necessary for the internet as we know it.

There are a few ways to respond to this. One is that it’s hard to prove Section 230 is the reason for the success of American social media giants. The internet was invented in the US, which gave its tech sector an enormous head start. America’s biggest tech successes include corporate titans whose core businesses don’t depend on user-generated content: Microsoft, Apple, Amazon. Tesla didn’t become the world’s most valuable car company because of Section 230.

Another response is that even if Facebook does owe its wild success to Section 230, perhaps that’s not a reason to pop champagne. The reason we’re talking about reforming tech laws in the first place is that “the internet as we know it” often seems optimized less for users than for the shareholders of the largest corporations. Section 230’s defenders may be right that without it, Facebook and Google would not be the world-devouring behemoths they are today. If the law had developed slowly, if they faced potential liability for user behavior, the impossibility of careful moderation at scale might have kept them from growing as quickly as they did and spreading as far. What would we have gotten in their place? Perhaps smaller, more differentiated platforms, an ecosystem in which more conversations took place within intentional communities rather than in a public square full of billions of people, many of them behaving like lunatics.

VII
“Reasonable Steps”

As I said, that’s an alternate timeline. From the vantage point of 2021, it’s probably too late to ditch Section 230 and let the courts figure it all out from scratch. Only Congress can scrape away the decades of judicial interpretations that have attached like barnacles to the original legislation. The question is how to change the law to address its worst side effects without placing internet companies under impossible legal burdens.

There are a number of ideas on the table, ranging in concreteness from op-eds to white papers to proposed, sometimes even bipartisan, legislation. And they vary according to what problem the authors are most interested in solving.

The most sweeping piece of legislation introduced to date, a bill called the Safe Tech Act, reads like a point-by-point rebuttal to some of the most controversial judicial applications of Section 230. It would remove protections from specific categories of civil claims, including wrongful death (like in the Armslist case), cyberstalking and harassment (as in an infamous New York stalking case involving the gay dating app Grindr), and civil rights law violations (as when Craigslist was sued unsuccessfully for hosting discriminatory housing ads). The bill, proposed by Democratic senators Mark Warner, Mazie Hirono, and Amy Klobuchar, also swaps the word “speech” in for “information,” to try to refocus the law’s protection on self-expression rather than commercial transactions. When I asked Warner if all the carve-outs are a roundabout way to limit Section 230 immunity to defamation cases, he laughed and said, “Let the record reflect that Senator Warner made no comment in response.” (This proposed bill, incidentally, is the one that Wyden said would “devastate every part of the open internet.”)


Another school of thought holds that the solution is to apply Section 230’s protections only to hands-off conduits for communication—things like newsletter distribution services and blogging platforms, or nonprofit-owned sites like Wikipedia or the Internet Archive that provide a neutral architecture for users to develop content. Once a platform starts to curate, amplify, or monetize user-generated content, however, some form of liability would kick in.

One of the more aggressive suggestions along these lines comes from the American Economic Liberties Project, an anti-monopoly think tank. In a statement to the Federal Communications Commission, the organization has proposed limiting Section 230’s protections to companies that make their money by literally “selling access to the internet or a computer server.”

“Airbnb is a travel company. Google is an advertising company,” says Matt Stoller, the organization’s director of research. “You should be regulated not based on whether you have a website but based on how you make money. If you make your money on travel, you should be regulated like a travel company. If you make your money from advertising, you should be regulated like a publisher.”

This proposal would pull mega-platforms like Facebook and Google out from under Section 230’s shield almost entirely, on the theory that companies that profit by selling ads against user content should have to bear the full cost of policing that content, or else change their business model. This change, the report argues, “would also help restore a level playing field for publishers, who are legally responsible for the content they publish.”

A more modest approach would be to grant immunity only to those companies that can show they’re not abusing it. Danielle Citron has proposed amending Section 230 to make its protections from liability conditional on whether a platform “takes reasonable steps to address unlawful uses of its service that clearly create serious harm to others.” This would elegantly solve the bad Samaritan problem: A site that actively encourages people to humiliate women with revenge porn, smear enemies, or make illegal business transactions would not be able to satisfy the test.

Citron’s proposal would also open a window for the common law to step back in, striking a middle ground between full repeal and the automatic immunity companies currently enjoy. To qualify for safe harbor, a defendant would have to convince a judge that it has a reasonable approach to dealing with a given category of harm—even if it has screwed up in a particular instance. (That’s important, because as Section 230’s supporters rightly point out, no system of online content moderation at scale will ever be perfect.) Each defendant, each category of harm, would be judged on its own terms.

“A reasonable approach to sexual-privacy invasions would be different from a reasonable approach to spam or fraud,” Citron and Franks have written. “A blog with a few postings a day and a handful of commenters is in a different position than a social network with millions of postings a day.” Instead of Congress trying to enumerate every situation in which Section 230 should and shouldn’t apply, judges would have flexibility to develop different standards for different contexts—including technologies and harms that don’t even exist yet.

The dominant platforms, like Facebook, Twitter, and YouTube, already have relatively robust policies and procedures for dealing with some types of illegal material. But a “reasonable steps” requirement would finally force them to take seriously some categories of harm that they currently get a free pass on, most notably defamation. Under the law now, platforms have no incentive to do anything about defamatory posts—and so they generally don’t. Facebook’s voluminous, searchable community standards never mention the words defamation or libel. Twitter’s do, but only to absolve itself of liability. (YouTube at least has a mechanism for reporting defamatory videos, to its credit.) This poses a particularly acute problem if the defamer is anonymous and can’t be tracked down: The victim can’t hold anyone responsible. If Section 230 were revised to impose a standard of care, these companies would have to build in some kind of process to deal with defamatory posts or else risk being sued themselves.

Now, a reality check: This would not magically fix social media. Fake news, bigotry, and your cousin Steve’s idiotic Facebook memes will generally continue to be protected by the First Amendment, as they should be. And the big platforms deserve credit for making some progress on content moderation after years of withering criticism. If they had to worry about defamation suits, however, they would finally have a strong incentive to act on at least the most extreme cases of disinformation before they become national scandals. A lot of the wildest conspiracy theories that infect American politics are straightforwardly defamatory. QAnon posts have accused specific individuals of killing and abusing children, for example. The Stop the Steal movement that ultimately led to violence at the US Capitol on January 6 was, you’ll recall, built on specific lies about Dominion and Smartmatic, ones that have landed a few cable networks in court. But because of Section 230, these individuals and companies can’t sue Facebook, Twitter, or YouTube for allowing, and perhaps helping, the bullshit to spread.

VIII
Change Is Good

Reforming Section 230 faces all the familiar obstacles to getting anything done in Congress: partisanship, bureaucratic inertia, and, of course, ferocious lobbying from an industry that is quite happy with the status quo, thank you very much.

The biggest barrier, however, may be the philosophical resistance to change—any change—among Section 230’s intellectual and legal defenders, a group that cuts across party lines and can’t be written off as industry shills.

You might think, for example, that something like Citron’s proposed “reasonableness” standard would be widely seen as a commonsense, compromise reform. In fact, even this suggestion draws fierce opposition. Eric Goldman, the influential law professor, told me it would be tantamount to repealing the entire law.

“A key part of 230’s secret sauce comes in its procedural advantages,” he said. Today, the law doesn’t just help companies defeat lawsuits; it helps them win fast, at the earliest possible step, without having to rack up legal bills on discovery, depositions, and pretrial filings. Forcing defendants to prove that they meet some standard of care would make litigation more complicated. The company would have to submit and gather evidence. That would require more attention and, most importantly, money.

Perhaps the biggest companies could handle this, Goldman said, but the burden would crush smaller upstarts. Tweaking Section 230 this way, in other words, would actually benefit monopolies while stifling competition and innovation. Faced with a deluge of defamation lawsuits, the large platforms would err on the side of caution and become horribly censorious. Smaller platforms or would-be challengers would meanwhile be obliterated by expensive legal assaults. As Ron Wyden, Section 230’s coauthor, puts it, Citron’s proposal, though “thoughtful,” would “inevitably benefit Facebook, Google and Amazon, which have the size and legal muscle to ride out any lawsuits.”

The thing about this argument is that a version of it gets trotted out to oppose absolutely any form of proposed corporate regulation. It was made against the post-recession Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010, which the conservative Heritage Foundation declares “did far more to protect billionaires and entrenched incumbent firms than it did to protect the little guy.” Federal food safety rules, fuel economy standards, campaign spending limits: Pick a regulation and a free-market advocate can explain why it kills competition and protects the already powerful.

In fact, a lot of the most passionate pro-230 discourse makes more sense when you recognize it as a species of garden-variety libertarianism—a worldview that, to caricature it only slightly, sees any government regulation as a presumptive assault on both economic efficiency and individual freedom, which in this account are pretty much the same thing to begin with. That spirit animated Section 230 when it was written, and it animates defenses of the law today. So you have Cathy Gellis, a lawyer who blogs ardently for TechDirt in support of Section 230’s immunity, filing an amicus brief in the Armslist case insisting that a post listing a gun for sale is speech that must be protected. And Goldman in The Wall Street Journal last year arguing that Amazon should not be held liable for dangerous products sold by vendors on its platform. It should “probably” try harder to protect customers, he wrote, but “any steps the company takes should be voluntary.”

In this version of laissez-faire capitalism, the best regulation is self-regulation. But today, that idea no longer receives as much automatic deference as it did in the ’90s. It is, in fact, the precise idea the techlash is lashing against.

Government intervention can and does go wrong, of course. Big corporations do indeed try to hijack the legislative process and capture regulators. In March, for example, Mark Zuckerberg publicly declared his support for reforming Section 230—with a set of proposed changes that, while vague, seem designed to allow Facebook to leave its existing policies more or less untouched.

So no, it’s not surprising that an $800 billion company facing potential new regulations would try to turn them to its advantage. But to acknowledge that fact is not to make an argument for inaction. If anything, it suggests that the danger lies in doing too little, not too much. Because for all the hypothetical little guys who might get harmed according to some economic theory, there are plenty of real, known little guys who are being abused by the status quo. That includes not just individuals like Kenneth Zeran and Ellen Batzel but whole segments of society. Discrimination in housing and job ads denies opportunities to Black people. Cyberstalking and harassment disproportionately drive women and members of other vulnerable groups off of social media. This is why supporters of the Safe Tech Act include organizations like the NAACP Legal Defense and Educational Fund, Muslim Advocates, Color of Change, and the National Hispanic Media Coalition.

OK, but what about the economic little guy? Here, too, the case for doom and gloom is thin. The George Mason economist Alex Tabarrok, himself a prominent libertarian, has found, to his surprise, that federal regulation cannot be blamed for reduced startup growth or job creation. One reason could be that while regulations and liability rules impose costs on some parts of the economy, they also open opportunities and spur innovation elsewhere. Magic happens when money’s on the line.

There is already a small market for third-party moderation software: In Ireland, a startup called CaliberAI, founded by a father-son pair of former journalists, has developed an AI system for flagging potentially defamatory comments. (The posts apparently tend to have certain linguistic hallmarks.) In the US, companies like Sentropy and Sendbird offer moderation tools for site administrators. If Section 230 were rolled back, you can bet that venture capital would rush into that sector. That would, in turn, help social media startups scale up without having to invent their own systems for dealing with illegal user content from scratch. Sid Suri, Sendbird’s head of marketing, predicts that companies like his would shift engineers to spend more time on moderation products—because that’s where more of the money would be. “An ecosystem will always develop around the need,” he says. Imagine: companies getting rich by making the internet less toxic.
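To make the idea of “linguistic hallmarks” concrete, here is a minimal toy sketch of the general approach: a pattern-based pre-filter that scores comments for defamation risk and holds the risky ones for human review. The patterns, threshold, and names are invented for illustration; real products such as CaliberAI’s rely on trained language models, not a handful of hand-written rules.

```python
import re

# Toy "hallmarks" of potentially defamatory speech: flat, unhedged
# accusations of crime or misconduct. Invented for illustration only.
ACCUSATION_PATTERNS = [
    r"\bis a (crook|liar|fraud|thief|criminal)\b",
    r"\b(stole|embezzled|laundered)\b",
    r"\bcommitted (fraud|perjury|a crime)\b",
]

# Hedging language lowers the score: pure opinion generally isn't defamation.
HEDGES = [r"\bi think\b", r"\bin my opinion\b", r"\ballegedly\b", r"\bseems\b"]

def defamation_risk(comment: str) -> float:
    """Return a crude 0-1 risk score for one comment."""
    text = comment.lower()
    hits = sum(bool(re.search(p, text)) for p in ACCUSATION_PATTERNS)
    hedged = any(re.search(h, text) for h in HEDGES)
    score = min(1.0, hits / 2)
    return score * 0.4 if hedged else score  # discount hedged statements

def triage(comments: list[str], threshold: float = 0.5) -> list[str]:
    """Hold risky comments for human review instead of auto-publishing."""
    return [c for c in comments if defamation_risk(c) >= threshold]

if __name__ == "__main__":
    sample = [
        "Great piece, thanks for writing it.",
        "John Doe is a crook who embezzled client funds.",  # hypothetical name
        "In my opinion the council seems out of touch.",
    ]
    for comment in triage(sample):
        print("Needs human review:", comment)
```

The design mirrors the liability logic described above: unhedged factual accusations about a named person carry legal risk, while hedged statements of opinion mostly do not, so only the former get routed to a moderator before publication.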

It’s important to keep in mind that, even without Section 230’s blanket immunity, companies would not be forced to go to trial every time someone gets Mad On the Internet. It’s already incredibly difficult to sue corporations in America. It’s really hard to win a defamation action. Companies have many ways to toss out weak cases besides Section 230. Remember the Stratton Oakmont case? The one where Prodigy was being sued for $200 million for hosting a message that called an investment firm’s leaders a bunch of liars and crooks? In the end, Prodigy never paid Stratton Oakmont a cent. After losing the preliminary ruling, Prodigy announced that it would raise a truth defense. (It ain’t defamation if it’s true.) A few months later, Stratton Oakmont agreed to drop the case in exchange for an apology. In hindsight, this is anything but shocking: Stratton Oakmont is the company featured in The Wolf of Wall Street. Its founders, Jordan Belfort and Daniel Porush, were sent to prison in 1999 for securities fraud and money laundering. They really were liars and crooks.

Sometimes companies make money by doing bad things. Other times, companies merely allow bad things to happen, because it’s cheaper than preventing them. Either way, the basic premise of tort law is that when someone is responsible for a bad thing, they should have to pay, or at least make it stop. You can think of this as a form of justice: forcing wrongdoers to make their victims whole. Or you can think of it as a form of deterrence, in which the point is to prevent bad behavior. But civil liability, especially corporate liability, can also be understood in economic terms: a question of who should have to pay the costs that certain activities impose on everyone else.

The reality of Section 230 is that it has allowed digital platforms to externalize some of the costs of their business models. Other industries generally don’t get to do this. Chemical companies can be sued if they poison the local water supply. Retailers can be sued for putting defective items on their shelves. Working to prevent these outcomes costs corporations money. But it doesn’t stop them from making Teflon pans or stocking Tostitos. Once the creation myths and apocalyptic talk are set aside, it’s clear that the online world needn’t be treated so differently—and that an intelligent reform of Section 230 won’t stop digital platforms from creating the internet.
