President Donald Trump delivers remarks about American energy production during a visit to the Double Eagle Energy Oil Rig, Wednesday, July 29, 2020, in Midland, Texas. (AP Photo/Tony Gutierrez)

Emmy nominations came out this past week. It’s a shame the performances of our elected officials and tech barons in Washington were not considered.

President Trump, along with Senate and House committees, turned his attention to internet regulation, creating a spectacle that was mostly misguided and incapable of resolving the actual concerns the growing power of big-tech firms raises for our democracy.

Trump announced he was taking on regulating social media again. As is often the case, he has recognized a legitimate concern, but seems only capable of making it worse.

Yes, big-tech firms have amassed concerning amounts of power over the flow of information and our private data, and they have displayed an inability to protect the pathways of democratic discourse from harm.

His solution, which uses a sledgehammer where the precision of a surgeon's scalpel is needed, is to remove their legal protections. This will not solve the problem, but it will create new ones.

The president asked the FCC to “adopt rules clarifying Section 230.” The order sounds innocuous. Section 230, however, has been referred to as the law that “created the internet.” The protection it has afforded fledgling companies has allowed the internet to grow into a vital engine for social and economic exchanges.

Section 230 is the part of the Communications Decency Act that makes internet service providers and online forums, such as Facebook and YouTube, immune from liability for how people use their services. When Section 230 is coupled with the First Amendment, these big-tech firms have incredible freedom to create and manage the online spaces we use each day.

They have the power to suspend accounts, as Twitter did to Donald Trump Jr. on Tuesday. They have the power to label content as untrue, as Twitter did to the president’s tweets about mail-in voting earlier this summer. They can also simply take content down. Facebook removed conspiracy-focused videos about COVID-19 this week.

Suspending, removing and qualifying information is not the main problem, though it appears to be driving the president's calls for regulation. Section 230 is also not at the heart of the issue, though a Senate subcommittee conducted hearings Tuesday about revising the provision.

The real problem is that big-tech firms such as Facebook and Google have created economic models that benefit from maximizing activity on their sites, even when that activity fundamentally damages democracy. One report found Facebook earns about $7 annually per user. Any change that leads to fewer users will lead to smaller profits.

These economic motives were part of the spectacle Wednesday, when CEOs from Amazon, Apple, Google and Facebook testified about their economic models, often responding to members of Congress with word salads. Our elected officials played their parts, often showing little understanding of the tech firms or their services.

When word salads seemed to fall on deaf ears, the tech leaders tried to pivot and talk about the American values their firms spread and the competitive edge they would forfeit to China if their businesses were saddled with punitive regulations. The only mark missing from my hearing bingo card was an honest recognition of their motives and the effects of their business models on democratic discourse.

Misinformation and disinformation drive significant amounts of social media traffic, and though big-tech firms have at times taken steps to curb some of this content, they have been loath to fundamentally address the harm their services create. Why? When your company makes $1.7 billion a quarter, why change the product?

Yet, as November’s elections approach, our information ecosystems are awash in misinformation and disinformation. That’s why the regulation conversation we should be having is about incentivizing ways for these firms to redesign their products so they do less harm to democracy.

The challenge, and the need for a scalpel rather than a sledgehammer, is underscored by the reality that the president's concerns about being blocked, censored or qualified on social media and the push for regulations to protect democratic discourse both run into the same legal limitations. It is unlikely any change the FCC makes will survive in court. The First Amendment protects human and corporate speech with nearly the same zeal. This means the government cannot compel these firms to leave certain content online or to remove it. At the same time, courts have continued to interpret Section 230 as protecting these firms from liability for how their services are used.

So what is to be done? My friend Dipayan Ghosh, a former Facebook employee and White House policymaker, and I argued for industry self-regulation earlier this summer. The idea is that if the industry policed itself, we could avoid the messy process of attempting to regulate social media.

Another approach would be to tax social media firms' profits, with the money used to repair the damage their products are doing, such as funding media literacy programs in schools and supporting fact-checking organizations.

Whatever the solution, we must get to the root of the actual problem. We need fewer performances and face-palm-inducing spectacles like we had this week in Washington and more candid and carefully studied conversations about the dangers social media firms’ products pose to democratic discourse.

Jared Schroeder is an associate professor of journalism at Southern Methodist University, where he specializes in First Amendment law. He is the author of “The Press Clause and Digital Technology’s Fourth Wave.”