Future Tense

Social Media Platforms Need Hotlines to Report Harassment

A.I. alone can’t address coordinated campaigns, particularly against people from marginalized groups.

Photo illustration by Slate. Photo by Jose Luis Pelaez Inc/DigitalVision via Getty Images.

In late 2016, Andrew Anglin, founder of the white supremacist outlet the Daily Stormer, made a call to arms for trolls and supporters. He asked anti-Semites far and wide to harass Tanya Gersh and her family, after white nationalist Richard Spencer’s mother accused Gersh of extortion in a real estate deal. “Tell them you are sickened by their Jew agenda,” he said. “Her Twitter … doesn’t appear to be active—but her son has active accounts. You can hit him up.” This week, a federal judge in Montana recommended that Anglin be ordered to pay $14 million in damages to Gersh for the harm the trolling campaign inflicted on her and her loved ones.

This is a rare instance of a positive resolution to a coordinated online harassment and hate speech campaign, but it didn’t come from the platforms, which too often end up shielding the harassers instead. Online attacks against individuals, journalists, activists, and minority group leaders reveal major shortcomings in the ways that social media platforms deal with coordinated and politically motivated harassment. The platforms’ designs give inordinate power to those seeking to intimidate already marginalized groups through coordinated acts of hate and harassment.

There are two central levers to manipulate democracies online: disinformation and coordinated harassment. While much has been written about Russian advertisements, automated (bot) accounts, and “fake news,” coordinated harassment campaigns are not discussed enough. The fear induced by death threats and rape threats silences not only the named target, but also their communities. A seemingly innocuous comment on Twitter or Facebook can trigger an orchestrated troll assault on the individual, their friends, and family members. The fear of being attacked by a coordinated hate mob keeps many people from engaging in online discussion, which is exactly the kind of chilling effect these campaigns are engineered to produce. This chilling effect also disproportionately affects marginalized communities and dissenting voices—whose points of view are invaluable in shaping a robust democracy.

Recent research from our Digital Intelligence Lab at the Institute for the Future reveals the depth of the problem. Combining data analyses with ethnographic interviews of targeted individuals, we studied the human impact of disinformation and harassment on marginalized groups during the 2018 U.S. elections. We found ample evidence of coordinated harassment campaigns censoring journalists, activists, and politicians. We also found that these groups have lost faith that platforms will take measures to inhibit or mitigate such campaigns. One woman, a former political candidate interviewed for the Jewish American case study, faced tens of thousands of graphic hate messages after she was doxed by a white supremacist leader. Her Facebook, Instagram, and Twitter accounts, as well as her cellphone, were inundated with threats of rape and death, along with photoshopped anti-Semitic images containing her face. She told us that, beyond the simple stock complaint form on most social media sites, there was no clear avenue for her to work with the technology firms hosting the content to stem the flow of hate. She eventually had some success removing harassment on Instagram, but only after a friend who worked there put her in touch with someone who could help.

Whole social groups are seriously affected by platforms’ unwillingness to address coordinated hate attacks in a meaningful way. In our case study on Muslim Americans, for instance, the hashtag of the Council on American-Islamic Relations, #CAIR, was co-opted by Islamophobes, drowning out the group’s conversations about its core work of building constructive intergroup connections. A CAIR chapter director said that it was difficult to report the content because “there [was] just so much, it can’t be a full-time job.”

The few targeted people we interviewed who managed to get the attention of representatives of the social media platforms were told to (1) change their profiles to “private,” which accomplishes exactly the self-censorship the trolls want; (2) individually report or delete each piece of offending content, which can be retraumatizing or simply overwhelming given the sheer volume; or (3) request help from local law enforcement. These solutions are inadequate. While both Twitter and Facebook have made strides in proactively removing abusive content before it is reported by a user, often with the assistance of artificial intelligence, Twitter proactively takes down only 38 percent of the abusive content it eventually removes, and Facebook only about 14 percent.

Instead of depending on A.I., though, social media platforms can take human-centered actions to reduce the asymmetrical advantage of coordinated harassment campaigns. One action that could be particularly useful would be the creation of human-run “help line” teams that can be accessed by individuals and groups experiencing coordinated hatred. While the help lines would be publicly available, members of the help line teams would have direct, on-the-ground relationships with designated individuals from frequently targeted groups, including activists, journalists, academics, and members of civil society. Targeted individuals and groups could work directly with these focused teams at social media platforms during harassment campaigns to quickly remove content, reducing momentum early on and limiting impact. Doing so would blunt the power of those seeking to intimidate and silence.

A help line team like this would require at least four components:

• Clear and public definitions of “coordinated harassment campaigns.” These might include campaigns that focus specifically on attacking particular minority groups, as well as threats of violence or disenfranchisement efforts against protected groups. They must also zero in on tactics such as doxing. Twitter, for instance, now allows for greater specificity in reporting posts and users that share private information.

• Outreach and relationship development between the help line team and marginalized networks. This should occur before attacks happen, and the burden of making and maintaining these connections should fall on social media firms, not on under-resourced civil society or community groups. This kind of communication will give help line teams culturally informed knowledge of the issues marginalized communities face and will make those communities more aware of how to report coordinated harassment campaigns.

• Clear and consistent advertisement of the help line contact information during events that have historically led to coordinated harassment campaigns, such as elections, mass shootings, and natural disasters.

• Specialized transparency reports on coordinated harassment campaigns that indicate the number of such campaigns reported, the number assessed as legitimate, and the amount of time taken to remove harassing messages. While Facebook, Instagram, and Twitter publish transparency reports, those reports do not specifically track reports of (or success against) coordinated harassment campaigns.

While social media platforms may be reluctant to invest the resources necessary to establish culturally informed help lines composed of human beings with real connections to the communities they serve, they must face the reality that their current designs enable coordinated harassment. Creating connections with marginalized communities will not just enable faster responses to doxing campaigns and the like; it will also rebuild trust.

With 2020 just around the corner, the time to act is now. The establishment of a human help line charged with countering coordinated harassment, with connections to the actual communities being targeted, will have an outsize impact on democratic discourse. Supporting the people and communities most likely, and most often, to be attacked during pivotal political events will have a cascade effect, building a stronger global democracy, safer social media platforms, and more equitable online spaces. To put it in terms that speak to the platforms and their investors: it will be a “10x return on investment.”

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.