How We Built Our Bubble

And what the social media echo chambers of our own design meant for the 2016 election

Facebook’s “unfollow” function changed everything. The tool, first announced in 2013, gave users the ability to quietly unsubscribe from a friend’s feed without said person knowing. To those friends, everything looked the same: You simply didn’t comment on their political rants or like their superfluous baby photos anymore. For all they knew, you were still seeing their posts — except you weren’t. “Unfollow” was a cloak, hiding undesirable content while allowing people to stay friends on Facebook — the dreaded “defriend” is so final and so cruel that it has obvious real-life implications — without ever having to confront a difference of opinion.

I started unfollowing people for the myriad reasons everyone does: They posted too often, spammed Facebook with links to online savings or games, shared altogether too many photos of their kids, and insisted on apprising the world of how much they loved their significant other. Sometimes I used it on people who made me miss home too much or posted one too many ViralNova links. But mostly I used it to block those who promoted the hateful rhetoric in Donald Trump’s presidential campaign.

The mute function on Twitter works in the same way; both are silent gestures that remove someone’s content from your view. You’ll never have to have a conversation about why you defriended or unfollowed someone. There will be no discussion of your differing ideologies — the other person can’t even explain why they disagree. There will be no meeting ground or understanding, and no one will come away wiser, better, or changed. It is the closing of a loop.

Instead, we construct ever-narrower networks within these platforms, composed of people who think and talk and post like we do. The result, of course, is a safe sounding board that reflects our own views back at us — the things that make us feel good, and correct.

And then, when something like last Tuesday’s election result happens, that curated reality comes crashing down. A Trump-free Facebook does not mean there’s a Trump-free America.

The internet echo chamber is not a new phenomenon; researchers have been studying it for years. “These patterns have concerned many theorists of democracy, who have argued that exposure to a diverse range of viewpoints is crucial for developing well informed citizens … who are also tolerant to the ideas of others,” writes the University of Oxford’s Jonathan Bright in his paper “Explaining the emergence of echo chambers on social media: the role of ideology and extremism.” “By contrast, exposure to only like-minded voices may contribute towards polarization towards ideological extremes.”

These extremes segregate and feed themselves. Facebook allows us to dig in our heels. Now, after one of the most divisive elections in memory, experts are worried we won’t be able to get ourselves out of this cycle — that the level of discourse happening on social media is, in the most internet-friendly terms, a dumpster fire from which there’s no escape.

Echo chambers existed long before this election. Before Facebook or the internet, your community or town or school or family could be one.

Social media, though, has made the discussion larger and smaller all at once. Yes, you can use it to connect with nearly anyone from anywhere, allowing you to build a much larger network than ever before. But there is a host of tools and algorithms at play to make sure that your experience is a positive one. Some of them are built into the infrastructure of a platform: Twitter suggests users to follow; Facebook tends to show you things you’ll like. (Remember, Facebook logs your political party affiliation.) The basics of the algorithms are simple: What you look at and like is logged, then more of the same is fed to you. There are other influences, of course — simply check out the ad preferences Facebook has stored for you. But essentially, it comes down to “more of the same, please.”
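
Neither platform publishes its ranking code, but the feedback loop described above is simple enough to sketch. Here is a minimal, purely illustrative version in Python; the topic labels, the engagement counter, and the scoring rule are all invented for the example, and the real systems are proprietary and vastly more elaborate:

```python
from collections import Counter

class FeedRanker:
    """A toy engagement loop: whatever you interact with, you see more of."""

    def __init__(self):
        self.affinity = Counter()  # topic -> how often the user engaged with it

    def log_engagement(self, post):
        # A like, click, or comment bumps that topic's weight.
        self.affinity[post["topic"]] += 1

    def rank(self, candidates):
        # Topics you already engage with float to the top, so
        # dissenting content quietly sinks out of view.
        return sorted(candidates, key=lambda p: self.affinity[p["topic"]], reverse=True)

ranker = FeedRanker()
for _ in range(5):
    ranker.log_engagement({"topic": "anti-Trump"})
ranker.log_engagement({"topic": "pro-Trump"})

feed = ranker.rank([{"topic": "pro-Trump"}, {"topic": "anti-Trump"}])
print([p["topic"] for p in feed])  # ['anti-Trump', 'pro-Trump']
```

Run long enough, a loop like this never has to hide anything outright; it just keeps rewarding the reader’s existing tastes until the other side effectively disappears.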

Unfollowing and muting are tools these networks built for good reasons — in part, they curb the harassment many of us face online. How much we rely on them is a personal choice. Wiping your internet experience clean of hateful speech is a wise self-preservation move, but it also provides a false sense of consensus.

Seeing every corner of, say, an election can help voters figure out how to make their votes count. This is called “strategic voting” or “tactical voting” — a recent study examined how it operates in social networks. In strategic voting, you start out with the candidate you want — maybe Bernie Sanders — but as time goes on, and Facebook reveals how other people are going to vote, you may switch your vote to Hillary Clinton to deliver the best possible outcome. It becomes considerably harder to do that when we self-censor or filter out opinions. This is not a stunning revelation: The more information one has, the more educated one’s decision can be. The fallacy of social networks is that because we can see the opinions of more people than ever before, we believe our votes are as strategic as they should be. In fact, we’re wrong.

Alan Tsang and Kate Larson from the Cheriton School of Computer Science at the University of Waterloo in Canada examined how strategic voting can work inside a social media platform. The researchers work in multi-agent systems, a subfield of artificial intelligence that looks at how groups coordinate and work together. In their study, Tsang and Larson researched social choice theory. “The most direct application, and certainly the one that is on our minds now, is electing a president,” Tsang told me. “We looked at when a group of people choose an outcome when they all have expressed different preferences for that outcome.”

Tsang and Larson created a simulated social network with multiple rounds of voting. At the end of each round, all of the users showed their ballots. This would cause voters to change their vote as the rounds went on, so that they could get the best possible outcome. It’s game theory applied to Facebook and elections. “Let’s say you really like the Pirate Party, but the Pirate Party, you realize, is not going to win the election, so your decision is ‘Do I vote for them anyway with the understanding that it’s unlikely they will win?’” Tsang explained. “Probably not, so your decision is ‘Do I stick with them or vote for my second choice?’”
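
Their published model is more sophisticated than this, but the core dynamic of voters revising their ballots after seeing everyone else’s can be sketched in miniature. In the toy version below, everything beyond that dynamic is a simplifying assumption of mine, not a detail of their study: plurality voting, full visibility of all ballots, and a simple “defect to a front-runner” heuristic.

```python
import random
from collections import Counter

CANDIDATES = ["Pirate", "Hillary", "Trump"]

random.seed(1)
# Each voter has a sincere preference ranking, best candidate first.
voters = [random.sample(CANDIDATES, k=3) for _ in range(101)]
ballots = [prefs[0] for prefs in voters]  # round 0: everyone votes sincerely

for rnd in range(1, 5):
    tallies = Counter(ballots)  # everyone shows their ballot
    frontrunners = [c for c, _ in tallies.most_common(2)]
    for i, prefs in enumerate(voters):
        if ballots[i] not in frontrunners:
            # My candidate can't win, so I switch to whichever
            # front-runner I sincerely prefer.
            ballots[i] = min(frontrunners, key=prefs.index)
    print(f"round {rnd}: {dict(Counter(ballots))}")
```

After a round or two, the Pirate Party’s sincere supporters have all migrated to their preferred front-runner, which is exactly the calculation Tsang describes, and exactly the calculation that breaks when your feed misreports who the front-runners are.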

I posed a hypothetical to Tsang. Let’s say a voter is on Facebook, and 90 percent of her feed, in the waning months of the election, is pro-Hillary. She has unfollowed Trump supporters because the KKK endorsed him. Let’s say she was thinking of voting for Green Party candidate Jill Stein, or was going to vote for Hillary but didn’t feel a need to compel others to. Or maybe she just didn’t vote. Following the logic of the study, any of those choices looks quite safe, because her least favorable outcome — Trump — seems highly unlikely inside the online world she’s constructed. Tsang says that’s how it could work. (This might help explain your third-party-voting friends who seemed to think they were safe from a Trump win.)

“The phenomenon is something called network homophily, and it’s not specific to Facebook,” Tsang said. “But people naturally do this. You make friends with people who are similar to you. Common interests, backgrounds, political interests.”

Facebook not only supports homophily, it encourages it.

“Facebook can accelerate this,” he said. “I can only speculate, but certainly Facebook and electronic communication in general accelerates the rate at which we connect with people of our choosing, and that might make this happen more or faster.” He pointed out that it can help us connect with people who think differently too. Human nature tends to get in the way of that, though.
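
A toy simulation makes the stakes of that homophily concrete. Suppose people befriend the like-minded 80 percent of the time (an invented parameter for illustration, not a measured one) in a population that is actually split down the middle; each person’s feed then reports a consensus that simply isn’t there:

```python
import random

random.seed(7)
N = 1000
views = [random.choice(["A", "B"]) for _ in range(N)]  # roughly a 50/50 split

def sample_feed(i, k=50, bias=0.8):
    """Pick k 'friends' for person i, favoring the like-minded."""
    same = [j for j in range(N) if j != i and views[j] == views[i]]
    other = [j for j in range(N) if views[j] != views[i]]
    return [random.choice(same) if random.random() < bias else random.choice(other)
            for _ in range(k)]

me = 0
feed = sample_feed(me)
agree = sum(views[j] == views[me] for j in feed) / len(feed)
print(f"share of the population that agrees with me: {views.count(views[me]) / N:.0%}")
print(f"share of my feed that agrees with me:        {agree:.0%}")
```

The gap between those two percentages is the hypothetical Stein voter’s false sense of safety, rendered in two lines of output.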

“The system we’re looking at allows people to revise their ballots. We’re looking at a situation where you can gather information and therefore change your mind,” Tsang said. “You want agents to overcome the homophily in the network and get information from people who do not agree with them.”

OK, so how? The easiest answer would be to add many people from different places and backgrounds whose political opinions differ from yours. That way, your Twitter and Facebook feeds would become a cornucopia of information, offering a far more accurate depiction of how people in the country actually think. But that’s not in the platforms’ best interests, and it isn’t easy. You’d have better luck finding diversity of opinion on Facebook by friending 10 strangers than by letting the platform’s tools guide you to new friends. Both Facebook and Twitter operate largely off your preferences and activity; their recommendations of whom to follow are deduced from information the service has already gleaned.

But there is a larger problem than finding a way to connect Hillary-supporting West Coasters with Trump-backing Midwesterners. The level of discourse online may simply be doomed.

“My concern is that I don’t think you can deliberate with people who are writing on social media. I don’t think it’s going to be a productive space to try and engage,” said Shawn Parry-Giles, who teaches rhetoric and politics at the University of Maryland and is the author of Hillary Clinton in the News: Gender and Authenticity in American Politics. “I think all sides need to be ready to come together. Maybe there need to be new platforms for people who really do want to try to come together. It isn’t happening in Congress and it isn’t happening on Facebook. I just don’t think people have seen a positive deliberative space for a long time, and it’s now really hard. I don’t know if it can happen right now.”

I asked Parry-Giles if this election might shake us up, though, and help us unlearn some of our echo chamber tendencies. “I think it might go the other way and get worse,” she said. “A lot of people are just unfriending people or just not accessing that [side of the conversation]. They are going to undo so much of the progress of the Obama administration and it’s hard to have deliberation when you’re scared that you are going to lose your rights and protections. I think this is the end of naivete.”

Parry-Giles said she doesn’t feel that she’s able to engage on Facebook. “Psychologically, for me to read all of this hateful stuff … I just can’t.” I know what she means. Each Facebook post or tweet I’ve seen telling Clinton supporters to “get over it” or denying that hate crimes are happening at an increased rate as a result of the election is painful. It can feel impossible to have an intelligent discussion with their authors via Facebook or Twitter.

“In some ways, if you just consume that hyper-negativity, it could make it even more divisive,” Parry-Giles says, “because there are people out there who aren’t that extreme, that you can have greater conversations with.” She tells me about a former student, a moderate Republican who worked for Sarah Palin and Mitt Romney, whom Parry-Giles saw again at a recent panel talk. (Despite the student’s conservative leanings, she voted for Clinton.) After the talk, a group of Republican students approached the former student, desperately wanting to continue the conversation. “They hadn’t had an opportunity to speak to someone who was … normal,” Parry-Giles says. “Their political views were being forgotten, and there are people like them, but you just see the ones being provocative and incendiary.”

The internet’s hate speech problem is not a new one, but it’s become so overwhelming that it’s caused some people — those best suited to intelligently and positively engage in political discussions — to hold back. The pointlessness of conversations in such an environment is defeating. It’s important, Parry-Giles says, for us to relearn our behaviors. “In the college community, we have to show up and help students learn how to deliberate, and I think we’ve lost that commitment to deliberation,” she says.

“I don’t think we know how to talk to people who don’t agree with us anymore.”

There is yet another kink in this complicated online reality: fake news. Last week at the Techonomy conference, Facebook CEO Mark Zuckerberg said, “The idea that fake news on Facebook influenced the election … is a pretty crazy idea.” It seems that hardly anyone agrees with that statement, including Parry-Giles. “I think there were very high levels of people posting incorrect information, and … it’s irresponsible for him to say it has no effect. Maybe it hardens positions already out there … social media has magnified it and made it harder for people to separate what’s real and what’s not real. The repetition starts to make it true.” (Facebook and Twitter both declined to speak on the record for this story.)

According to a new Gizmodo report, Facebook is having an internal conversation about how it deals with partisan and fake news. And a recent New York Times story explored the gap between how Facebook sees itself and how it’s actually used as a news source. A former Facebook employee wrote this week that the platform is grossly underestimating its fake news problem, though he “trust[s]” it’s something the company will work on.

So not only do users need to contend with echo chambers and with divisive voices that are often also the loudest, but they also have to work harder to differentiate between real and fake stories. While Facebook has spent much of the last week defending itself, the company’s dismissal of its power as a news influencer is hard to justify: The social network sees itself as an agent of political change in a more flattering scenario — like the Arab Spring — but feels no accountability in this election. Beyond fake news, regular old news has echo chamber problems of its own, and social media’s position as a major news distributor is essential to understanding them. Tsang and Larson’s study on social media and strategic voting notes that news outlets tend to have a problem similar to that of social media users: “[Another study] examined the link relationships between sites of top conservative and liberal bloggers discussing political issues, and found … sites were much more likely to discuss and reference each other when they shared political views.”

This all paints a bleak picture: More people are using social media, at higher levels of engagement, than ever before, and it is increasingly a primary news source for many of them. When the level of discourse sinks so low that the people best informed and most willing to have discussions stop having them, and when the content being passed around is at worst fake and at best filtered by homophily, the echo chamber seems unbreakable. What do we do?

Jonathan Bright, the Oxford Internet Institute research fellow who wrote the echo chamber paper quoted earlier, recognizes that there is a serious problem, but also says there’s hope to be found in our face-to-face interactions and in social media’s ability to broaden our networks. “In your offline life (if we can call it that anymore), there are some structural constraints about who you associate with,” he told me in an email. “Who you work with, where you live, who your children meet at school … you can’t change this stuff at a click of a button.”

It’s easy to dismiss whom and what we don’t like online, and harder to do so in real life; Bright said that in some cases we actually have more diversity around us offline.

“If you look at the rise of ISIS, for example, which has been successful at recruiting people from all over Europe, many would argue that that type of ideology would have a harder time spreading to that type of person in the pre-internet age.”

Bright says that keeping in touch with “weak ties” — distant friends or people you’ve met only once or twice — can shatter the echo chamber, provided you don’t silence them. “I am generally positive about the technology in terms of potentially allowing more exposure to diverse views than you might get offline.”

He also suggested that last week’s low voter turnout could indicate the opposite of a filter bubble: “At a large scale this does not seem to indicate a country where everyone is firmly locked in an echo chamber and convinced of the righteousness of one candidate. So it may be that we don’t want to completely destroy echo chambers and that society needs people who are a little bit myopic, because they are the ones who act.”

Another study about emotional messaging in social media posts supports this theory. “In our study, members’ experience of betrayal and anger were intensified and amplified by the fact they were surrounded by others encouraging these emotions,” said Madeline Toubiana, one of the study’s authors and an assistant professor in the School of Business at the University of Alberta. This sort of reaction can motivate people into offline activities, like protesting or rallying for their cause. “The benefit of this, in our study, was that a group of individuals that previously had limited means to influence organizational decision-making were able to mobilize and have their voices and concerns heard.”

Bright agrees that there is something damaging our concept of offline reality. “There isn’t going to be an easy fix,” he said. “Social media network owners are very cautious about changing key mechanisms because it doesn’t take much for people to abandon platforms. … The fix for this is at a deeper level: more education, more university-level education where people are encouraged to criticize rather than just repeat.”

Internet platforms certainly have a responsibility to try to police hate speech, and they will have to grapple with their positions as they apply to policing fake content. And there is also an onus on us to stay engaged, even when it’s so much more pleasant to mute and unfollow. Perhaps most of all what we have learned — and will continue learning over the next four years — is that as easy as it might be to manipulate our online experiences, having the harder offline conversations matters.