How WeChat Spreads Rumors, Reaffirms Bias, and Helped Elect Trump

China’s most popular messaging app has become the perfect tool for swaying public opinion.
Illustration by LiAnne Dias

Six months before Donald Trump won the United States election, Chinese-American blogger Xie Bin and seven others launched a WeChat page aimed at influencing Chinese-Americans to vote for Trump. They called it “The Chinese Voice of America” (CVA), and published several articles each week that drew from right-wing websites in English, as well as concerns people shared in Mandarin in WeChat groups. Headlines included: “Why I Will Vote for Trump: The Issue of Illegal Immigrants Must Be Resolved!” and “Obama publicly encourages illegal immigrants to vote in the election; Virginia paroled 60,000 critics to participate in the election!” CVA was intended to be an experiment, says Xie: “We said we could try it.” Within months, it had more than 32,000 followers on WeChat. Even more people shared its content in private WeChat groups and commented about it on Chinese-language websites that center around WeChat content.

In November, CVA’s candidate won the presidency of the United States. Though Clinton won the Asian-American vote overall, some polls report Trump did better among Asian-American voters, including Chinese Americans, than his Republican predecessor did in the last election. And anecdotal evidence suggests that Trump supporters were, at the very least, more vocal and impassioned—both online and off.

As debates raged over the role that fake news shared on social networks played in Trump’s election, WeChat barely registered in the conversation. Instead, concern mostly centered on social networks with large American audiences, especially Facebook. But CVA is evidence that fake news isn’t solely a Facebook problem. “When you look at the whole spectrum of mis- and disinformation…the scale and the complexity of the problem is clear,” says Claire Wardle, a fake news expert at First Draft News. “In order to start thinking about ways to solve [the] information pollution crisis…we need to understand the different types of misinformation…and the platforms upon which it is being disseminated.”

With social media activity increasingly moving to private communities on WeChat, Telegram, Signal, and WhatsApp, where information is harder to track and verify, understanding how news — and trust — flows on these closed networks is more important than ever.

Though CVA’s influence — or even that of WeChat, central to its rise — was minuscule on its own, and its tactics basic (especially in comparison to the more sophisticated fake news sites running out of Macedonia, bots that artificially amplified popularity, and potential Russian collusion in the election), CVA provides an important and overlooked lesson: The fight to sway public opinion in an election does not necessarily require high tech. It just requires an understanding of how to reach the right audiences, on the right platforms, with tailored and timely messages.

WeChat is the dominant messaging app for the world’s most populous country. By the end of 2016, it boasted 889 million monthly active users around the world, a leap of 35 percent in a year. WeChat’s influence is increasingly global, with over 100 million users — more than a tenth — living outside of China, by owner Tencent’s own measurement.

Created in 2010, WeChat was designed as a mobile messaging app for individuals and small groups. Often compared to Facebook and WhatsApp, it combines the profile features of the first and the mobile messaging of the second, with a greater emphasis on privacy — all personal profiles and both individual and group chats are visible by invitation only.

WeChat’s fast growth comes from the proliferation of group chats, according to Matthew Brennan of the China Channel. In the beginning, these groups were limited to 40 members, expanding over time to 100 and then 500. These small group sizes worked well for their purpose: They were intimate virtual meeting rooms for friends and acquaintances that typically knew each other offline as well as on, creating a built-in system of social vouching. In general, “people feel comfortable having private, small-group conversations,” says Dr. Fu King-Wa, a Hong Kong University journalism school professor and fellow at MIT Media Lab who observes Chinese digital media closely. He adds, “WeChat is basically a small group conversation.”

WeChat groups have become significant for political activism. In a country without the right to free assembly, the digital gathering spaces of WeChat groups offer a welcome alternative. “WeChat has come in at a perfect timing as a revolutionary tool,” said Shue Haipei, the founder of the National Council of Chinese-Americans and an early adopter of WeChat in America. “[It] revolutionized how to form a group.”

Most political conversations happen in smaller private groups, where they are less easily tracked. WeChat also offers businesses and other groups “official accounts”; once WeChat approves one of these accounts, any WeChat user can see it. CVA took the unusual step of becoming an “official account,” which let it gain more followers more quickly. It’s the equivalent of standing in a digital town square, shouting out to anyone who can hear over a megaphone. “If I publish on WeChat I can get thousands of hits,” says Xie. “If readers see something of their topic [of interest], they are going to spread it quickly to all their groups.” In a system designed to discourage too much influence, WeChat groups allow for a limited form of virality — albeit one that is hard to track.

But as the platform has become more popular, and group sizes themselves have grown, the reliability of its system of trust based on personal vouching has decreased. Group members’ online connections and interactions are no longer reinforced by offline relationships, making it harder to judge the dependability of any individual. Still, the illusion of closeness fostered by WeChat groups has persisted. It is this perceived credibility, combined with a chat format that does not visually distinguish messages from individuals, official accounts, and group chats, that has made WeChat so ripe for the spread of fake news.

“There’s a phrase in Chinese,” Zhang Yilin told me in late November 2016, “that translates to, ‘To be a good person, don’t be too CNN.’ ” An Environmental Protection Agency employee from Pennsylvania, Zhang voted reluctantly for Clinton because of their shared viewpoints on climate change. She didn’t rely on TV news to make her decision, explaining that CNN had gained a reputation for being “fake” and inventing evidence through its critical coverage of the 2008 Beijing Olympics.

The neologism speaks to a distrust of media that dates back even further. In China, the state media is widely known to tell only the official version of the story, as determined by the Chinese Communist Party. As a result, all news affiliated with official institutions is treated with skepticism.

In such a context of distrust, social media has stepped in to fill a void. But though opinions proffered on WeChat provide a source of information unfiltered through the lens of the state, the same guarantee cannot be made about the biases of the individual doing the sharing.

That’s where Xie saw an opportunity. By identifying and echoing his readers’ biases, he could gain a following and support for his candidate. “Sometimes I don’t put enough facts…but if I write articles more than 2,000 words, then fewer people read [them],” he says.

Xie shared one example of how he caters to his readers’ existing mindsets: a blog post published in September headlined, “Banning pork has quietly begun across the United States.” Similar claims had made the rounds of conservative sites since mid-August, and had already been debunked by Snopes. But for Chinese-Americans seeing this for the first time via CVA, the blog post touched a nerve, as pork has been culturally significant to Chinese cuisine for thousands of years. More recently, its increased affordability has been welcomed as a symbol of China’s economic reemergence. Thus, to read that pork had been banned to accommodate Muslim sensitivities (as the article claimed) was a personal affront — and just another sign of mainstream discrimination against Chinese-Americans.

That CVA’s claim had been debunked elsewhere was not readily apparent to readers. In general, WeChat’s design does not make it easy to fight biases or fake news. Information on the platform spreads quickly within and between WeChat groups, but the sources of information — and therefore their verifiability — are de-emphasized, to the extent that sources are almost completely ignored. As a result, credibility defaults to whoever shared the information last, and whether he or she can be believed.

The litmus test for truthfulness has moved from, “is this argument supported by evidence?” to, “is this argument shared by someone whose judgment I trust?”

In the wake of the US elections last fall, as Facebook raced to tackle its fake news problem, Tencent CEO Pony Ma addressed the topic at China’s World Internet Conference. The state-sponsored event gathered executives from IBM and Microsoft alongside Chinese internet executives and state representatives. A huge theme was “internet sovereignty” — the idea that internet governance should be left up to individual countries to decide.

Ma used the occasion to address the spread of internet falsehoods, offering up the strongest statement to come from any social media company: “Fake news spreading in US social media, which played a part in Trump’s victory, has sent an alarm to the international community.” He contrasted the implied lack of action by US social platforms with the efforts of his company: “Tencent has always been very strict in cracking down on fake news and we see it as very necessary.” Though Ma’s implication was that Tencent had been leading in a global fight to which the rest of the world was only now awakening, his remarks had a darker undertone, reaffirming the need for the internet surveillance and censorship practiced by the Chinese state.

Fake news in China must be understood “in the context of media control,” says Hong Kong University’s Dr. Fu. “It is usually defined by the authorities, but it’s really hard to identify if it’s really fake, or if it’s inconvenient.”

Lotus Ruan, who is part of the University of Toronto Citizen Lab team that studies censorship on WeChat, adds: “I personally would be cautious of the notion and discourse of ‘fake news’ because it can be used to crack down on dissident voices or discredit opinions that confront those in power.” In other words, fake news is whatever the authorities claim it is.

Those authorities include WeChat, because Chinese internet law holds the platforms themselves responsible for false information, requiring them to self-censor to stay in business. WeChat invites users to help by flagging false news stories and reporting individual profiles, official accounts, or groups — but the company makes the final call on truthfulness.

Last year, WeChat disabled more than 1.2 million links, deleted over 200,000 articles of alleged false information, and fined 100,000 accounts that either created or spread rumors, according to the Cyber Administration of China. For more serious falsehoods, there are legal consequences: In 2013, China’s top court ruled that if content is viewed by 5,000 internet users or reposted 500 times, the originator can be charged with defamation and jailed for up to seven years.

Not all fake news on the platform is removed for political reasons. Many stories relating to food safety, health, and consumer scams are genuinely false and dangerous to the public. To deal with these, in 2014, WeChat launched “Rumor Filter,” a Chinese-language official account that debunks rumors on a weekly basis. As with all accounts on WeChat, public and private, Rumor Filter’s reach is limited, by design. To benefit, users must actively seek it out and follow it before receiving updates; most never do.

Even when fake news is removed for legitimate reasons, the ultimate arbiter of truth is a company reflecting the interest of the state. And the reasons for its removal are rarely made clear. Says Citizen Lab’s Ruan, “the company…need[s] to have a better disclosure policy, better transparency in how they determine what is ‘fake news’, [and] what measure was taken to take them down.”

These days, the “Chinese Voice of America” is still active, though Xie has taken a break from both writing and politics. Even without him, CVA continues to publish multiple articles a week, providing conservative and/or pro-Trump commentary on the news and shaping the Chinese-American zeitgeist.

Without Xie, CVA has taken on new contributors, and some posts now contain a disclaimer about representing only the individual authors’ opinions rather than CVA as a whole, though these caveated articles are posted anonymously. Additionally, CVA recently started accepting “tips”— voluntary micro-payments for articles that readers enjoyed.

In other words, the official account that started out as an experiment in grassroots electoral organizing is professionalizing — just not toward the accepted standards of unbiased journalism. Rather, CVA is developing to meet the needs and norms of a closed network shaped by media control, where information travels like a game of telephone with each jump further obfuscating the original source.

The election may be over, but campaigns for influence continue to grow on WeChat, both inside China and among Chinese speakers in other countries. As the platform expands, one of the main challenges that WeChat and other closed networks will face is the difficulty of verifying information in a system that does not value verification.

In the meantime, trusted friends — or official accounts like CVA — step in. But when credibility comes from the sharer rather than the source, the danger of social media turning into an echo chamber shifts from a possibility to the norm.