Opinion

What’s missing from corporate statements on racial injustice? The real cause of racism.

An analysis of 63 recent statements shows that US tech companies repeatedly placed responsibility for racial injustice on Black people.
September 5, 2020
[Image: Sea Mist, Galleyhead lighthouse, West Cork, Ireland. Getty]

On August 31, Airbnb launched Project Lighthouse, an initiative meant to “uncover, measure, and overcome discrimination” on the home-sharing platform. According to the company, Project Lighthouse will identify discrimination by measuring whether a renter’s perceived race correlates with differences in the rate or quality of that person’s bookings, cancellations, or reviews. This project comes amid an outpouring of solidarity statements and policy changes from the tech industry in response to uprisings after the killing of George Floyd by Minneapolis police on May 25.

While these nods toward racial justice may be well-intentioned, they highlight a problem that casts doubt on whether the industry’s efforts to date can truly combat bias: the tendency to position race, not racism, as the cause of discrimination.

This way of thinking about inequality is emblematic of “racecraft,” a term coined by sociologist Karen E. Fields and historian Barbara J. Fields to describe “the mental terrain and pervasive beliefs” about race and racism in America. Though Fields and Fields outline many aspects of the concept, their basic proposition is that the very idea of race arises out of racist practices rather than biological realities. Racecraft, they write, is a “conjuror’s trick of transforming racism into race, leaving black persons in view while removing white persons from the stage.”

A good example can be seen in Airbnb’s introduction to Project Lighthouse, which states that the company was “deeply troubled by stories of travelers who were turned away by Airbnb hosts during the booking process because of the color of their skin.” Were those guests really turned away because of their skin color, or because their prospective hosts were racist?

The same maneuver can be seen in a statement from Adam Mosseri, the head of Instagram, in which he says the platform’s efforts to ensure that Black voices are heard “won’t stop with the disparities people may experience solely on the basis of race.”

Racecraft, as conceptualized by Fields and Fields, is what allows Airbnb and Instagram to transform an aggressive act—racism—into a mere category: race. This sleight of hand positions race as the problem, allowing companies to absolve themselves of responsibility for racism. It also perpetuates the alluring myth that abolishing racial categories will lead to the post-racial society some hoped would follow the election of Barack Obama to the US presidency in 2008.

The truth companies need to grapple with, however, is that racist actions—not racial categories—are what cause discrimination.

I found linguistic evidence of racecraft throughout 63 public-facing documents that I collected and analyzed from Airbnb, Facebook, Twitter, Instagram, TikTok, and YouTube, all issued between May 26 and June 24 of this year. In a moment marked by racial injustice, these companies were reluctant to even use the word “race,” regularly opting to use “diversity” instead.

These statements (including those from TikTok and Facebook) also explicitly address Black people far more frequently than white people by using phrases such as “We stand with the Black community.” In 63 statements, Black people and communities were referenced 241 times while white people were referenced only four times.

By so rarely naming whiteness, these statements normalize the ideas that white people are raceless and that only those oppressed by the racial structure need have any interest in dismantling it. This language also suggests that dismantling racism doesn’t require confronting those privileged by racism.

This critique might seem nitpicky, but the language people use to talk about racism shapes how they understand what’s happening and which solutions sound appropriate. As others have pointed out, for example, the term “officer-involved shooting” is a passive phrasing that deemphasizes police officers’ use of deadly force, obscuring their role in state violence. In the same way, the language in these tech company statements obscures the central role that whiteness and racism play in the injustices Black people endure.

Such obfuscation spills over into the solutions that companies propose. Project Lighthouse, for example, is built to examine the (Black) people who experience racism on Airbnb rather than the (white) people who are responsible for perpetuating it. This again positions race, not racism, as the problem to be overcome. By focusing on race as a category, Airbnb has inscribed the mental tricks of racecraft into its project.

Tech companies and social-media platforms need to understand that fighting racism cannot start and end with statements of solidarity and technical fixes.

Real change begins with increasing the number of people from underrepresented groups in executive positions, which both Airbnb and Facebook pledged to do in their statements. But tech companies cannot think about Black employees as just a convenient resource in times of racial upheaval. In crafting their public statements, many of these companies relied on Black employee groups for assistance. All of Twitter’s statements, for example, were written by employee resource groups—but, as the Washington Post has reported, this work was often unpaid, fell outside employees’ normal duties, and had potential negative ramifications for them.

Bland statements about diversity and inclusion fail to address the long-standing anti-Black injustice that persists in American society. The tech industry must talk about racism in ways that implicate systems of power and call attention to the systemic inequality and racial injustice that Black people face. Only then can the industry produce solutions that reduce harm.

With ongoing unrest in Kenosha, Wisconsin, after yet another case of racialized police violence, we’re sure to see more corporate statements regarding racial justice. Without more awareness of racecraft and its harms, they’re bound to repeat the same mistakes.

Amber M. Hamilton is a PhD candidate in sociology at the University of Minnesota and an affiliate of the Microsoft Research Social Media Collective. Her work focuses on the intersection of race and technology.
