

How data scientists are using AI for suicide prevention

The Crisis Text Line uses machine learning to figure out who’s at risk and when to intervene.

Brian Resnick is Vox’s science and health editor, and is the co-creator of Unexplainable, Vox's podcast about unanswered questions in science. Previously, Brian was a reporter at Vox and at National Journal.

When horrible news — like the deaths by suicide of chef, author, and TV star Anthony Bourdain and fashion designer Kate Spade, or the 2015 Paris attacks — breaks, crisis counseling services often get deluged with calls from people in despair. Deciding whom to help first can be a life-or-death decision.

At the Crisis Text Line, a text messaging-based crisis counseling hotline, these deluges have the potential to overwhelm the human staff.

So data scientists at Crisis Text Line are using machine learning, a type of artificial intelligence, to pull out the words and emojis that can signal that a person is at higher risk of suicidal ideation or self-harm. The computer tells them which texters on hold need to jump to the front of the line for help.

They can do this because Crisis Text Line does something radical for a crisis counseling service: It collects a massive amount of data on the 30 million texts it has exchanged with users. While Netflix and Amazon are collecting data on tastes and shopping habits, the Crisis Text Line is collecting data on despair.

The data, some of which is available here, has turned up all kinds of interesting insights on mental health. For instance, Wednesday is the most anxiety-provoking day of the week. Crises involving self-harm often happen in the darkest hours of the night.

CTL started in 2013 to serve people who may be uncomfortable talking about their problems aloud or who are just more likely to text than call. Anyone in the United States can text the number “741741” and be connected with a crisis counselor. “We have around 10 active rescues per day, where we actually send out emergency services to intervene in an active suicide attempt,” Bob Filbin, the chief data scientist at CTL, told me last year.

Overall, the state of suicide prevention in this country is extremely frustrating. The Centers for Disease Control and Prevention recently reported that between 1999 and 2016, almost every state saw an increase in suicide. Twenty-five states saw increases of 30 percent or more. Yet even the best mental health clinicians have trouble understanding who is most at risk for self-harm.

"We do not yet possess a single test, or panel of tests that accurately identifies the emergence of a suicide crisis," a 2012 article in Psychotherapy explains. And that’s still true.

Doctors understand the risk factors for suicidal ideation better than they understand the risk of physical self-harm. Complicating matters, the CDC finds that 54 percent of suicides involve people with no known mental health condition.

But we can do better. And that’s where data scientists like Filbin think they can help fill in the gaps, by searching through reams of data to determine who is at greatest risk and when to intervene. Even small insights help. For example, the Crisis Text Line finds that when a person mentions a household drug, like “Advil,” it’s more predictive of risk than if they use a word like “cut.”

Sending help to people in crisis is just the start. The hotline hopes its data could one day actually help predict and prevent instances of self-harm from happening in the first place. In 2017, I talked to Filbin about what data science and artificial intelligence can learn about how to help people. This conversation has been edited for length and clarity.

The Crisis Text Line is a 24/7 service to help people through their toughest times

Brian Resnick

Tell me about the service Crisis Text Line provides.

Bob Filbin

The idea is that a person in crisis can reach out to us — no matter the issue, no matter where they are — 24/7 via text and get connected to a volunteer crisis counselor who has been trained. The great thing about that is people have their phones everywhere: You can be in school, you can be at work, and whenever a crisis occurs, we want to be immediately accessible at the time of crisis.

[Suicide attempt] is the greatest type of risk that we're trying to prevent.

Brian Resnick

You’re also collecting data on these interactions. Why is that necessary to run the service?

Bob Filbin

We have 33 million messages exchanged with texters in crisis. We have had over 5,300 active rescues [where they’ve dispatched emergency services to someone attempting suicide] and the entire message and conversations associated with those.

Our data gives the rich context around why a particular crisis event happened. We get both the cause and the effect, and we get how they actually talk about these issues. I think it’s going to provide a lot more context on how we can actually spot these events and prevent them.

How can we spot these events before they occur? Seeing the actual language that people use is going to be critical to [answer] that.

From the very beginning, we believed in the idea that our data could help to improve the crisis space as a whole. By collecting this data and then sharing it with the public, with policymakers, with academic researchers ... it could provide value to people in crisis whether or not they actually used our service.

How the Crisis Text Line deals with “spike events”

Brian Resnick

So, someone in crisis texts your service. I’m curious about the specific data you’re collecting from that interaction.

Bob Filbin

There are three types of data we’re collecting:

First: the conversation — the exchange between a texter and a crisis counselor, and then a lot of metadata around those conversations (timestamps; the people who were involved: the crisis counselor, the texter). If the texter comes back, we know, okay, this is a texter that has used our service before, or this crisis counselor has had other [interactions with the texter].

Second: After the conversation, we ask our crisis counselors [questions like], “What issues came up in the conversation?" and, "What was the level of risk?"

The third piece: a set of survey questions to the texter asking for feedback: Was this conversation helpful? What type of help did you experience? What did you find valuable? What would you hope would be different for other people in crisis? Or the same.
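To make those three pieces concrete, here is a minimal sketch, in Python, of how a single conversation record combining them might be structured. The class and field names are illustrative assumptions, not Crisis Text Line’s actual schema.

```python
# Illustrative sketch only -- these names are assumptions, not CTL's real schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Message:
    sender: str          # "texter" or "crisis_counselor"
    text: str
    timestamp: datetime

@dataclass
class ConversationRecord:
    # 1. The conversation itself, plus metadata that links repeat texters
    #    and counselors across conversations.
    texter_id: str
    counselor_id: str
    messages: List[Message] = field(default_factory=list)
    # 2. The crisis counselor's post-conversation assessment.
    issues: List[str] = field(default_factory=list)   # e.g. ["anxiety", "self-harm"]
    risk_level: Optional[str] = None                   # e.g. "imminent", "moderate"
    # 3. The texter's optional feedback survey.
    was_helpful: Optional[bool] = None
    feedback: Optional[str] = None
```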

Brian Resnick

So how has this data helped you do your job better?

Bob Filbin

We've seen problems arise, and then we think about how to use data to solve these problems.

So one problem is spike events.

A spike is when we see a huge influx in texter demand — this happens to crisis centers all the time. When Robin Williams died by suicide, around the election, around the Paris terrorist attacks in 2015, or when somebody who used the service and found it beneficial shares that and it goes viral — we’ll have a big spike event. Our daily volume will more than double.

Brian Resnick

So how do you respond to that?

Bob Filbin

Traditionally, crisis centers respond to people in the order in which they come into the queue. But if you double your volume instantly, that's going to lead to long wait times.

We want to help the people who have the highest-severity cases first: We want to help somebody who's feeling imminently suicidal before somebody who's having trouble with their girlfriend or boyfriend or something.

We trained an algorithm. We asked, “What do texters say at the very beginning of a conversation that is indicative of an active rescue?”

So we can triage and prioritize texters who say the most severe things.
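Here is a rough sketch of what that kind of triage could look like in code, assuming a generic bag-of-words classifier trained on the opening messages of past conversations, labeled by whether they ended in an active rescue. The scikit-learn pipeline and the toy data are illustrative, not Crisis Text Line’s production system.

```python
# Illustrative sketch only -- not Crisis Text Line's production system.
# Train a text classifier on the opening messages of past conversations,
# labeled by whether they ended in an active rescue, then use its risk
# score to order the waiting queue (highest estimated risk first).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set, for illustration only.
first_messages = [
    "i just took a whole bottle of advil",
    "i have the pills in my hand",
    "my girlfriend broke up with me and i feel awful",
    "i failed my exam and i am so stressed",
]
ended_in_active_rescue = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(first_messages, ended_in_active_rescue)

def triage(waiting_texters):
    """Order (texter_id, first_message) pairs by estimated severity, highest first."""
    scored = [
        (model.predict_proba([message])[0][1], texter_id)
        for texter_id, message in waiting_texters
    ]
    return [texter_id for _, texter_id in sorted(scored, reverse=True)]

print(triage([("A", "i have pills and i want to die"),
              ("B", "rough day at school")]))
```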

How machine learning can help

Brian Resnick

When you say you're “asking” these questions, you mean you’re using a computer? Machine learning or some type of AI?

Bob Filbin

We're asking a computer to figure it out.

[We ask it], basically: Look at active rescue conversations and see whether there is anything different about the initial messages that texters send in those.

Brian Resnick

So what type of words did the computer pick out that indicated imminent risk?

Bob Filbin

Before we used the computer, we had a list of 50 words that [we thought] were probably indicative of high risk. Words like “die,” “cut,” “suicide,” “kill,” etc.

When a data scientist ran the analysis, he found thousands of words and phrases indicative of an active rescue that are actually more predictive than the words on our original list.

Words like “Ibuprofen” and “Advil” and other associated words [i.e., common household drugs] were 14 times as predictive of an active rescue as the word “suicide.”

Brian Resnick

Wow.

Bob Filbin

Even the crying face emoticon — that’s 11 times as predictive as the word “suicide” that somebody's going to need an active rescue.
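One plausible way to read “X times as predictive” is as a ratio of active-rescue rates: how often conversations whose first messages contain a given token end in an active rescue, compared with conversations containing a baseline word like “suicide.” The sketch below shows that calculation on hypothetical toy data; the function names, records, and numbers are illustrative, not Crisis Text Line’s actual analysis.

```python
# Illustrative sketch only: one plausible way to measure how "predictive" a
# token is -- compare the active-rescue rate among conversations whose first
# messages contain the token against the rate for a baseline word like "suicide".

def rescue_rate(conversations, token):
    """Fraction of conversations containing `token` that ended in an active rescue."""
    matching = [c for c in conversations if token in c["first_message"].lower()]
    if not matching:
        return 0.0
    return sum(c["active_rescue"] for c in matching) / len(matching)

def relative_predictiveness(conversations, token, baseline="suicide"):
    """How many times as predictive `token` is compared with `baseline`."""
    base = rescue_rate(conversations, baseline)
    return rescue_rate(conversations, token) / base if base else float("inf")

# Hypothetical records: {"first_message": str, "active_rescue": 0 or 1}
conversations = [
    {"first_message": "I took a bunch of ibuprofen", "active_rescue": 1},
    {"first_message": "I keep thinking about suicide", "active_rescue": 0},
    {"first_message": "Thinking about suicide again", "active_rescue": 1},
    {"first_message": "ibuprofen bottle is empty now", "active_rescue": 1},
]
print(relative_predictiveness(conversations, "ibuprofen"))  # prints 2.0 with this toy data
```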

This allows us to do a task that we never otherwise would have been able to do. We just don't have staff to triage. And most [other crisis] centers don't either. So then people who are high-risk get stuck waiting for two hours.

That’s the whole idea and the power, really, of AI — it gets smarter over time. We started at a baseline of “here's what crisis centers do already. How can we use data to improve on that?” We’re never rolling out a product that decreases performance. And that’s something we always monitor.
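A minimal sketch of that kind of guardrail might look like the function below, which approves a candidate triage model only if it does at least as well as the current baseline on held-out, labeled conversations. The metric, the comparison rule, and the function names are assumptions for illustration, not a description of Crisis Text Line’s actual evaluation process.

```python
# Illustrative sketch only: roll out a new triage model only if it does not
# perform worse than the current baseline on a held-out, labeled data set.
from sklearn.metrics import recall_score

def should_roll_out(baseline_model, candidate_model, held_out_messages, held_out_labels):
    """Approve the candidate only if its recall on active-rescue cases is at
    least as good as the baseline's (missing a high-risk texter is the
    costliest error here)."""
    baseline_recall = recall_score(held_out_labels, baseline_model.predict(held_out_messages))
    candidate_recall = recall_score(held_out_labels, candidate_model.predict(held_out_messages))
    return candidate_recall >= baseline_recall
```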

What about privacy?

Brian Resnick

How do you keep all this information private?

Bob Filbin

Privacy of our texters is our No. 1 concern. One really concrete way we do that is we have this keyword that texters can text at any time — it's "loofah" — which allows you to scrub your data at any time. (It's kind of a pun. Or a lame joke.) We want to make sure our texters are in charge of the data. And we do have an algorithm we built to scrub personally identifiable information. It removes names, locations, phone numbers, emails, birth dates, or other numbers like that.

This is all after the data is encrypted. So there are many layers of security and privacy that we go through to protect the data.
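For a sense of what pattern-based scrubbing involves, here is a deliberately simple sketch that masks phone numbers, email addresses, and date-like strings with regular expressions. The patterns and placeholder tokens are illustrative; Crisis Text Line’s actual algorithm, which also removes names and locations, is necessarily more sophisticated than this.

```python
# Illustrative sketch only -- a much simpler regex-based scrubber than CTL's
# actual algorithm. It masks phone numbers, email addresses, and date-like
# strings; real de-identification (names, locations) needs far more than regexes.
import re

PII_PATTERNS = [
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def scrub(text: str) -> str:
    """Replace easily pattern-matched PII with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("call me at 555-867-5309, my birthday is 4/12/2001"))
# -> "call me at [PHONE], my birthday is [DATE]"
```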

Brian Resnick

A lot of therapists and people who work in the therapy space have a long-held aversion to data, relying instead on their experience and intuition. (The Atlantic recently had an interesting feature on this.) What do you say to people who may be skeptical of using data in crisis counseling?

Bob Filbin

So the underlying philosophy that I'm always coming back to for data science is, "Can we provide the most possible value back to our users?" And the way you do that is by learning from every interaction.

That's true about the smartest people in the world, right? You reflect on an experience, you learn from it, and then you behave differently next time. I think data is a way to do that at scale. We're saying no human can understand the scope and reflect on the performance of the service as a whole, so we need data to allow us to reflect and improve.

We're helping people who are in imminent crisis, maybe imminently suicidal. And so we want to make sure we put any recommendations from our computer systems through the lens of a person, that a person is evaluating whether or not it's the correct decision.

Brian Resnick

What do you mean by “put any recommendations from our computer systems through the lens of a person”?

Bob Filbin

One of the first major insights from our data was that we had about 3 percent of our texters using 34 percent of our crisis counselors' time — draining a huge percentage of our crisis counselors’ resources.

They were treating us, in some sense, as a replacement [for] therapy, not as a crisis service.

That was a problem that was exposed in the data, but the solution wasn't intuitive. It was actually kind of a philosophical decision as an organization to say, we're here for people in crisis, we aren't replacement therapy, that's a very different-looking service.

Our goal is going to be to identify these [frequent] texters who may need [long-term] therapy.

The data really exposed that difficult question and forced us as an organization to make a philosophical stand that we're here for people in crisis. And so, really, the data and that revelation lead to difficult decisions, and that does happen.

Brian Resnick

Are you scared of using data alone to make that decision to nudge people off the service? What if one of those circlers truly is in a crisis?

Bob Filbin

The first step is we put this through our supervisors. We flag a texter who appears to be circling. Then the supervisor makes the call. We always have the human making the decision.

We don't block anybody, because that person could legitimately have a huge crisis. We never block people from entering the system. But we do limit or throttle access.

Putting it through that human lens always is very important because we don't want to miss anything. And it’s so critical that we're there for all these people.
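A rough sketch of that kind of human-in-the-loop flagging appears below: the code only identifies texters whose recent conversation counts suggest they may be circling and surfaces them for a supervisor, who makes the actual decision. The threshold and function names are hypothetical, not Crisis Text Line’s real workflow.

```python
# Illustrative sketch only: flag possible "circling" texters for human review.
# The code never blocks anyone -- it just surfaces candidates; a supervisor
# makes the actual call on whether to limit or throttle access.
from collections import Counter

CIRCLING_THRESHOLD = 20   # hypothetical: conversations in the last 30 days

def flag_for_supervisor_review(recent_conversation_texter_ids):
    """Given a list of texter_ids (one per conversation in the last 30 days),
    return the texters whose usage suggests they may be circling."""
    counts = Counter(recent_conversation_texter_ids)
    return [texter_id for texter_id, n in counts.items() if n >= CIRCLING_THRESHOLD]
```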
