AI trains counselors to deal with teens in crisis


The chatbot uses GPT-2 for its baseline conversational abilities. That model is trained on 45 million pages from the web, which teaches it the basic structure and grammar of the English language. The Trevor Project then trained it further on all the transcripts of previous Riley role-play conversations, which gave the chatbot the material it needed to mimic the persona.
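As a rough illustration of what that second training step can look like, here is a minimal sketch of fine-tuning GPT-2 on a file of role-play transcripts. The Trevor Project has not published its training code, so the Hugging Face transformers and datasets libraries, the file name, and the hyperparameters below are illustrative assumptions, not the organization's actual pipeline.

```python
# Minimal sketch: fine-tune GPT-2 on role-play transcripts.
# Tooling, file names, and hyperparameters are assumptions for illustration.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")    # base model pretrained on web text

# Hypothetical file: one transcript per line, e.g.
# "Counselor: How are you feeling tonight? Riley: Honestly, pretty overwhelmed..."
dataset = load_dataset("text", data_files={"train": "riley_transcripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="riley-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()    # further training ("fine-tuning") on the transcripts
```

The key idea is simply continued language-model training: the base model already knows English from web text, and the transcripts teach it the vocabulary, tone, and storyline of the Riley character.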

Throughout the development process, the team was surprised by how well the chatbot performed. There is no database storing details of Riley's bio, yet the chatbot stays consistent, because every transcript reflects the same storyline.

But there are also trade-offs to using AI, especially in sensitive contexts with vulnerable communities. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas. More than one chatbot has gone disastrously off the rails this way, most recently a South Korean chatbot called Lee Luda, which had the persona of a 20-year-old college student. After quickly gaining popularity and interacting with more and more users, it began using slurs to describe queer and disabled people.

The Trevor Project is aware of this and has devised ways to limit the risk of problems. While Lee Luda was meant to converse with users about anything, Riley is very narrowly focused. Volunteers won't stray far from the conversations it was trained on, which minimizes the risk of unpredictable behavior.

It also makes it easier to fully test the chatbot, which the Trevor Project says it does. "Use cases that are highly specialized, well defined, and designed inclusively don't pose a very high risk," says Nenad Tomasev, a researcher at DeepMind.
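To make "testing" concrete, one simple approach for a narrowly scoped chatbot is to replay prompts drawn from the training scenario and flag any generation that leaves the persona or contains disallowed language. The Trevor Project's actual evaluation process is not public; the checkpoint path, prompts, and blocklist below are hypothetical stand-ins.

```python
# Illustrative sketch of a replay-and-flag test pass over a fine-tuned chatbot.
# The checkpoint "riley-gpt2", the prompts, and the blocklist are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="riley-gpt2", tokenizer="gpt2")

test_prompts = [
    "Counselor: Hi Riley, thanks for reaching out tonight. What's on your mind?",
    "Counselor: That sounds really hard. Can you tell me more about what happened at school?",
]
blocklist = {"placeholder_slur_1", "placeholder_slur_2"}  # stand-in for a real lexicon

for prompt in test_prompts:
    reply = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
    flagged = any(term in reply.lower() for term in blocklist)
    print(f"{'FLAGGED' if flagged else 'ok':7} | {reply!r}")
```

Because the chatbot only ever sees counselor-style prompts like these, the space of inputs to cover is small enough that this kind of exhaustive replay is feasible.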

From human to human

This isn't the first time the mental-health field has tried to harness AI's potential to provide inclusive, ethical assistance without hurting the people it is supposed to help. Researchers have developed promising ways to detect depression from a combination of visual and auditory cues. Therapy "bots," while not equivalent to a human professional, are being pitched as alternatives for those who can't access a therapist or are uncomfortable confiding in a person.

Each of these developments, and others like them, requires thinking about how much agency AI tools should have when dealing with vulnerable people. And the consensus seems to be that, at this point, the technology isn't really suited to replacing human help.

Still, Joiner, the psychology professor, says this could change over time. While replacing human counselors with AI copies is currently a bad idea, "that doesn't mean it's a permanent constraint," he says. People already have artificial "friendships and relationships" with AI services. As long as people aren't tricked into thinking they're talking to a human when they're talking to an AI, he says, it could be a possibility down the line.

In the meantime, Riley will never face the young people who actually text the Trevor Project: it will only ever serve as a training tool for volunteers. "The human-to-human connection between our counselors and the people who reach out to us is essential to everything we do," says Kendra Gaunt, the group's data and AI product manager. "I think that makes us really unique, and something that I don't think any of us would want to replace or change."
