Crisis Text Line is one of the world's most prominent mental health support lines, a tech-driven nonprofit that uses big data and artificial intelligence to help people cope with traumas such as self-harm, emotional abuse and thoughts of suicide.
But the data the charity collects from its online text conversations with people in their darkest moments does not end there: The organization's for-profit spinoff uses a sliced and repackaged version of that information to create and market customer service software.
Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly "anonymized," stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris' case, by making "customer support more human, empathetic, and scalable."
In turn, Loris has pledged to share some of its revenue with Crisis Text Line. The nonprofit also holds an ownership stake in the company, and the two entities shared the same CEO for at least a year and a half. The two call their relationship a model for how commercial enterprises can help charitable endeavors thrive.
Ethics and privacy experts contacted by POLITICO saw several potential problems with the arrangement.
Some noted that studies of other types of anonymized datasets have shown that it can sometimes be easy to trace the records back to specific individuals, citing past examples involving health records, genetics data and even passengers in New York City taxis.
Others questioned whether the people who text their pleas for help are actually consenting to having their data shared, despite the approximately 50-paragraph disclosure the helpline offers a link to when individuals first reach out.
Former federal regulator Jessica Rich said she thought it would be "problematic" for third-party companies to have access even to anonymized data, though she cautioned that she was unfamiliar with the companies involved.
"It would be contrary to what the expectations are when distressed consumers are reaching out to this nonprofit," said Rich, a former director of the Federal Trade Commission's Bureau of Consumer Protection. She later added: "The fact that the data is transferred to a for-profit company makes this much more troubling and could give the FTC an angle for asserting jurisdiction."
The nonprofit's vice president and general counsel, Shawn Rodriguez, said in an email to POLITICO that "Crisis Text Line obtains informed consent from each of its texters" and that "the organization's data sharing practices are clearly stated in the Terms of Service & Privacy Policy to which all texters consent in order to be paired with a volunteer crisis counselor."
In an earlier exchange, he emphasized that Crisis Text Line's relationship with its for-profit subsidiary is "ethically sound."
"We view the relationship with Loris.ai as a valuable way to put more empathy into the world, while rigorously upholding our commitment to protecting the safety and anonymity of our texters," Rodriguez wrote. He added that "sensitive data from conversations is not commercialized, full stop."
Loris' CEO since 2019, Etie Hertz, wrote in an email to POLITICO that Loris has maintained "a necessary and important church and state boundary" between its business interests and Crisis Text Line.