Q&A: Why mental health chatbots need strict safety guardrails

Q&A: Why mental health chatbots need strict safety guardrails

Ramakant Vempati, Wysa cofounder and president, discusses how the startup tests its AI-backed chatbot to monitor safety and quality. Wysa, maker of an AI-backed chatbot that aims to help users work though concerns like anxiety, stress and low mood, recently announced a $20 million Series B funding raise, not long after the startup received FDA Breakthrough Device Designation to use its tool to help adults with chronic musculoskeletal pain. Ramakant Vempati, the company's cofounder and president, sat down with MobiHealthNews to discuss how the chatbot works, the guardrails Wysa uses to monitor safety and quality, and what's next after its latest funding round.

From a product point of view, users may or may not think about it directly, but the safety and the guardrails which we built into the product to make sure that it's fit for purpose in that wellness context is an essential part of the value we provide. When we went live in 2017, I was like, "Will people really talk to a chatbot about their deepest, darkest fears?" You use chatbots in a customer service context, like a bank website, and frankly, the experience leaves much to be desired. I think phase one has been proving to ourselves, really convincing ourselves, that users like it and they derive value out of the service. I think phase two has been to prove this in terms of clinical outcomes. Where we use NLP [natural language processing], we are using NLU, natural language understanding, to understand user context and to understand what they're talking about and what they're looking for. There will always be instances where people say something ambiguous, or they will use nested or complicated sentences, and the AI models will not be able to catch them. And we comply with a safety standard used by the NHS in the U.K. We have a large clinical safety data set, which we use because we've now had 500 million conversations on the platform. Every time we create a new conversation script, we then test with this data set. Vempati: In the early days of Wysa, we used to have people writing in, volunteering to translate. So, it's a combination of market feedback and strategic priorities, as well as what the product can handle, places where it is easier to use AI in that particular language with clinical safety.




Next Article

Did you find this useful?

Medigy Innovation Network

Connecting innovation decision makers to authoritative information, institutions, people and insights.

Medigy Logo

The latest News, Insights & Events

Medigy accurately delivers healthcare and technology information, news and insight from around the world.

The best products, services & solutions

Medigy surfaces the world's best crowdsourced health tech offerings with social interactions and peer reviews.


© 2024 Netspective Foundation, Inc. All Rights Reserved.

Built on Dec 20, 2024 at 12:59pm