Q&A: Why mental health chatbots need strict safety guardrails
Ramakant Vempati, Wysa cofounder and president, discusses how the startup tests its AI-backed chatbot to monitor safety and quality.

Wysa, maker of an AI-backed chatbot that aims to help users work through concerns like anxiety, stress and low mood, recently announced a $20 million Series B funding raise, not long after the startup received FDA Breakthrough Device Designation to use its tool to help adults with chronic musculoskeletal pain. Ramakant Vempati, the company's cofounder and president, sat down with MobiHealthNews to discuss how the chatbot works, the guardrails Wysa uses to monitor safety and quality, and what's next after its latest funding round.
From a product point of view, users may or may not think about it directly, but the safety guardrails we built into the product, to make sure it is fit for purpose in that wellness context, are an essential part of the value we provide. When we went live in 2017, I was like, "Will people really talk to a chatbot about their deepest, darkest fears?" You use chatbots in a customer service context, like a bank website, and frankly, the experience leaves much to be desired. I think phase one has been proving to ourselves, really convincing ourselves, that users like it and derive value from the service. I think phase two has been proving this in terms of clinical outcomes.

Where we use NLP [natural language processing], we are really using NLU [natural language understanding] to understand user context: what they're talking about and what they're looking for. There will always be instances where people say something ambiguous, or use nested or complicated sentences, and the AI models will not be able to catch them (a minimal sketch of a fallback for such cases follows below). We also comply with a safety standard used by the NHS in the U.K. And we have a large clinical safety data set, built up over the 500 million conversations we've now had on the platform. Every time we create a new conversation script, we test it against this data set (the second sketch below shows what such a check can look like).

In the early days of Wysa, we used to have people writing in, volunteering to translate. So which languages we support is a combination of market feedback and strategic priorities, as well as what the product can handle: places where it is easier to use AI in that particular language with clinical safety.
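As context for the point about ambiguous input, here is a minimal sketch of that kind of fallback, assuming a confidence-thresholded intent classifier: when the model cannot confidently resolve an utterance, the bot asks a clarifying question instead of guessing. The toy classifier, the intent names and the threshold are all illustrative assumptions, not Wysa's actual NLU stack.

```python
# Hypothetical sketch only: names, threshold and the toy keyword classifier
# are assumptions for illustration, not Wysa's production system.
from typing import NamedTuple

CONFIDENCE_FLOOR = 0.7  # illustrative cutoff, not a published figure

class IntentPrediction(NamedTuple):
    intent: str
    confidence: float

def interpret(utterance: str) -> IntentPrediction:
    """Toy stand-in for an NLU intent classifier."""
    keywords = {"anxiety": "anxious", "sleep": "sleep", "stress": "stressed"}
    text = utterance.lower()
    for intent, marker in keywords.items():
        if marker in text:
            return IntentPrediction(intent, 0.9)
    # Ambiguous or nested sentences land here with low confidence.
    return IntentPrediction("unknown", 0.3)

def respond(utterance: str) -> str:
    prediction = interpret(utterance)
    if prediction.confidence < CONFIDENCE_FLOOR:
        # Don't guess: ask the user to restate rather than risk a wrong path.
        return "I want to make sure I understand. Could you say a bit more?"
    return f"It sounds like {prediction.intent} is on your mind. Let's explore that."

print(respond("I keep feeling anxious before meetings"))
print(respond("It's like, I don't know, everything and nothing?"))
```

The design point is simply that a low-confidence prediction routes to a clarifying prompt, so a misread sentence never silently steers the conversation down the wrong path.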
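And here is a sketch of the clinical-safety regression described above, under the assumption that the data set pairs anonymised utterances with clinician-assigned risk labels: a new conversation script only ships if every high-risk case still reaches the escalation path. `SafetyCase`, `classify_risk` and the `ESCALATE` convention are hypothetical names introduced for illustration.

```python
# Hypothetical sketch only: the real safety data set is built from
# clinician-labelled conversations, which this toy list only mimics.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SafetyCase:
    utterance: str      # anonymised user message
    expected_risk: str  # clinician label: "none" or "high"

def classify_risk(utterance: str) -> str:
    """Toy stand-in for the safety classifier."""
    crisis_markers = ("hurt myself", "no way out", "end it all")
    text = utterance.lower()
    return "high" if any(m in text for m in crisis_markers) else "none"

def script_respond(utterance: str, risk: str) -> str:
    """A new conversation script under test."""
    if risk == "high":
        return "ESCALATE: share crisis helpline and safety plan"
    return "Let's try a short breathing exercise together."

def run_safety_regression(respond: Callable[[str, str], str],
                          cases: List[SafetyCase]) -> List[str]:
    """Return utterances where a high-risk case missed the escalation path."""
    failures = []
    for case in cases:
        reply = respond(case.utterance, classify_risk(case.utterance))
        if case.expected_risk == "high" and not reply.startswith("ESCALATE"):
            failures.append(case.utterance)
    return failures

cases = [
    SafetyCase("I feel stressed about work", "none"),
    SafetyCase("Some days I feel like there is no way out", "high"),
]
assert run_safety_regression(script_respond, cases) == []  # release gate
```

Run as a release gate, a check like this turns the safety data set into an executable test: any script change that lets a high-risk utterance slip past escalation fails the build.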
Continue reading at mobihealthnews.com