Developing trust in healthcare AI, step by step

As the use of artificial intelligence in healthcare grows, many providers remain skeptical about how far they should trust machine learning models deployed in clinical settings.

A new Chilmark Research report by Dr. Jody Ranck, the firm's senior analyst, explores state-of-the-art processes for mitigating bias and risk in AI that can be used to develop more trustworthy machine learning tools for healthcare. Available by subscription or purchase, the report draws on government and think tank research to outline the steps needed to ensure good data science, including how to build diverse teams capable of addressing the complexities of bias in healthcare AI.

"Greater focus on evidence-based AI development or deployment requires effective collaboration between the public and private sectors, which will lead to greater accountability for AI developers, implementers, healthcare organizations and others to consistently rely on evidence-based AI development or deployment practices," said Roski.

Ranck, the Chilmark report's author, also hosted an April podcast interview with Dr. Tania Martin-Mercado, a digital advisor in healthcare and life sciences at Microsoft, about combating bias in AI.
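The report itself is process-oriented rather than code-oriented, but for readers unfamiliar with what a bias audit involves in practice, the sketch below shows one common first step: comparing a model's sensitivity across patient subgroups. Everything here (the data, the group labels, the error rates) is synthetic and purely illustrative; it is not drawn from the Chilmark report.

```python
# Hypothetical illustration of a per-subgroup sensitivity audit, a basic
# check behind bias mitigation in healthcare AI. All data is synthetic.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 1000

# Synthetic ground-truth outcomes and a demographic attribute (e.g., age band).
y_true = rng.integers(0, 2, size=n)
group = rng.choice(["under_65", "over_65"], size=n)

# Simulate a model that is systematically less accurate for one subgroup.
noise = np.where(group == "over_65", 0.30, 0.10)
flip = rng.random(n) < noise
y_pred = np.where(flip, 1 - y_true, y_true)

# Per-group sensitivity (true positive rate): a large gap between groups
# is one signal that a model may perform inequitably in the clinic.
for g in ["under_65", "over_65"]:
    positives = (group == g) & (y_true == 1)
    tpr = (y_pred[positives] == 1).mean()
    print(f"{g}: sensitivity = {tpr:.2f} (n positives = {positives.sum()})")
```

On the synthetic data above, the audit surfaces a sensitivity gap between the two age bands; in a real deployment the same comparison would be run on held-out clinical data, across every subgroup the care population contains.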
