
A new Chilmark Research report by Dr. Jody Ranck, the firm's senior analyst, explores state-of-the-art processes for bias and risk mitigation in artificial intelligence that can be used to develop more trustworthy machine learning tools for healthcare. As the use of artificial intelligence in healthcare grows, some providers are skeptical about how much they should trust machine learning models deployed in clinical settings.

Available by subscription or purchase, the report outlines steps for ensuring good data science, including how to build diverse teams capable of addressing the complexities of bias in healthcare AI, drawing on government and think tank research.

"Greater focus on evidence-based AI development or deployment requires effective collaboration between the public and private sectors, which will lead to greater accountability for AI developers, implementers, healthcare organizations and others to consistently rely on evidence-based AI development or deployment practices," said Roski.

Ranck, the Chilmark report's author, hosted an April podcast interview with Dr. Tania Martin-Mercado, digital advisor in healthcare and life sciences at Microsoft, about combating bias in AI.
Continue reading at healthcareitnews.com
Evernorth, the health services arm of insurer Cigna, announced it has added five new programs to its digital health formulary, including offerings from Big Health and Quit Genius. The new additions to …
Posted Sep 17, 2022 Healthcare Digital Health