
@ShahidNShah
Recently the FDA, Health Canada, and the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA) released their "Guiding Principles for Good Machine Learning Practice" to help the AI/ML industry balance patient safety with continued innovation in new devices and algorithms. Several of the measures (numbers 1, 3, 4, and 5) are at least partly intended to address the bias that can degrade the performance of AI/ML tools across diverse populations. As broad guideposts for industry, the guidelines the FDA and its counterparts have developed are an important step, and the 10 principles reflect a growing consensus on how to balance patient safety with innovation. For those in the trenches of device development, however, the principles may not be enough to ensure patient safety, protect against bias, and resolve the other ethical issues that emerge in practice. The field is still seeing a large number of models enter the marketplace with problematic biases, and industry attempts to address the trust issue through tools such as explainable AI have shown mixed results at best.
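The bias concern the principles point to is, in practice, a measurable one: rather than reporting a single aggregate metric, developers can compare a model's performance within each population it is meant to serve. The sketch below is a minimal illustration of that idea, not anything prescribed by the guidance; the data, function name, and record format are all hypothetical.

```python
# Minimal sketch of a subgroup performance audit: compare a model's
# sensitivity (true-positive rate) across demographic groups to surface
# the kind of performance gap the guiding principles are meant to catch.
# All data below is hypothetical.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples with binary labels."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in tp if (tp[g] + fn[g]) > 0}

# Hypothetical predictions for two patient populations.
audit = sensitivity_by_group([
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0),
])
for group, sens in audit.items():
    print(f"{group}: sensitivity = {sens:.2f}")
```

A gap between groups in a check like this (here, 0.67 versus 0.33) is exactly the sort of signal that broad principles alone will not catch; it has to be measured and acted on during development.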
Continue reading at chilmarkresearch.com