How AI bias happens – and how to eliminate it

One challenge for healthcare CIOs and clinical users of AI-powered health technologies is the bias that can creep into algorithms. These biases, such as algorithms that improperly skew results by race, can compromise the ultimate work of AI and of clinicians.

Bias in AI occurs when results cannot be generalized widely. We often think of bias as resulting from preferences or exclusions in training data, but bias can also be introduced by how data is obtained, how algorithms are designed, and how AI outputs are interpreted.

How does bias get into AI? Everybody thinks first of bias in training data, the data used to develop an algorithm before it is tested on the wide world. But all data is biased to some degree. AI is trained to learn patterns in data; if a particular dataset contains bias, then AI, being a good learner, will learn that too.

On another front, AI algorithms are designed to learn patterns in data and match them to an output. This process of feature extraction, for which many techniques exist, can introduce bias by discarding information that could make the AI smarter during wider use; useful signals are lost even if the original data was not biased. Mitigation can therefore target bias in data collection, in the training set, or in the algorithm itself, or attempt to broaden the algorithm's usefulness.
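The training-data point above can be made concrete with a toy sketch. The scenario below is entirely hypothetical: two invented patient groups, "A" and "B", in which a symptom relates to the condition differently, and a training set that over-represents group A. The "model" is just a memorized majority label per symptom value, standing in for any learner; the numbers and relationships are assumptions for illustration, not clinical facts.

```python
import random
from collections import Counter

random.seed(0)

def make_patient(group):
    """Generate a synthetic patient (hypothetical groups and rules)."""
    symptom = random.random() < 0.5
    if group == "A":
        sick = symptom        # in group A, the symptom tracks the condition
    else:
        sick = not symptom    # in group B, the relationship is reversed
    return {"group": group, "symptom": symptom, "sick": sick}

# Biased training set: 95% group A, 5% group B.
train = ([make_patient("A") for _ in range(950)]
         + [make_patient("B") for _ in range(50)])

# "Train": learn the majority label for each symptom value,
# ignoring group entirely -- a common way hidden bias enters.
counts = {True: Counter(), False: Counter()}
for p in train:
    counts[p["symptom"]][p["sick"]] += 1
model = {s: counts[s].most_common(1)[0][0] for s in (True, False)}

def accuracy(group, n=1000):
    """Evaluate the learned rule on fresh data from one group."""
    patients = [make_patient(group) for _ in range(n)]
    return sum(model[p["symptom"]] == p["sick"] for p in patients) / n

print(f"group A accuracy: {accuracy('A'):.2f}")  # high: the model fits the majority
print(f"group B accuracy: {accuracy('B'):.2f}")  # low: the minority pattern was drowned out
```

Because group A dominates the training set, the learned rule matches group A's pattern and fails on group B, even though the model never saw the group label: the skew in the data alone produced the disparity.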



