Removing Unfair Bias in Machine Learning

Jan 15, 5:30pm

By IBM Developer

Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale, and many algorithms are now being reexamined due to illegal bias. So how do you remove bias & discrimination from the machine learning pipeline? In this talk you'll learn debiasing techniques that can be implemented with the open source toolkit AI Fairness 360.
AI Fairness 360 (AIF360) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution that brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from the top AI fairness researchers across industry & academia.
In this talk you'll learn:
- How to measure bias in your data sets & models
- How to apply fairness algorithms to reduce bias
- How bias & discrimination can arise within modern machine learning techniques, and the methods that can be used to tackle those challenges
- How to evaluate metrics with the open-source AI Fairness 360 Toolkit to check for fairness and mitigate machine learning model bias
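To give a flavor of the first bullet, two widely used group-fairness metrics can be computed by hand on toy data. The metric names below mirror AIF360's `BinaryLabelDatasetMetric` methods (`statistical_parity_difference`, `disparate_impact`), but the data and helper function here are a hypothetical sketch, not the toolkit's own API:

```python
def favorable_rate(labels, groups, group_value):
    """Fraction of favorable outcomes (label == 1) within one group."""
    outcomes = [y for y, g in zip(labels, groups) if g == group_value]
    return sum(outcomes) / len(outcomes)

# Toy data: label 1 = favorable outcome (e.g. loan approved);
# group 1 = privileged group, group 0 = unprivileged group.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]

p_priv = favorable_rate(labels, groups, 1)    # 3/4 approved
p_unpriv = favorable_rate(labels, groups, 0)  # 1/4 approved

# Statistical parity difference: 0 means parity; negative values
# mean the unprivileged group receives fewer favorable outcomes.
spd = p_unpriv - p_priv

# Disparate impact ratio: 1 means parity; values below ~0.8 are
# often treated as a warning sign (the "four-fifths rule").
di = p_unpriv / p_priv

print(f"statistical parity difference: {spd:.2f}")
print(f"disparate impact: {di:.2f}")
```

On this toy data the metrics come out to -0.50 and 0.33, i.e. the unprivileged group is clearly disadvantaged. AIF360 wraps the same arithmetic behind dataset and metric classes and pairs it with mitigation algorithms (e.g. reweighing) that adjust the data or model to push these metrics back toward their fair values.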

hosted by

IBM Developer

IBM Developer
