Oct 11, 2:00pm
Learn to Adopt Responsible AI to Help You Build Ethical Models
By IBM Developer
Overview
Fairness is about understanding and addressing bias in your data. Explainability shows how a machine learning model arrives at its predictions. Lastly, robustness measures the stability of a model's performance.
In this webinar, you will use a fraud dataset to predict fraudulent transactions, helping to reduce monetary loss and mitigate risk. You will learn to address three of the pillars of building trustworthy AI pipelines (the fairness, explainability, and robustness of the predictive models) and enhance the effectiveness of the AI predictive system.
What will you learn?
- The pillars of building trustworthy AI pipelines
- Check the fairness of a dataset using the AI Fairness 360 toolkit (see the sketch below)
- Build the machine learning model
- Explain the model using the AI Explainability 360 toolkit
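As a preview of the fairness-check step, here is a minimal sketch using IBM's open-source AI Fairness 360 (aif360) Python package. The tiny dataframe, the column names, and the choice of "age_group" as a protected attribute are illustrative assumptions only; they are not the webinar's fraud dataset.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical toy transactions: 'is_fraud' is the label, 'age_group' is an
# assumed protected attribute (1 = privileged group, 0 = unprivileged group).
df = pd.DataFrame({
    "amount":    [120.0, 35.5, 980.0, 15.0, 410.0, 67.2],
    "age_group": [1, 0, 1, 0, 1, 0],
    "is_fraud":  [0, 0, 1, 0, 1, 0],
})

# Wrap the dataframe in an AIF360 dataset. Here "not fraud" (0) is treated
# as the favorable outcome -- an assumption for this sketch.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["is_fraud"],
    protected_attribute_names=["age_group"],
    favorable_label=0,
    unfavorable_label=1,
)

# Compare how the favorable outcome is distributed across the two groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_group": 1}],
    unprivileged_groups=[{"age_group": 0}],
)

print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A disparate impact close to 1.0 and a statistical parity difference close to 0 suggest the favorable outcome is distributed similarly across groups. The webinar goes further, building the predictive model and explaining it with the AI Explainability 360 toolkit.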
Who should attend
- Anyone who is interested in building Machine Learning models
- This is a beginner to intermediate session
Prerequisites
- Log in or sign up for a free IBM Cloud Account: https://ibm.biz/aiethics
- Register for the live stream or to watch the replay: https://www.crowdcast.io/e/adopt-responsible-ai
- If you've never heard of AI Ethics before, watch this quick video: https://www.youtube.com/watch?v=aGwYtUzMQUk
- Read more about Trustworthy AI here: https://www.ibm.com/watson/trustworthy-ai
Speakers
- Anam Mahmood - Developer Advocate, IBM, https://www.linkedin.com/in/anam-mahmood-sheikh/
- Hashim Noor - Client Technical Specialist, IBM, https://www.linkedin.com/in/hashim-noor/
----------------------------------------------------------------------------
*By registering for this event, you acknowledge this video will be recorded and consent for it to be featured on IBM media platforms and pages.