Fairness-Aware ML Tutorial Series
This tutorial series is a hands-on guide to fairness-aware machine learning, aimed at beginners in Fair-ML.

As AI and ML models increasingly influence important decisions, from loan approvals to hiring and criminal justice outcomes, the need for equitable, transparent, and accountable systems has never been more urgent.
📘 Tutorial 1: Introduction to Fairness Framework
Learn about the importance of fairness in AI/ML and how biases arise.
Get introduced to various fairness-focused tools like Fairlearn, AIF360, Themis-ML, and FairBench.
Understand the unique features that make Fairlearn a powerful choice for bias detection and mitigation.
📊 Tutorial 2: Data Customization on ACS-PUMS Data
Walk through the process of acquiring and preparing the ACS-PUMS dataset for fairness analysis.
Explore data cleaning methods and how to select target and feature variables.
Perform basic exploratory data analysis (EDA) to surface potential biases.
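As a preview of this step, one common way to pull ACS-PUMS data into Python is the folktables package; the sketch below is an assumed setup (package choice, state, survey year, and the prepackaged ACSIncome task are illustrative, not necessarily what the tutorial uses) that downloads one year of person-level records and splits them into features, target, and a sensitive attribute:

```python
# Minimal sketch: load ACS-PUMS person records via folktables and build the
# prepackaged ACSIncome prediction task (assumed setup; the tutorial may
# acquire and clean the raw PUMS files differently).
from folktables import ACSDataSource, ACSIncome
from sklearn.model_selection import train_test_split

data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)  # one state keeps the download small

# Features (n, d), binary income label (> $50K), and a group column (race code)
X, y, group = ACSIncome.df_to_numpy(acs_data)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.2, random_state=42
)
print(X_train.shape, y_train.mean())  # dataset size and base positive rate
```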
📈 Tutorial 3: Fairness Measures in Machine Learning
Discover why fairness metrics are essential for model evaluation.
Learn to calculate popular measures like Demographic Parity and Equalized Odds using Fairlearn.
Interpret these metrics to identify bias and guide subsequent mitigation steps.
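For instance, once you have predictions from any trained classifier, Fairlearn's metric helpers compute these measures in a few lines. The sketch below reuses the data split from the Tutorial 2 sketch above and a plain logistic regression as a stand-in model:

```python
# Sketch: train any classifier, then compare accuracy per group and compute
# the two fairness measures named above. Continues from the Tutorial 2 sketch
# (X_train, X_test, y_train, y_test, g_test).
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    equalized_odds_difference,
)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Per-group view: accuracy broken down by the sensitive attribute
frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=g_test,
)
print(frame.by_group)

# Scalar summaries: 0 means perfect parity, larger values mean more disparity
print("Demographic parity difference:",
      demographic_parity_difference(y_test, y_pred, sensitive_features=g_test))
print("Equalized odds difference:",
      equalized_odds_difference(y_test, y_pred, sensitive_features=g_test))
```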
⚙️ Tutorial 4: Pre-processing Approaches to Fairness
Explore techniques that address bias before training your model.
Delve into sampling methods (oversampling/undersampling) to balance demographic representation.
Learn how feature engineering and instance reweighting can help reduce discriminatory patterns.
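As a taste of the reweighting idea, the sketch below computes per-instance weights so that each (group, label) combination contributes as if group membership and the label were independent; this follows the classic reweighing scheme of Kamiran and Calders, and the tutorial's own recipe may differ. The weights can then be passed to any estimator that accepts `sample_weight`.

```python
# Sketch: instance reweighting so that the sensitive attribute and the label
# look statistically independent in the weighted training set. Continues from
# the Tutorial 2 sketch (X_train, y_train, g_train).
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(y, group):
    """w(g, y) = P(G=g) * P(Y=y) / P(G=g, Y=y) for each training instance."""
    y = np.asarray(y)
    group = np.asarray(group)
    weights = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            if mask.any():
                expected = (group == g).mean() * (y == label).mean()
                observed = mask.mean()
                weights[mask] = expected / observed
    return weights

w_train = reweighing_weights(y_train, g_train)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train, sample_weight=w_train)
```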
🛠️ Tutorial 5: In-processing & Post-processing in ML
Understand advanced strategies to mitigate bias during and after model training.
Implement in-processing techniques like adversarial training and constrained optimization.
Apply post-processing methods to adjust predictions and improve fairness without retraining.
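Both families are available in Fairlearn. The sketch below shows one constrained-optimization reduction (ExponentiatedGradient under a demographic-parity constraint) and one threshold-based post-processor (ThresholdOptimizer under equalized odds); the constraint choices and base estimator are illustrative, and it again reuses the data split from the Tutorial 2 sketch.

```python
# Sketch: in-processing via Fairlearn's exponentiated-gradient reduction under
# a demographic-parity constraint, then post-processing an already-trained
# model with ThresholdOptimizer under equalized odds. Continues from the
# Tutorial 2 sketch (X_train, X_test, y_train, y_test, g_train, g_test).
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.reductions import DemographicParity, ExponentiatedGradient
from sklearn.linear_model import LogisticRegression

# In-processing: the reduction retrains the base learner under the constraint
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X_train, y_train, sensitive_features=g_train)
y_pred_inproc = mitigator.predict(X_test)

# Post-processing: keep the original model, only adjust its decision thresholds
base_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
postprocessor = ThresholdOptimizer(
    estimator=base_model,
    constraints="equalized_odds",
    prefit=True,
    predict_method="predict_proba",
)
postprocessor.fit(X_train, y_train, sensitive_features=g_train)
y_pred_postproc = postprocessor.predict(
    X_test, sensitive_features=g_test, random_state=0
)
```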