Fairness-aware learning with feature distillation and robust optimization
Fairness has become an increasingly important consideration in developing machine learning algorithms, particularly when their decisions can profoundly impact human lives. Among the many fairness notions, "group fairness" aims to achieve statistical parity of model performance across sensitive groups. In this talk, I will present two recent advances in group fairness-aware learning that leverage feature distillation and robust optimization.

First, I will present MFD (MMD-based Fair Distillation), a novel approach that trains a fair student model without compromising accuracy. MFD achieves this by distilling only the unbiased, predictive features from an unfair teacher model via a maximum mean discrepancy (MMD) based regularizer (a toy sketch of such a feature-matching penalty appears below). I will provide both theoretical insights and empirical evidence demonstrating the effectiveness of MFD.

Second, I will introduce FairDRO (Fair Distributionally Robust Optimization), a unifying framework that integrates regularization- and reweighting-based methods for group fairness-aware learning. Through a class-wise group DRO formulation, I will show how FairDRO incorporates both approaches simultaneously (a generic group-DRO reweighting step is sketched below). Our experiments show that FairDRO achieves state-of-the-art fairness-accuracy trade-offs across diverse datasets and model architectures.
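As a rough illustration of the kind of feature-level penalty involved, the sketch below matches student features to teacher features with an RBF-kernel MMD term. Everything here (the feature dimensions, the single-bandwidth kernel, the stand-in tensors, and the weight `lam`) is an assumption for illustration only; the actual MFD objective is more structured than this toy version.

```python
import torch

def rbf_kernel(a, b, sigma=1.0):
    """Pairwise RBF kernel values between rows of a and b."""
    sq_dists = torch.cdist(a, b).pow(2)
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between feature batches x and y."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

# Toy usage: penalize the gap between student and (frozen) teacher features.
student_feats = torch.randn(32, 128, requires_grad=True)  # stand-in student features
teacher_feats = torch.randn(32, 128)                      # stand-in teacher features
task_loss = torch.zeros(())                               # placeholder task loss
lam = 1.0                                                 # hypothetical penalty weight
loss = task_loss + lam * mmd2(student_feats, teacher_feats.detach())
loss.backward()
```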
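For the robust-optimization side, the sketch below shows a generic group-DRO-style update: per-group losses are computed, group weights are raised by exponentiated-gradient ascent so the worst-off groups count more, and the weighted sum is minimized. The function name, the step size `eta`, and the toy data are all assumptions for illustration; FairDRO's actual class-wise objective differs from this plain group-DRO step.

```python
import torch

def group_dro_step(per_example_loss, group_ids, q, eta=0.1):
    """One reweighting step in the spirit of group DRO.

    per_example_loss: (N,) losses; group_ids: (N,) sensitive-group indices;
    q: (G,) current group weights on the probability simplex.
    """
    group_losses = []
    for g in range(q.numel()):
        mask = group_ids == g
        # Mean loss per group; zero if the group is absent from the batch.
        group_losses.append(per_example_loss[mask].mean() if mask.any()
                            else per_example_loss.new_zeros(()))
    group_losses = torch.stack(group_losses)
    # Exponentiated-gradient ascent: upweight the currently worst-off groups.
    q = q * torch.exp(eta * group_losses.detach())
    q = q / q.sum()
    # Robust objective: weighted sum of group losses.
    return (q * group_losses).sum(), q

# Toy usage with two sensitive groups.
losses = torch.rand(8, requires_grad=True)
groups = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
q = torch.full((2,), 0.5)
robust_loss, q = group_dro_step(losses, groups, q)
robust_loss.backward()
```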