Data science and machine learning methods to improve social equity
Our society remains profoundly inequitable, due in part to biases in human and algorithmic decision-making. To address this, we propose data science and machine learning methods for improving the fairness of decision-making. First, we develop scalable Bayesian methods for assessing bias in human decision-making and apply them to measure discrimination in police traffic stops across the United States. Second, we develop methods to address an important source of bias in algorithmic decision-making: target variable bias, which arises when the label an algorithm is trained to predict is an imperfect proxy for the outcome of actual interest. We show how to leverage plausible domain knowledge in two real-world settings — flood detection and medical testing — to detect and mitigate this bias.
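As a toy illustration of target variable bias (a hypothetical simulation, not the methods developed in this work): suppose two groups have the same true outcome rate, but the outcome is recorded, and thus becomes the training label, at different rates per group. A model fit to the recorded proxy will then learn a spurious group difference. The rates below are made-up parameters chosen only for illustration.

```python
import random

random.seed(0)

# Both groups share the same true outcome rate, but the outcome is
# *recorded* (the proxy label) at different rates per group.
TRUE_RATE = 0.30                     # P(true outcome), identical for both groups
RECORD_RATE = {"A": 0.9, "B": 0.5}   # P(proxy label observed | true outcome)

def proxy_rate(group: str, n: int = 100_000) -> float:
    """Empirical rate of the proxy label in one group."""
    hits = 0
    for _ in range(n):
        true_outcome = random.random() < TRUE_RATE
        # The proxy fires only when the true outcome occurs AND is recorded.
        hits += true_outcome and random.random() < RECORD_RATE[group]
    return hits / n

rate_a = proxy_rate("A")   # close to 0.30 * 0.9 = 0.27
rate_b = proxy_rate("B")   # close to 0.30 * 0.5 = 0.15
print(f"group A proxy rate: {rate_a:.3f}")
print(f"group B proxy rate: {rate_b:.3f}")
```

Any model trained on these proxy labels would conclude that the outcome is roughly half as common in group B, even though the true rates are identical; detecting and correcting such gaps is the kind of problem that domain knowledge about the labeling process can help with.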