Towards transparency, fairness, and efficiency in machine learning
In this talk, we will address several areas of recent work centered on the themes of transparency and fairness in machine learning, as well as practical efficiency for methods operating on high-dimensional data. We will discuss recent results involving linear algebraic tools for learning, such as methods in non-negative matrix factorization, presenting both derived theoretical guarantees and practical applications of these approaches. These methods allow for natural transparency and human interpretability while still offering strong performance. We will then discuss new directions in fairness, including an example in large-scale optimization that allows population subgroups to obtain better predictors than they would if treated as part of the population as a whole. We will conclude with work on the compression and reconstruction of large-scale tensorial data from practical measurement schemes. Throughout the talk, we will include example applications from collaborations with community partners.
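As background for the non-negative matrix factorization methods mentioned above, the following is a minimal illustrative sketch of the classical Lee–Seung multiplicative-update algorithm in pure Python. It is not the specific method from the talk; all names (`nmf`, `matmul`, the toy matrix `V`) are hypothetical, and the interpretability benefit comes from the fact that both factors `W` and `H` remain entrywise non-negative.

```python
# Illustrative NMF via Lee-Seung multiplicative updates (pure Python).
# This is a generic textbook sketch, not the talk's specific method.
import random

def matmul(A, B):
    """Multiply two matrices represented as lists of lists."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, r, iters=500, eps=1e-9):
    """Approximately factor a non-negative n x m matrix V as W @ H,
    with W (n x r) and H (r x m) both entrywise non-negative."""
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(r)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H), elementwise (eps avoids divide-by-zero)
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(m)] for i in range(r)]
        # W <- W * (V H^T) / (W H H^T), elementwise
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(r)] for i in range(n)]
    return W, H

# Toy example: a rank-2 non-negative matrix should be recovered closely.
V = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [1.0, 3.0, 5.0]]
W, H = nmf(V, r=2)
approx = matmul(W, H)
err = max(abs(V[i][j] - approx[i][j]) for i in range(3) for j in range(3))
# err should be small since V has exact non-negative rank 2
```

Because the updates only rescale entries multiplicatively, non-negativity is preserved throughout, which is what lets the learned factors be read as interpretable parts-based components.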