Amanda Coston: Understanding and Improving Machine Learning Applications for High-Stakes Scenarios
Amanda Coston is a PhD student in Machine Learning and Public Policy at Carnegie Mellon University (CMU), where her research centers on the impact of algorithmic risk assessments in high-stakes settings such as criminal justice, child welfare screening, and loan approvals. Her work aims to understand where these systems fall short in order to overcome those limitations and ensure fairness in machine learning applications.
Amanda Coston’s recent research
Check out some of the really cool things Amanda Coston has been working on lately:
In practice, a multiplicity of "good" models often achieve similar overall accuracy yet differ in their individual predictions. This paper, to be presented at ICML this year, leverages this so-called "Rashomon effect" to audit models for disparate impact and, when possible, to find a more equitable model with performance comparable to a benchmark model.
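To make the idea concrete, here is a minimal toy sketch (not the paper's actual method): among several candidate models with near-identical accuracy, prefer the one with the smallest group-level disparity. The dataset, the threshold classifiers, and the demographic-parity metric are all illustrative assumptions.

```python
# Toy illustration of the "Rashomon effect": several models with
# near-identical accuracy can differ widely in group-level impact.
# All data and models here are synthetic, not from the paper.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def positive_rate_gap(preds, groups):
    """Demographic-parity gap: |P(pred=1 | group A) - P(pred=1 | group B)|."""
    rate = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rate[g] = sum(preds[i] for i in idx) / len(idx)
    vals = list(rate.values())
    return abs(vals[0] - vals[1])

# Synthetic audit data: risk scores, group membership, true labels.
scores = [0.2, 0.4, 0.45, 0.6, 0.7, 0.9, 0.35, 0.55, 0.65, 0.8]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
labels = [0, 0, 1, 1, 1, 1, 0, 1, 1, 1]

# A small "Rashomon set" of candidates: simple threshold classifiers.
models = {f"thresh={t}": [int(s >= t) for s in scores]
          for t in (0.3, 0.5, 0.6)}

best_acc = max(accuracy(p, labels) for p in models.values())
epsilon = 0.1  # tolerance defining "comparable performance"

# Keep only near-optimal models, then pick the one with the smallest gap.
rashomon = {name: p for name, p in models.items()
            if accuracy(p, labels) >= best_acc - epsilon}
fairest = min(rashomon, key=lambda n: positive_rate_gap(rashomon[n], groups))
```

Here all three thresholds land within `epsilon` of the best accuracy, but their parity gaps differ, so the audit can surface a more equitable model at comparable performance.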
Algorithmic tools used in decision support settings often must grapple with runtime confounding: some factors that jointly affect the decision and the outcome are unavailable at prediction time. This paper, presented at NeurIPS in 2020, proposes a prediction method for this setting.
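The following toy sketch illustrates the runtime-confounding problem itself, under an assumed synthetic setup (it is not the paper's estimator): a hidden factor influenced both the historical decision and the outcome, so a naive estimate from logged data is biased once that factor is unavailable at prediction time.

```python
# Synthetic illustration of runtime confounding: a hidden factor z
# affects both the historical decision d and the outcome y, but is
# unavailable to the deployed model. Assumed toy data, not the paper's.

def outcome(z, d):
    # The intervention only helps when the hidden factor is present.
    return 1 if (z == 1 and d == 1) else 0

population = [0, 1, 0, 1]  # hidden confounder z for four individuals

# Historically, decision makers observed z and acted on it: d = z.
history = [(z, z, outcome(z, z)) for z in population]

# Naive estimate from logged data: P(y = 1 | d = 1).
treated = [y for (z, d, y) in history if d == 1]
naive_estimate = sum(treated) / len(treated)

# True success rate of deciding d = 1 for everyone (z hidden at runtime).
true_rate = sum(outcome(z, 1) for z in population) / len(population)

# The gap is the bias introduced by the runtime confounder.
bias = naive_estimate - true_rate
```

Because historical decisions were targeted using the hidden factor, the logged data makes the intervention look uniformly successful, overstating its benefit for the full population.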
This paper, presented at ACM FAT 2020, explores the difficulties of machine learning systems that rely on historical data for decision-making. While historical data can be useful, it is often less adept than we assume at predicting future outcomes.
What Amanda Coston has been doing with Placekey & SafeGraph data
The study Amanda Coston was a part of served as an independent audit of SafeGraph’s mobility dataset, exploring sampling bias that underrepresents certain demographic groups. By comparing SafeGraph mobility data with voting data, the researchers determined that SafeGraph’s data underrepresents older age groups and minority groups.
This is partly because older groups use smartphones less frequently, and partly a matter of which demographics opt in to location-tracking services. Ultimately, their findings are not a criticism of SafeGraph; they point to ways that sampling bias can influence any dataset, and why researchers should be conscious of these unintended biases and their impact on study results.
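The audit described above can be sketched in miniature: compare a dataset's demographic shares against a trusted baseline (such as census or voter-file shares) and flag groups whose representation ratio falls below one. All numbers below are made-up placeholders, not figures from the study.

```python
# Hypothetical sampling-bias check: compare a mobility sample's
# demographic shares to a trusted baseline. All shares are invented
# for illustration; they are not the study's actual figures.

baseline_share = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}
sample_share   = {"18-34": 0.38, "35-54": 0.36, "55+": 0.26}

# Representation ratio: a value below 1.0 means the group appears
# less often in the sample than in the baseline population.
rep_ratio = {g: sample_share[g] / baseline_share[g] for g in baseline_share}

underrepresented = [g for g, r in rep_ratio.items() if r < 1.0]
```

In this made-up example the oldest group is the only one flagged, mirroring the pattern the audit reports for older age groups.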