
Amanda Coston: Understanding and Improving Machine Learning Applications for High-Stakes Scenarios

by Placekey

Amanda Coston is a PhD student in Machine Learning and Public Policy at Carnegie Mellon University (CMU), where her research centers on the impact of algorithmic risk assessments in high-stakes settings such as criminal justice, child welfare screening, and loan approvals. Her aim is to understand where these assessments fall short in order to overcome those limitations and ensure fairness in machine learning applications.

Amanda Coston’s recent research

Check out some of the really cool things Amanda Coston has been working on lately:

Characterizing Fairness Over the Set of Good Models Under Selective Labels

2020, ICML 2021 (to appear)

In practice, there is often a multiplicity of “good” models that achieve similar overall accuracy but differ in their individual predictions. This paper, to be presented at ICML this year, leverages this so-called “Rashomon effect” to audit models for disparate impact and, when possible, to find a more equitable model whose performance is comparable to a benchmark model.
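
To make the idea concrete, here is a minimal sketch of this kind of audit. It is an illustration only, not the paper’s method: the data, the model family, and the epsilon threshold are all made up for the example.

```python
# Sketch: audit disparate impact across a set of near-optimal "good" models
# (the "Rashomon set"). Everything here is illustrative, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                    # hypothetical protected attribute
X = rng.normal(size=(n, 5)) + group[:, None] * 0.3
y = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)

# Fit several models that differ only in regularization strength.
models = [LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
          for c in (0.01, 0.1, 1, 10)]
accs = [m.score(X_te, y_te) for m in models]

# Keep every model within epsilon of the best accuracy: the Rashomon set.
eps = 0.01
rashomon = [m for m, a in zip(models, accs) if a >= max(accs) - eps]

# Audit each "good" model: ratio of positive prediction rates between groups.
# Near-identical accuracy can still hide very different ratios.
for m in rashomon:
    preds = m.predict(X_te)
    rates = [preds[g_te == g].mean() for g in (0, 1)]
    print(f"acc={m.score(X_te, y_te):.3f}  "
          f"disparate impact ratio={min(rates) / max(rates):.3f}")
```

The point of the exercise: if the ratio varies widely across equally accurate models, a practitioner has room to pick a more equitable model without sacrificing performance.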

Counterfactual Predictions under Runtime Confounding

2020, NeurIPS 2020

Algorithmic tools used in decision-support settings must often grapple with runtime confounding: some factors that jointly affect the decision and the outcome are not available at prediction time. This paper, presented at NeurIPS 2020, proposes a method for making counterfactual predictions in such a setting.
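
As a rough illustration of the setup (not the paper’s doubly-robust estimator), the sketch below simulates a confounder z that is observed in historical training data but unavailable when the model is deployed, and uses a simple two-stage plug-in regression so the deployed model depends only on runtime features. All variable names and data are invented for the example.

```python
# Sketch of "runtime confounding": confounder z is in the historical data
# but not available at prediction time. Simplified plug-in illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
v = rng.normal(size=(n, 3))                 # features available at runtime
z = rng.normal(size=(n, 1)) + v[:, :1]      # confounder, training-time only
y = v @ np.array([1.0, 0.5, -0.5]) + 2.0 * z[:, 0] + rng.normal(size=n)

# Stage 1: model the outcome using everything observed historically (v and z).
full = LinearRegression().fit(np.hstack([v, z]), y)
y_hat_full = full.predict(np.hstack([v, z]))

# Stage 2: regress that fitted outcome on runtime features only, so the
# deployed model needs nothing beyond v.
runtime_model = LinearRegression().fit(v, y_hat_full)
print("runtime coefficients:", runtime_model.coef_)
```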

Counterfactual Risk Assessments, Evaluation, and Fairness

2019, ACM FAT* 2020

This paper, presented at ACM FAT* 2020, explores a core difficulty of machine learning tools that rely on historical data for decision-making: because the recorded outcomes were shaped by past decisions, these models are not as adept as we might think at predicting future outcomes. The paper develops counterfactual risk assessments, along with methods for evaluating them and assessing their fairness.

Risk Assessments and Fairness Under Missingness and Confounding

2019, ACM Digital Library

Amanda Coston examines fairness in machine learning as it is increasingly employed in high-stakes applications such as criminal justice, child welfare screening, and consumer lending.

To check out all her research, find Amanda Coston on ResearchGate!

What Amanda Coston has been doing with Placekey & SafeGraph data

The study Amanda Coston was a part of served as an independent audit of SafeGraph’s mobility dataset, exploring sampling bias that underrepresents certain demographic groups. By comparing SafeGraph mobility data with voting data, they determined that SafeGraph’s data underrepresents older age groups and minority groups.

This is in part because older groups use smartphones much less frequently, and in part because of which demographics opt in to location tracking services. Ultimately, their findings are not critical of SafeGraph; they point to ways that sampling bias can influence any dataset, and to why researchers should be conscious of these unintended biases and the impact they have on study results.
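
For a concrete sense of how such a representation check works, here is a toy sketch with invented numbers. The study compared SafeGraph data against voter data; nothing below reproduces its actual figures.

```python
# Sketch of a sampling-bias check: compare the demographic makeup of a
# mobility panel against a benchmark population. All numbers are made up.
age_share_panel = {"18-34": 0.42, "35-64": 0.45, "65+": 0.13}
age_share_benchmark = {"18-34": 0.30, "35-64": 0.48, "65+": 0.22}

for bucket in age_share_panel:
    ratio = age_share_panel[bucket] / age_share_benchmark[bucket]
    flag = "underrepresented" if ratio < 1 else "overrepresented"
    print(f"{bucket}: representation ratio {ratio:.2f} ({flag})")
# A ratio well below 1 for the 65+ bucket mirrors the finding that older
# age groups are underrepresented in smartphone-based mobility data.
```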

Read the full recap of the seminar here: Mobility Data Used to Respond to COVID-19 Could Be Biased, or watch the video below:

Want to learn more about Amanda Coston or get in touch?

Check out what Amanda Coston is doing:

Check out the SafeGraph Community to see who else is doing amazing things using Placekey! Our members are always posting about the incredible things they’re doing with data.
