Ihaka 2019: Algorithmic fairness: Examples from predictive models for criminal justice
University of Auckland | Waipapa Taumata Rau

Streamed live on Mar 26, 2019

Dr Kristian Lum
Human Rights Data Analysis Group (HRDAG)

Predictive models are increasingly used in the criminal justice
system to predict who will commit crimes in the future and where
those crimes will occur. Because decisions influenced by such
models can affect individuals’ liberty, it is of the utmost importance
that the predictions they generate be ‘fair’.

Using examples from predictive policing and recidivism
risk assessment, Dr Kristian Lum will demonstrate how – if
considerations of fairness and bias are not explicitly accounted
for – such models could perpetuate and, under some
circumstances, amplify undesirable historical biases encoded in
the data.
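
The amplification mechanism can be pictured as a feedback loop. Below is a minimal sketch in Python, not the model from the talk: the neighbourhood names, rates, and record counts are invented for illustration. It assumes patrols are sent wherever recorded crime is highest and that crime is only recorded where patrols go, so an initial skew in the historical data compounds over time.

# Toy illustration (not Dr Lum's model): two neighbourhoods with equal true
# crime rates, but neighbourhood A is over-represented in historical records.
# Police are dispatched each day to wherever the records show more crime, and
# only policed areas generate new records, so the initial bias compounds.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.5          # identical in both neighbourhoods
records = {"A": 20, "B": 10}   # historical records already skewed toward A

for day in range(100):
    # "Predictive" allocation: patrol the neighbourhood with more recorded crime.
    patrolled = max(records, key=records.get)
    # Crime occurs in both places, but is only *recorded* where police are.
    if random.random() < TRUE_CRIME_RATE:
        records[patrolled] += 1

print(records)  # the recorded gap between A and B grows despite equal true rates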

Dr Lum will then give a brief overview of several notions of
fairness that have been proposed in the ‘algorithmic fairness’
literature as solutions to these problems. She will close with a
discussion of the ways in which policy, rather than data science,
influences the development of these models, and some alternative,
non-algorithmic solutions to the underlying problems these
models seek to address.
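
As a rough illustration of what such fairness notions quantify, here is a hedged Python sketch comparing two definitions that appear in the algorithmic fairness literature (not necessarily the ones covered in the talk): a demographic-parity style comparison of flag rates and an error-rate comparison of the kind used in equalized odds. All data, groups, and numbers are invented.

# Hypothetical example (values invented): comparing two common fairness
# notions on a set of binary risk predictions for two groups.
import numpy as np

# y_true: whether the person was rearrested; y_pred: the model's high-risk flag;
# group: a protected attribute (0 or 1).
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

for g in (0, 1):
    m = group == g
    # Demographic parity compares the rate at which each group is flagged as high risk.
    flag_rate = y_pred[m].mean()
    # Equalized odds compares error rates instead: the false positive rate is shown here.
    fpr = y_pred[m & (y_true == 0)].mean()
    print(f"group {g}: flag rate {flag_rate:.2f}, false positive rate {fpr:.2f}")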

Dr Kristian Lum is Lead Statistician at the Human Rights Data
Analysis Group (HRDAG), where she leads the HRDAG project on
criminal justice in the United States.

Dr Lum’s research primarily focuses on examining the uses
of machine learning in the criminal justice system and has
concretely demonstrated the potential for machine learning-based predictive policing models to reinforce and, in some cases,
amplify historical racial biases in law enforcement.
