Date of Award

1-1-2022

Language

English

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

College/School/Department

Department of Information Science

Content Description

1 online resource (viii, 251 pages) : illustrations (some color)

Dissertation/Thesis Chair

Abebe Rorissa

Committee Members

Luis F. Luna-Reyes, Virginia Eubanks, Joette Stefl-Mabry

Keywords

Algorithmic decision systems, Algorithmic fairness, Algorithmic policing, Data quality, Missingness analysis, Predictive policing, Police, Data mining in law enforcement, Discrimination in law enforcement, Crime forecasting, Criminal behavior, Prediction of

Subject Categories

Library and Information Science

Abstract

Algorithmic governance (AG) systems aim to utilize machine learning (ML) in the form of mathematical modeling and predictive analytics to achieve greater efficiency and accuracy, effect equal treatment, and eliminate bias. In its application to policing, algorithmic policing (AP) or predictive policing applies tree-based risk training algorithms to criminal complaint data to predict where and when the next crime is likely to occur, and who the likeliest perpetrators and victims are. Thus, AP preemptively determines where, how, and when to deploy police resources. However, as more cities adopt AP, there are fears that it may be reinforcing age-old inequities and exacerbating violations of residents' civil rights. While the algorithms tend to dominate the discourse on AP fairness, data quality receives little to no attention. Offering a broader and more comprehensive conceptualization of algorithmic fairness, leveraging theoretical frameworks from social informatics, data analytics, and the data management body of knowledge, and mindful of the historical entanglements of policing and race in the United States (US), this two-phase mixed-methods dissertation investigated fairness issues in AP through a data quality lens. A missingness (data quality) analysis of crime complaint data used as training sets for AP systems in five US cities was complemented by semi-structured personal interviews with personnel from the New York Police Department -- the largest and one of the oldest US police departments -- on AP and data quality practices and strategies. The interviews were also triangulated with insights from a review of 36 policy documents from the police department. Underscoring the critical roles of data collection context, processes, and management practices in determinations of AP fairness, the study found a promising data pipeline taking early policy, accountability, and community engagement steps, but also identified multiple contextual points of fairness vulnerability: non-ignorable missingness in the datasets; historical (systemic), policy, discretion, and interpretation vulnerabilities; transparency and accountability vulnerabilities; public perception and oversight challenges; and a self-fulfilling prophecy conundrum. As a first evidence-based study of data quality in the context of algorithmic policing systems, the dissertation adds a scientific perspective to the algorithmic fairness discourse, draws attention to the critical role of data quality in determinations of AP fairness, highlights the many potential sources of fairness vulnerability in AP, and recommends practical considerations for a more sustainable practice of AP.
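
For readers unfamiliar with missingness analysis, the minimal sketch below illustrates the general idea in Python/pandas. It is not drawn from the dissertation: the column names, toy records, and precinct-level comparison are illustrative assumptions only. Profiling per-field missingness rates and checking whether the gaps co-vary with an observed contextual variable (here, a hypothetical precinct field) is one crude signal that missingness may not be ignorable, i.e., not missing completely at random.

```python
# Illustrative sketch (not the dissertation's actual procedure):
# profile missingness in a hypothetical crime-complaint extract.
import pandas as pd
import numpy as np

# Hypothetical complaint records; field names are assumptions for illustration.
complaints = pd.DataFrame({
    "precinct":   ["A", "A", "B", "B", "B", "C", "C", "C"],
    "offense":    ["theft", None, "assault", "theft", None, "theft", "assault", None],
    "victim_age": [34, np.nan, 27, np.nan, np.nan, 51, 45, np.nan],
    "latitude":   [40.71, 40.72, np.nan, 40.68, 40.69, 40.75, np.nan, 40.74],
})

# Per-field missingness rate across all records.
missing_rate = complaints.isna().mean().sort_values(ascending=False)
print("Missingness rate by field:")
print(missing_rate)

# Crude non-ignorability check: does a field's missingness depend on another
# observed field (here, precinct)? A large spread across groups suggests the
# data are not missing completely at random.
for col in ["offense", "victim_age", "latitude"]:
    by_precinct = complaints[col].isna().groupby(complaints["precinct"]).mean()
    print(f"\n{col}: share missing by precinct")
    print(by_precinct)
```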
