Addressing the Harmful Effects of Predictive Analytics Technologies
The Challenge: Inequitable Consequences of Predictive Analytics Technology
Predictive analytics are information technologies that learn from historical data to predict the future behavior or outcomes of individuals or groups, with the aim of informing better decisionmaking.1 They often employ data-mining techniques to identify patterns in large data sets and apply mathematical formulas to assess the probabilities associated with different variables and outcomes. In commercial settings, predictive analytics enables recommendation features on entertainment services like Netflix, advertising platforms like Instagram, and shopping platforms like Amazon. However, its increasingly common use in the public sector creates problems in sensitive social domains.
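To make the mechanics concrete, the following is a minimal sketch in Python of the workflow described above: a statistical model is fit to historical records and then emits a probability score for a new case. The use of the scikit-learn library, the features, and all data are illustrative assumptions, not a description of any deployed tool.

```python
# Minimal illustration of a predictive analytics workflow (hypothetical):
# fit a model to historical records, then score a new case.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: each row is a past case described by numeric
# features (invented here, e.g., prior agency contacts, an area rate).
X_history = np.array([
    [3, 0.8],
    [0, 0.2],
    [5, 0.9],
    [1, 0.3],
    [4, 0.7],
    [0, 0.1],
])
# Recorded outcomes for those cases (1 = event occurred, 0 = it did not).
y_history = np.array([1, 0, 1, 0, 1, 0])

# "Learning from historical data": estimate a weight for each feature.
model = LogisticRegression().fit(X_history, y_history)

# Scoring a new case: the output is a probability generalized from past
# patterns, not an individualized assessment of the person involved.
new_case = np.array([[2, 0.6]])
risk_score = model.predict_proba(new_case)[0, 1]
print(f"Predicted risk score: {risk_score:.2f}")
```

Whatever the model family, the structure is the same: patterns in the historical data fully determine the score, which is why the quality and provenance of that data matter so much.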
In law enforcement, “predictive policing” technologies are used to predict where a crime may occur or who may be a victim or perpetrator of a crime in a given window of time, yet the technology’s reliance on biased police data can lead its predictions to perpetuate discriminatory practices and policies.2 In housing, coordinated entry assessment tools are used to predict vulnerability within the underhoused population in order to prioritize the allocation of housing assistance opportunities (such as emergency shelter or permanent supportive housing), but research has demonstrated that the most prominent tool is biased against Black and Indigenous people and other people of color.3 In child welfare, predictive risk-modeling tools are used to predict maltreatment by caregivers to inform decisions made by agency workers, but biased agency data and predictive variables lead these tools to assign higher risk scores to poor and minority families, which results in negative or punitive actions.4 Since predictive analytics necessarily relies on historical data, when it is used in sectors with complicated social contexts and histories, the technology runs a high risk of reproducing and reinforcing historical practices, policies, and conditions. Compounding these concerns is the fact that the predictions produced by these technologies are generalizations, rather than the individualized assessments that should inform consequential decisions, such as whether to provide temporary housing or to remove a child from a home.
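The feedback dynamic described above can be shown with a short simulation. In this hypothetical sketch (Python with numpy and scikit-learn; every parameter is invented), two groups behave identically, but one group’s incidents are recorded at twice the rate, and a model trained on those records scores that group as higher risk.

```python
# Hypothetical demonstration of how biased historical data propagates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
behavior = rng.normal(0, 1, n)     # identical distribution for both groups

# Recording bias: group A's incidents are logged at double the rate of
# group B's, even though the underlying behavior is the same.
record_rate = np.where(group == 0, 0.6, 0.3)
recorded = (rng.random(n) < record_rate).astype(int)

# Train on the biased records, as a deployed system would.
X = np.column_stack([group, behavior])
model = LogisticRegression().fit(X, recorded)

# Probe: two individuals with identical behavior, different groups.
probe = np.array([[0, 0.0], [1, 0.0]])
scores = model.predict_proba(probe)[:, 1]
print(f"group A score: {scores[0]:.2f}, group B score: {scores[1]:.2f}")
# The score gap reflects the recording bias, not any behavioral
# difference: the model reproduces the historical pattern.
```

Nothing in the model is malicious; it simply learns the skew in its training data, which is the mechanism behind the disparities documented in the policing, housing, and child welfare examples.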
Currently, no laws or regulations govern the design and use of predictive analytics technologies. The lack of constraints means that important societal questions, such as what to predict, what variables to include in prediction algorithms, how much weight to assign each variable, and what standards of accuracy to require, are left to the discretion of engineers and data scientists without any form of public accountability. These concerns are exacerbated by the fact that the risks posed by predictive analytics technologies are not always immediately apparent, and there are often legal and practical impediments to redressing harms. For example, in law enforcement, housing, and child welfare, individuals harmed by decisions made using predictive analytics would not initially know that a technology was involved in the decisionmaking. Additionally, traditional means of redress, such as administrative appeals, may be ill-suited to addressing the legal concerns posed by predictive analytics, given the lack of transparency regarding how these technologies work and the novelty of their use in the public sector.5
The Solution: Leveraging Existing Policy Approaches for High-Risk Technologies
Since the implications of predictive analytics technologies can vary across sectors, initial policy interventions must be diagnostic or investigatory while remaining responsive to immediate concerns. The following three proposals are derived from existing draft legislation targeting high-risk technologies, and each seeks to surface the information needed to identify long-term solutions.
Moratorium and Impact Study on Long-Term Validity of Predictive Analytics in Government
Considering the immediate and varied harms associated with the current use of predictive analytics in the public sector, a moratorium should be established to prevent further harm.6 This legislative intervention should also require and fund an impact study covering the use of the technology within government, its potential benefits and risks, the issues that require further study before government use is permissible, and recommendations for addressing challenges and opportunities.7 The impact study should be co-led by the Government Accountability Office and the National Institute of Standards and Technology, and it should require consultation with experts and with the local communities where predictive analytics have been in use.
Transparency Requirements
While evidence of predictive analytics use within various government sectors is emerging through investigative reporting,8 research,9 and some official disclosures,10 the full spectrum of uses within federal, state, and local governments remains unknown. Legislation should therefore mandate annual public disclosure of predictive analytics technologies acquired or used with federal funds, along with details regarding their use and outcomes.11 Such transparency requirements can yield valuable insight into the prevalence and impact of this technology.
Algorithmic Impact Assessments
Algorithmic impact assessments seek to evaluate the risks of data-driven technologies by combining public agency review with public input to inform the safeguards needed to minimize those risks.12 Such assessments have been implemented in Canada,13 and there are U.S. legislative proposals14 that include this intervention, though some are targeted at commercial entities rather than government agencies. Complementing the above proposals, algorithmic impact assessments offer useful information about the potential benefits and challenges of predictive analytics. They should also incorporate public consultation and require government agencies to proactively assess the need for formal policies and safeguards to mitigate risks.
Conclusion
Government decisions that are likely to seriously affect individuals’ lives should not be made in a black box. Preventing the harms of predictive analytics will require studying the technology’s use and potential for abuse, imposing strict transparency obligations when it is used, and conducting impact assessments of predictive algorithms. If these technologies are to contribute to public decisionmaking, the onus must be on the government to prove that the tools it uses do not exacerbate past and present inequities.
Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy & Law, and a senior fellow with GMF Digital.
1 Eric Siegel, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, Wiley, 2016.
2 Rashida Richardson, Jason M. Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review Online, 2019.
3 Catriona Wilkey et al., Coordinated Entry Systems: Racial Equity Analysis of Assessment Data, C4 Innovations, 2019.
4 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, St. Martin’s Press, 2018.
5 Robert Brauneis and Ellen P. Goodman, “Algorithmic Transparency for the Smart City,” Yale Journal of Law & Technology, 2018.
6 See, for example, S.4084 – Facial Recognition and Biometric Technology Moratorium Act of 2020, Congress, introduced June 25, 2020.
7 See H.R.6929 – Advancing Facial Recognition Act, Congress, introduced May 19, 2020; and H.R.827 – AI JOBS Act of 2019, Congress, introduced January 28, 2019.
8 Kathleen McGrory and Neil Bedi, “Pasco’s Sheriff Created a Futuristic Program to Stop Crime Before It Happens. It Monitors and Harasses Families Across the County,” Tampa Bay Times, September 3, 2020.
9 Catriona Wilkey et al., Coordinated Entry Systems: Racial Equity Analysis of Assessment Data.
10 Allegheny County, Allegheny Family Screening Tool, 2020.
11 See S.2689 – No Biometric Barriers to Housing Act of 2019, Congress, introduced October 23, 2019.
12 Dillon Reisman et al., Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute, 2018; and Ansgar Koene et al., A Governance Framework for Algorithmic Accountability and Transparency, European Parliamentary Research Service, 2019.
13 Government of Canada, Algorithmic Impact Assessment Webpage, 2020.
14 See S.1108 – Algorithmic Accountability Act of 2019, Congress, introduced April 10, 2019; and S.2637 – Mind Your Own Business Act of 2019, Congress, introduced October 17, 2019.