Judges across US Using Racially Biased Software in Assessing Defendants’ Risk of Committing Future Crimes

by Vins

In 2014, then US Attorney General Eric Holder warned that so-called risk scores might be injecting bias into the nation’s judicial system. As ProPublica reported in May 2016, courtrooms across the country use risk scores, also known as risk assessments, to rate a defendant’s risk of future crime and, in many states—including Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin—to unofficially inform judges’ sentencing decisions. The Justice Department’s National Institute of Corrections now encourages the use of such assessments at every stage of the criminal justice process.

Although Holder called in 2014 for the US Sentencing Commission to study the use of risk scores because they might “exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system,” the Sentencing Commission never did so. Angwin, Larson, Mattu, and Kirchner’s article reports the findings of an effort by ProPublica to assess Holder’s concern. As they report, ProPublica “obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years.” The ProPublica study was specifically intended to assess whether an algorithm known as COMPAS, or Correctional Offender Management Profiling for Alternative Sanctions, produced accurate results in its efforts to predict “criminal personality,” “social isolation,” “substance abuse” and “residence/stability.”

Courts across the country provide judges with risk ratings based on the COMPAS algorithm or comparable software. Broward County, Florida—the focus of ProPublica’s study—does not use risk assessments in sentencing, but it does use them in pretrial hearings, as part of its efforts to address jail overcrowding. As ProPublica reported, judges in Broward County use risk scores to determine which defendants are sufficiently low risk to be released on bail pending their trials.

Based on ProPublica’s analysis of the Broward County data, Angwin, Larson, Mattu, and Kirchner reported that the risk scores produced by the algorithm “proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” Even when the full range of crimes was taken into account, the algorithm was only “somewhat more accurate” than a coin flip. The study also found significant racial disparities, as Holder had feared. “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants,” they reported.
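The disparity described above is a gap in false positive rates: among defendants who did *not* go on to reoffend, black defendants were flagged as high risk far more often than white defendants. A minimal sketch of that comparison, using invented toy data rather than ProPublica’s actual Broward County dataset (the numbers below are chosen purely for illustration):

```python
# Illustrative sketch with hypothetical data (NOT ProPublica's Broward
# County records). A "false positive" is a defendant flagged high risk
# who did not in fact reoffend within the follow-up period.

def false_positive_rate(records):
    """records: list of (flagged_high_risk, reoffended) boolean pairs.

    Returns the share of non-reoffenders who were flagged high risk.
    """
    flags_for_non_reoffenders = [
        flagged for flagged, reoffended in records if not reoffended
    ]
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Toy records, one (flagged_high_risk, reoffended) pair per defendant.
black_defendants = [(True, False), (True, False), (True, True),
                    (False, False), (False, True)]
white_defendants = [(True, False), (False, False), (False, False),
                    (True, True), (False, True)]

fpr_black = false_positive_rate(black_defendants)  # 2 of 3 non-reoffenders flagged
fpr_white = false_positive_rate(white_defendants)  # 1 of 3 non-reoffenders flagged
```

With these made-up numbers the false positive rate for the first group is twice that of the second, echoing the kind of gap ProPublica reported; the real analysis of course used thousands of actual Broward County records.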

This disparity is not explained by defendants’ prior crimes or the types of crime for which they were arrested. After running a statistical test that controlled for the effects of criminal history, recidivism, age, and gender, black defendants were still 77 percent more likely to be identified as a higher risk to commit a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind, compared with their white counterparts.
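“Controlling for” other factors here means fitting a regression that includes race alongside the control variables, then checking whether the race coefficient remains significant. A minimal sketch of that idea, using a plain gradient-descent logistic regression on synthetic data (the feature names, coefficients, and data below are all invented for demonstration and are not ProPublica’s model or dataset):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain batch-gradient-descent logistic regression.

    X: list of feature rows; y: list of 0/1 labels.
    Returns weights, one per feature, with the intercept last.
    """
    n_features = len(X[0])
    w = [0.0] * (n_features + 1)
    for _ in range(epochs):
        grad = [0.0] * (n_features + 1)
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            err = sigmoid(z) - yi  # prediction error drives the gradient
            for j in range(n_features):
                grad[j] += err * xi[j]
            grad[-1] += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad)]
    return w

# Synthetic data: the high-risk label depends on prior offenses (a
# legitimate control) AND on group membership (the disparity of interest).
random.seed(0)
X, y = [], []
for _ in range(400):
    group = random.randint(0, 1)   # 1 = hypothetical disadvantaged group
    priors = random.randint(0, 5)  # prior offenses (control variable)
    true_logit = -2.0 + 0.5 * priors + 1.2 * group
    y.append(1 if random.random() < sigmoid(true_logit) else 0)
    X.append([group, priors])

w = fit_logistic(X, y)
# A positive fitted weight on the group feature, w[0], means group
# membership still predicts a high-risk label even after the model
# accounts for prior offenses - the pattern the 77 percent figure reflects.
```

The design point is that if the risk label were fully explained by the control variables, the group coefficient would shrink toward zero; a persistently positive coefficient is what “the disparity is not explained by prior crimes” means in regression terms.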

Northpointe, the for-profit company that created COMPAS, disputed ProPublica’s analysis. However, as ProPublica noted, Northpointe deems its algorithm to be proprietary, so the company will not publicly disclose the calculations COMPAS uses to determine defendants’ risk scores, making it impossible for either defendants or the public “to see what might be driving the disparity.” In practice, this means that defendants rarely have opportunities to challenge their assessments.

As ProPublica reported, the increasing use of risk scores is controversial and has garnered media coverage, including articles by the Associated Press, the Marshall Project, and FiveThirtyEight last year.

Sources:

Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Jeff Larson, Surya Mattu, Lauren Kirchner and Julia Angwin, “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, May 23, 2016, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

Student Researcher: Hector Hernandez (Citrus College)

Faculty Evaluator: Andy Lee Roth (Citrus College)