The Los Angeles Police Department embraced predictive policing in 2015, but it has taken until now for the department’s assortment of once-shadowy data-based operations to be thoroughly vetted in public.
In the end, that’s the essential problem to be solved — the lack of transparency and public accountability in deploying crime-targeting tools that could so easily be misused to oppress rather than protect neighborhoods already struggling with both crime and heavy-handed policing. It took years of work by activists to bring programs like LASER (a data-crunching operation that identifies crime hot spots) and PredPol (a software program that predicts property crimes) into the light of day. Those activists are to be commended, but they are off base in their demands that police scrap the tools entirely.
Data, used properly, can enhance public safety. Police should be encouraged to use it, as long as they are open about what they are doing, and as long as they heed legitimate criticism and adjust their programs accordingly. Failure to carefully tailor predictive policing programs invites invasion of privacy, racial profiling and other unacceptable side effects.
Problems with the LAPD’s predictive policing project were outlined in a report presented to the Police Commission on Tuesday by Inspector General Mark Smith. Smith found that officers used inconsistent criteria in targeting and tracking people they considered to be most likely to commit violent crimes.
The department is due to respond in full on April 9, but LAPD Chief Michel Moore already told the commission that he would make some adjustments to the program.
The importance of data in policing should be obvious, and in concept is nothing new. Police have always kept an eye out for felons who return to their old neighborhoods after their release from prison. Gang leaders have long been watched to ensure that they don’t restart criminal enterprises that were shut down when they were incarcerated. Now, with the advent of newer technologies, algorithms and other computer programs can even more effectively predict which people in the community bear closer scrutiny.
Promoters of computerized risk-assessment tools argue that the programs eliminate whatever idiosyncrasies or biases individual officers may bring when they act merely on their own hunches, and can help police more accurately target people most likely to commit violent crimes.
But that enthusiasm may be unwarranted. Opponents make a good case that, instead of eliminating bias, algorithms actually enhance it.
Consider, for example, a program that measures a person’s likelihood of being arrested based on a set of factors that include how many times he’s been arrested previously, whether he is on probation or parole, and how many crimes have been committed in the neighborhood where he lives. Using that data, police may find that people who have been arrested three times are likely to be arrested again, cueing officers to track others in that situation.
The problem is that we also have data showing police arrest African Americans and Latinos more often than whites who have committed the same crimes, in part because their neighborhoods are more heavily policed. They are also prosecuted more often for the same crimes, so they end up in jail or on probation and parole more often for the same crimes. If the algorithm crunches arrest, incarceration and probation or parole data and then spits out a risk assessment, it will signal to cops that the black or Latino subjects — already subject to unequal criminal justice treatment — ought to be more closely watched. The cycle of inequity will be repeated, this time enhanced by the data “science” that is supposed to erase bias.
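The feedback loop described above can be made concrete with a toy simulation. This sketch uses entirely hypothetical numbers — it is not based on any real policing data or on how LASER or PredPol actually work — but it shows how arrest counts can diverge between two neighborhoods with identical underlying offense rates, purely because one is patrolled more heavily:

```python
import random

random.seed(0)

# Two hypothetical neighborhoods with the SAME underlying offense rate.
TRUE_OFFENSE_RATE = 0.10  # fraction of residents who offend each year (assumed)

# Assumption: neighborhood A is patrolled twice as heavily, so an offense
# there is twice as likely to result in an arrest.
detection = {"A": 0.60, "B": 0.30}

RESIDENTS = 10_000
YEARS = 3
arrests = {"A": 0, "B": 0}

for year in range(YEARS):
    for hood, p_detect in detection.items():
        for _ in range(RESIDENTS):
            offended = random.random() < TRUE_OFFENSE_RATE
            if offended and random.random() < p_detect:
                arrests[hood] += 1

# A risk tool trained on arrest counts alone now scores neighborhood A as
# roughly twice as "risky" as B — which, deployed naively, would justify
# even heavier patrols in A, producing still more arrests next cycle.
risk = {hood: arrests[hood] / (RESIDENTS * YEARS) for hood in arrests}
print(risk)
```

The point of the sketch is that nothing in the arrest data distinguishes "more crime" from "more policing"; an algorithm fed only the former quantity inherits the latter's bias.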
This same issue is at the center of discussions about using risk assessment tools in bail reform, where algorithms are proposed in lieu of printed bail schedules to help judges decide which suspects to keep in jail and which to release without bail pending trial. It shows up in child welfare policy, where “structured decision-making” programs are meant to help social workers decide when to remove children from their homes and when to leave them in place. It is likely to appear in parole hearings, as boards weigh whether to keep convicts behind bars or set them free. In each arena, there is a legitimate concern that algorithms will perpetuate long-standing biases rather than eliminate them. Along the way, such reliance will turn decision makers into tools, rather than the other way around.
Many activists believe that the misuse of data is inevitable and therefore that these techniques ought to be scrapped entirely. But that would leave us with police who bumble in darkness. That’s no recipe for either equity or public safety.