Image via Jason Koebler on Shutterstock

Opinion

Opinion: Why predictive policing is fundamentally unjust

Nadia Chung

April 18, 2021

What happens when we give technology the power to magnify and hyper-exploit our biases? 

The question of algorithms outpacing their utility and perpetuating structural violence is no longer a dystopian hypothetical, but rather a terrifying reality of contemporary society. 

The roots of predictive policing can be traced back to early experimentation in the 1920s. However, empowered by the growth of artificial intelligence, this technology was formally adopted by police departments by 2012.

In using algorithms to analyze data and identify targets for police intervention, predictive policing aims to prevent crime by establishing when, where and by whom future crimes might be committed. Police departments use this data and its indication of “hotspots” to determine where to send officers and, often, who they expect to commit crimes.

In 2014, Chicago police preemptively visited residents who were “marked as likely to be involved in future violent crimes” on a computer-generated “heat-list,” according to the Guardian.

These civilians were suspects for crimes that didn’t even exist and crimes they never committed. Why were the police investigating them?

An algorithm had determined that their gender, race, socioeconomic status and location met the “profile of a criminal.” 

The use of stereotypes to deem someone a criminal is, in itself, blatantly unjust. But, the egregious faults of predictive policing don’t end there.

I see four key issues with predictive policing: entrenching bias, inaccuracy, lack of transparency and human rights abuses. 

First, predictive policing further entrenches bias and prejudice in the criminal justice system. This is, in part, the result of its fundamentally flawed methodology. The use of algorithms to analyze massive quantities of data creates feedback loops that reinforce themselves over time.

When police are constantly sent to the same neighborhoods, those will be the neighborhoods where police see the most crime, simply because they happen to be present there. Thus, the algorithm learns to send even more police to those neighborhoods. Meanwhile, according to the NYU Law Review, the lack of observations elsewhere prevents the algorithm from “learning that the crime rates of the two neighborhoods are actually very similar.”

On top of this, predictive policing promotes a dangerous all-or-nothing mentality: if region A has a crime rate of 10% and region B has a crime rate of 11%, the algorithm will settle on region B and concentrate patrols there, even though the difference is marginal.
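To make that feedback loop concrete, here is a minimal Python sketch. The crime rates and the “patrol wherever the most crime has been recorded” rule are entirely hypothetical stand-ins for a real predictive model, but they capture the dynamic the NYU Law Review describes: recorded crime piles up wherever patrols already are, and the data never reveals how similar the two neighborhoods really are.

```python
import random

random.seed(0)

# Hypothetical underlying crime rates for two nearly identical neighborhoods
true_crime_rate = {"A": 0.10, "B": 0.11}

# Crime the police have actually recorded in each neighborhood so far
recorded_crime = {"A": 0, "B": 0}

for day in range(1000):
    # All-or-nothing rule: patrol only the neighborhood with the most recorded crime
    patrolled = max(recorded_crime, key=recorded_crime.get)

    # Crime is only recorded where officers are present to observe it
    if random.random() < true_crime_rate[patrolled]:
        recorded_crime[patrolled] += 1

print(recorded_crime)
# Nearly all recorded crime ends up in one neighborhood, so the rule keeps sending
# patrols there; the data never shows that the true rates are almost identical.
```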

Given that arrest rates are disproportionately high in Black and Latinx communities in California, these inescapable feedback loops, compounded by the algorithm’s all-or-nothing mentality, mean that predictive policing serves to perpetuate systemic racism.

Perhaps even worse, the public’s perception of technology as “decisively correct” may mean that bias is encoded within a program that is masked under a facade of objectivity and (misplaced) trust. 

What makes this most terrifying is that none of these concerns are merely theoretical — they are founded in real-life statistics and the disproportionate wrongful arrests of Black civilians.

When predictive policing is used, Black defendants are two times more likely than white defendants to be incorrectly placed on heat lists for committing future crimes and are 77% more likely to be wrongfully pegged as “high risk” for recidivism.

The algorithm’s favor toward some groups and unfair flagging of other groups also reinforces and magnifies wealth bias.

According to the ACLU, predictive policing fails to address white-collar crime, under-investigating and overlooking these offenses even though they occur at higher frequencies. Illegal activities such as cocaine use and the solicitation of prostitution are more likely to occur in boardrooms or fancy hotels than on the streets of poor neighborhoods, the ACLU notes.

Since those aren’t the target areas for police intervention and because white-collar crimes are often intertwined with powerful and influential figures, algorithms don’t send police to those regions.

Consequently, the rich get away with whatever they please, while the poor are over-policed and often wrongfully accused of crimes. For instance, the child abuse prediction model repeatedly fails poor families due to referral bias, oversampling and the algorithm’s inability to “distinguish between parenting while poor and poor-parenting,” according to the ACLU.

Ultimately, predictive policing is a clear violation of the 14th Amendment’s equal protection clause because the models target people based on both race and wealth.

These problems are further exacerbated by the second key issue with predictive policing: the algorithm’s inaccuracy.

The use of predictive policing programs is plagued by extraordinarily high rates of false positives. Even if the vast majority of these false positives end with the suspect being released or found not guilty, sociological and criminological research has found that the mere process of being accused can lead to stigmatization and the development of self-fulfilling prophecies.

This should be terrifying to us all given that “only 20% of the people predicted to commit violent crimes actually went on to do so.” In other words, 80% of the people that predictive policing systems flag for police intervention are innocent of the predicted crime. Worse yet, the algorithm was found to be no more accurate than a coin flip once misdemeanors were taken into account.

Even just looking at the situation logically, feeding bad data into the system undermines any potential accuracy. Because “data collected by the police is notoriously manipulated, glaringly incomplete” and polluted by bias, predictive policing hands power to arbitrary data.

Even taking predictive policing at its best-case scenario, where it somehow managed to be accurate, the methodology itself still fails. This is because of a double bind: The more accurate the program is, the less tactical utility it offers.

With regard to predictive technology, “‘accurate’ means that the analyst designs an analysis in which as many future crimes as possible fall inside areas predicted to be high-risk,” according to Walter Perry’s “Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations.”

But, in practice, this looks like flagging entire cities as high risk to “gain accuracy.”

For instance, analysts found a hotspot that could capture 99% of future crimes in Washington, D.C., but upon further evaluation, that hotspot covered more than two-thirds of the city, according to Perry. At that point, one must question whether the program is benefiting anyone at all.
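A back-of-the-envelope sketch of that trade-off, using made-up numbers rather than Perry’s actual data, shows why capture rate alone says little about a prediction’s tactical value:

```python
# Toy numbers (not taken from the RAND report): each entry pairs the fraction of the
# city flagged as "high-risk" with the fraction of future crime that falls inside it.
hotspot_options = [
    (0.05, 0.30),  # small, focused hotspot
    (0.25, 0.65),  # broader sweep
    (0.67, 0.99),  # roughly the D.C. example: ~99% "accuracy" by flagging 2/3 of the city
]

for area, capture in hotspot_options:
    # Lift: how much better this prediction is than patrolling the city at random
    lift = capture / area
    print(f"Flag {area:.0%} of the city -> capture {capture:.0%} of crime (lift {lift:.1f}x)")

# The largest hotspot scores the highest "accuracy" but offers the least guidance:
# its lift is barely above 1, i.e. scarcely better than random patrolling.
```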

The third key issue with predictive policing is its lack of transparency and the public’s inability to audit or check the programs. Predictive policing software is provided to police departments by private companies such as PredPol or Clearview.

Immediately, this raises a few red flags because private companies are inherently less accountable to the public and hold a greater capacity to shield information even from the government. In the entire history of predictive policing’s existence, there has never been outside verification of any of these programs.

The implications of this are potentially terrifying as we have no way to check the programs that are constantly surveilling us. This denies citizens privacy and arguably denies us a certain level of autonomy. It is also particularly contradictory for democratic countries to use this technology because secrecy fundamentally prevents public participation and engagement.

Furthermore, this runs counter to democracy because government surveillance without due cause (even just generally) fosters distrust and disunity.

The fourth key issue that I see with predictive policing is how it facilitates the propagation of human rights abuses.

Consider the case of China, where predictive policing is fueling a crackdown on ethnic minorities and dissenters. Any sign of political disloyalty can be tracked through Wi-Fi activity, bank records, vehicle ownership, and security cameras with facial recognition. Feeding this data into a predictive algorithm has enabled Chinese police to torture and punish thousands of civilians.

Likewise, India’s quest to stop crime before it starts has inspired the use of facial recognition software in its predictive policing systems. However, that quest is quickly degrading citizens’ rights within what Human Rights Watch, in a 2009 report, called a “broken system,” riddled with “dysfunction, abuse, and impunity of the police.”

Human Rights Watch has also connected predictive policing in India to “arbitrary arrest, detention, torture, and extrajudicial killings.” While not all countries that use predictive policing are using it to fuel authoritarian agendas, its use this way in even just a couple of countries is an independent reason to push for either its reform or removal.

In America, as of this year, only four cities have banned predictive policing, all of them in California. In every other city, predictive policing is completely legal and can be adopted by any police department at any moment.

Looking toward the future of this technology, I find it difficult to picture a sequence of events where the algorithm sheds its methodological flaws and no longer harms innocent people. I can picture a world where predictive policing is repurposed and used exclusively to solve past crimes.

However, in its current state — and in any adaptation where this technology continues to aim toward predicting the future — predictive policing is unjust. 
