
Algorithms of injustice: Artificial intelligence in policing and surveillance

Originally published: Red Flag on November 26, 2021, by Roxanne Kelly. (Posted December 1, 2021.)

Nijeer Parks knows all too well about the injustices committed by police in the United States. In February 2019, the then 31-year-old Black man from New Jersey had a warrant out for his arrest. His alleged crime: stealing snacks from a hotel gift store and fleeing when confronted by police. Upon hearing about the warrant, Parks went to the police station to (he thought) quickly remedy what was a clear case of mistaken identity. At the time of the alleged crime he was 30 miles away, sending money to his partner at a Western Union.

Upon reaching the station, however, Parks had no opportunity to demonstrate his innocence. He was handcuffed and thrown into the county prison, where he stayed for 10 days before being granted bail. The prosecutor in the case sought a 20-year sentence, taking into account prior drug convictions and trumped-up charges including shoplifting, resisting arrest and aggravated assault. Taking a plea deal, he was told, would only reduce the sentence to six years. Parks and his partner used all their savings to fight the charges, and fortunately the case was dropped in November 2019, after it became clear the police had no real evidence to back up their charges.

It’s disgraceful enough that anyone could be imprisoned for days just for being accused of a petty crime. But Parks’ case had an additional morbid twist: the decision to arrest him was based solely on the workings of a (faulty) computer algorithm. An image from a fake driver’s licence taken from the suspect at the gift store was scanned by facial recognition software, which identified Parks as a match. This was the only evidence used for the arrest, which is ridiculous given that pictures of Parks and the real suspect show few similarities apart from both men being Black and having a beard. And this case is far from unique. Parks is one of at least three Black men in the U.S. who have been wrongly arrested after being identified by facial recognition technology.

Today, big tech is as important to policing operations as the companies that manufacture police weapons. Artificial intelligence (AI) is being used by police around the world to streamline criminal investigations, engage in mass surveillance and, supposedly, predict and stop crimes before they occur.

As with the use of AI in the military and workplaces like Amazon, the kind of AI used by police doesn’t actually involve genuinely intelligent machines. Instead, what AI can achieve for policing is the integration and analysis of huge amounts of data—piecing it together like a puzzle to help direct law enforcement operations. The concerning thing for ordinary people is where this data comes from: the many “digital traces” we all leave behind minute by minute, hour by hour as we go about our daily lives. Thanks to a rapidly growing data brokering industry estimated, in 2019, to be worth US$232 billion, our electronic data is mined without our knowledge, packaged and sold to the highest bidder. Police are one of the industry’s main clients.

One of the industry’s major players is facial recognition company Clearview AI. The New York-based company has harvested, without permission, more than 3 billion pictures from Instagram, LinkedIn, YouTube and Facebook. The company’s algorithms match photos from this database to images loaded by clients into their facial recognition software. A report published by BuzzFeed News found that employees from more than 1,100 U.S. police departments have used Clearview AI. And the technology isn’t just restricted to the U.S. Here in Australia, although police have previously denied using the technology, a leaked list of Clearview AI customers revealed that the Australian Federal Police, as well as state police in Victoria, Queensland and South Australia, have trialled the company’s technology in recent years.
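Clearview doesn’t publish its matching pipeline, but facial recognition systems of this kind generally work by converting each face into a numerical “embedding” and then searching a database for the closest one. The following is a minimal Python sketch of that nearest-neighbour step, with invented embeddings, profile names and threshold; it illustrates the general technique, not Clearview’s actual code.

```python
import numpy as np

# Purely illustrative: in a real system each embedding would be produced
# by a neural network from a face image. These small vectors are invented.
database = {
    "profile_A": np.array([0.12, 0.88, 0.45, 0.31]),
    "profile_B": np.array([0.90, 0.05, 0.22, 0.67]),
    "profile_C": np.array([0.15, 0.80, 0.50, 0.28]),
}

def cosine_similarity(a, b):
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, db, threshold=0.95):
    """Return the database entry most similar to the probe photo's embedding.

    A single threshold decides whether a "match" is reported at all: set it
    too loosely and unrelated people get returned, which is the failure mode
    in cases like Nijeer Parks'.
    """
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in db.items()]
    name, score = max(scored, key=lambda pair: pair[1])
    return (name, score) if score >= threshold else (None, score)

probe_embedding = np.array([0.14, 0.82, 0.48, 0.30])  # embedding of the probe photo
print(best_match(probe_embedding, database))
```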

Other AI companies gain access to much more than just our social media profiles. Palantir, a company that was set up with funding from the U.S. Central Intelligence Agency, is an example. The company was named, ominously, after the powerful palantíri seeing stones from The Lord of the Rings and, since its founding in 2003, has been known mostly for assisting with surveillance operations for the U.S. military. It is widely credited (although this hasn’t been officially confirmed) with tracking down Osama bin Laden.

In more recent times, Palantir has expanded its client base well beyond the military, and chief among its new clients are the police. The company’s software allows cops to connect data from multiple sources to determine relationships between different individuals, locations and objects. Crime data from police can be combined with anything from birth and death data, phone records, automatic licence plate readings and social media posts to stitch together an intricate social web, showing police who is a relative of whom, who is dating whom, the physical and personal details of these individuals, and what phones and cars they are using.
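Brayne describes this as link analysis: records from different agencies become nodes in a graph, and relationships become edges, so a single query can walk from a licence plate to a person to that person’s associates. A minimal, hypothetical sketch of that data structure, using the open-source networkx library and entirely invented records, might look like this:

```python
import networkx as nx

# Hypothetical records drawn from different sources (all invented).
g = nx.Graph()
g.add_edge("Person: J. Smith", "Vehicle: ABC-123", relation="registered owner")      # vehicle registry
g.add_edge("Vehicle: ABC-123", "Location: 5th & Main", relation="plate reader hit")  # licence plate readers
g.add_edge("Person: J. Smith", "Person: K. Jones", relation="emergency contact")     # court records
g.add_edge("Person: K. Jones", "Phone: 555-0199", relation="subscriber")             # phone records

# One query can now connect a plate reader sighting to a person's associates.
for path in nx.all_simple_paths(g, "Location: 5th & Main", "Phone: 555-0199"):
    print(" -> ".join(path))
```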

In her 2020 book Predict and Surveil: Data, Discretion, and the Future of Policing, Sarah Brayne examines the use of AI by the Los Angeles Police Department (LAPD). She finds that the programs used by police can swiftly produce a list of crime suspects even from the vaguest starting information. A Palantir engineer Brayne interviewed presents her with a hypothetical robbery scenario in which the suspect is “male, average build” with a “black 4-door sedan”, and demonstrates how these scanty details, when entered into the company’s software, generate a list of 13 “matches” with corresponding driver’s licence numbers within the space of a minute.
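Brayne doesn’t reproduce the engineer’s actual query, but the demonstration she describes amounts to filtering pooled records on a handful of vague attributes and returning everyone who fits. A hypothetical sketch of that kind of filter, with invented records and field names, shows how easily such a list is produced:

```python
# Hypothetical records pooled from licensing, arrest and field-stop data (all invented).
records = [
    {"name": "Resident 1", "sex": "M", "build": "average", "vehicle": "black 4-door sedan", "licence_no": "D1001"},
    {"name": "Resident 2", "sex": "M", "build": "average", "vehicle": "black 4-door sedan", "licence_no": "D1002"},
    {"name": "Resident 3", "sex": "F", "build": "slim",    "vehicle": "white hatchback",    "licence_no": "D1003"},
]

def candidate_suspects(records, sex, build, vehicle):
    """Return everyone whose record matches a vague description.

    The criteria are so broad that anyone of the right sex and rough build
    who drives a common type of car ends up on the list.
    """
    return [r for r in records
            if r["sex"] == sex and r["build"] == build and r["vehicle"] == vehicle]

for match in candidate_suspects(records, "M", "average", "black 4-door sedan"):
    print(match["name"], match["licence_no"])
```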

Another disturbing technological frontier is the use of “predictive policing”: algorithms that can supposedly help avert future crimes. AI is very far from achieving the precog-style visions of the future seen in the film Minority Report, but its use in this area is highly problematic nonetheless. Predictive policing algorithms assign individuals a score or rating that, based on a range of data points, is claimed to indicate their likelihood of committing future crimes. Police then use these scores to decide where to direct their resources.
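Vendors rarely disclose their formulas, but published accounts describe these scores as weighted sums of data points such as prior arrests and police contacts. The sketch below is a hypothetical illustration of that kind of scoring; the features and weights are invented, not taken from any real system.

```python
# Hypothetical weights: not taken from any real predictive policing product.
WEIGHTS = {
    "prior_arrests": 5,
    "prior_convictions": 5,
    "on_parole_or_probation": 5,
    "police_stops_last_2_years": 1,   # every stop adds a point
}

def risk_score(person):
    """Weighted sum of data points, as described in accounts of these systems."""
    return sum(WEIGHTS[key] * person.get(key, 0) for key in WEIGHTS)

person = {"prior_arrests": 1, "prior_convictions": 0,
          "on_parole_or_probation": 0, "police_stops_last_2_years": 8}
print(risk_score(person))  # 13: most of the score comes from being stopped, not from convictions
```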

This technology isn’t new. A 2014 report from the Police Executive Research Forum showed that 38 percent of U.S. police departments surveyed were using some form of predictive policing technology. In her book, Brayne examines the LAPD’s Operation LASER (short for “Los Angeles Strategic Extraction and Restoration”), which was launched in 2011. The operation involved the use of Palantir software to generate crime “hot spot maps” and “chronic offender bulletins” (which look like wanted posters for individuals the software has deemed likely to keep offending), both of which are given to police for use on their regular patrols.
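The “hot spot maps” are, at bottom, aggregations of past incident locations into grid cells, with patrols directed to the cells with the highest counts. A toy version of that aggregation, using invented coordinates, might look like this:

```python
from collections import Counter

# Invented incident grid cells; a real system would bucket geocoded crime reports.
incidents = [(3, 7), (3, 7), (3, 8), (1, 2), (3, 7), (9, 4)]

# Count incidents per grid cell.
cell_counts = Counter(incidents)

# The highest-count cells become the "hot spots" patrols are sent to,
# meaning past enforcement patterns decide where future enforcement goes.
for cell, count in cell_counts.most_common(2):
    print(f"grid cell {cell}: {count} recorded incidents")
```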

In Australia, the New South Wales Police’s Suspect Target Management Plan (STMP) has existed in different incarnations since 1999. The STMP assigns risk scores to people with prior convictions to identify who should be subjected to ongoing surveillance. Victoria Police trialled a predictive policing algorithm targeting youth from 2016 to 2018, but have refused to provide any details about the program.

Some have argued that the use of AI technology could be beneficial in policing, possibly helping to reform the criminal justice system. A 2016 report by the Obama administration, for instance, claimed, “When designed and deployed carefully, data-based methodologies can help law enforcement make decisions based on factors and variables that empirically correlate with risk, rather than on flawed human instincts and prejudices”. Accounts of how these technologies work in practice, however, show there is nothing objective or unbiased about them. If anything, the use of computer algorithms to guide police appears only to entrench and exacerbate existing biased policing practices.

This is in part due to weaknesses of the technology itself. The case of Nijeer Parks is just the tip of the iceberg. When the American Civil Liberties Union ran its own test of Amazon’s facial recognition software Rekognition, images of 28 members of the U.S. Congress were falsely matched with photos from a police mugshot database. And just like a racist cop, these algorithms are more likely to get it wrong for certain people already disproportionately targeted by law enforcement. A 2019 study by the National Institute of Standards and Technology, which tested the accuracy of 189 facial recognition algorithms, showed that, depending on the specific algorithm, they were between 10 and 100 times more likely to produce a false positive match for Asian and African American faces than for white faces.
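The NIST figures are ratios of false positive rates between demographic groups. The short sketch below shows how such a comparison is calculated; the counts are invented purely to illustrate the arithmetic and are not taken from the study.

```python
# Invented counts, purely to illustrate how a false positive ratio is computed.
results = {
    "group_A": {"false_positives": 2,  "non_matching_trials": 10_000},
    "group_B": {"false_positives": 40, "non_matching_trials": 10_000},
}

def false_positive_rate(r):
    """Share of non-matching comparisons the algorithm wrongly reported as matches."""
    return r["false_positives"] / r["non_matching_trials"]

fpr_a = false_positive_rate(results["group_A"])
fpr_b = false_positive_rate(results["group_B"])
print(f"group_B is wrongly matched {fpr_b / fpr_a:.0f}x as often as group_A")  # 20x in this toy example
```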

The predictive policing programs show similar biases. The LAPD’s Operation LASER provides a clear example. According to Sarah Brayne’s research, the scoring system used to produce its chronic offender bulletins lets existing police biases in at the ground floor. Offenders are identified by the system partly on the basis of previous criminal convictions and partly on the basis of how many times they have been stopped by police. So someone whom the police have, for whatever reason, been harassing will be identified by the system as a likely future offender, warranting yet more harassment.
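Brayne’s point about bias entering “at the ground floor” can be put as a simple feedback loop: stops feed the score, and the score triggers more stops. The toy simulation below makes that loop explicit; the weights and threshold are, again, invented for illustration.

```python
def score(stops, convictions):
    """Toy score: convictions weigh heavily, but every stop also adds a point."""
    return 5 * convictions + 1 * stops

def simulate(initial_stops, convictions, years, bulletin_threshold=10, extra_stops_if_flagged=6):
    """Each year, being over the threshold puts you on a bulletin, which leads to more stops."""
    stops = initial_stops
    for year in range(years):
        flagged = score(stops, convictions) >= bulletin_threshold
        stops += extra_stops_if_flagged if flagged else 1
        print(f"year {year + 1}: stops={stops}, score={score(stops, convictions)}, flagged={flagged}")

# Someone with no convictions, but already stopped often enough to be flagged,
# keeps accumulating stops and so keeps being flagged.
simulate(initial_stops=10, convictions=0, years=3)
```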

Reports on other predictive policing programs show the targeting of communities already subjected to ongoing police harassment and violence. A 2017 Youth Justice Coalition report on the STMP in New South Wales, for instance, found that 44 percent of those targeted by the program were Aboriginal people, many of whom had no prior convictions. The report details the experience of a young Aboriginal man identified as James. Despite having no prior convictions in the state, James was listed by the program as a likely offender. Following this, he was stopped by police on a monthly basis, and in one incident was capsicum sprayed after questioning officers as to why they were stopping him.

The AI technologies used by police are now also being rolled out to enhance surveillance by other repressive state agencies. Palantir has expanded into immigration enforcement; one of its more recently acquired clients is the U.S. Immigration and Customs Enforcement agency (ICE). The use of Palantir’s technologies to spy on undocumented immigrants has led to some of the biggest ICE raids in the country’s history, including raids on a series of chicken-processing plants in Mississippi in 2019 in which 680 people were arrested.

The clear evidence that these technologies offer no solution to the entrenched racism and other problems in policing has resulted in some resistance to the use of AI programs. The LAPD had to abandon Operation LASER in 2019 after ongoing pressure from Stop LAPD Spying Coalition activists. In recent years, workers at Amazon, Microsoft and Google have demanded that the companies stop supplying AI to the police, ICE and the military. And students at the University of California, Berkeley stopped Palantir from coming onto their campus to hold recruiting sessions.

These activists are right to resist the use of AI technology by the police and other repressive agencies. Policing is rotten because it is an essential part of a rotten capitalist system, one that uses the ongoing surveillance and repression of the working class and poor communities to ensure that the rich stay rich. AI in policing can only contribute to, not solve, the injustices of modern policing. It doesn’t matter whether a cop is armed with a gun or with a computer: we have to take a stand against them all.

Monthly Review does not necessarily adhere to all of the views conveyed in articles republished at MR Online. Our goal is to share a variety of left perspectives that we think our readers will find interesting or useful. —Eds.