Police in the US are increasingly relying on artificial intelligence (AI) tools to fight crime, but critics argue that these systems often perpetuate injustice and automate existing biases in policing. AI facial recognition has generated numerous false leads, with innocent people arrested despite being miles from the alleged crime scene. These incidents disproportionately affect people of color.
While proponents of AI argue that it provides an objective authority, critics counter that this framing obscures how these systems work: they learn from historical data and project past patterns onto future events, without human judgment. This can exacerbate existing biases in policing, particularly against working-class Black and brown communities. Automating surveillance can also erode accountability, as police forces deflect accusations of targeting specific groups by citing the supposedly objective dictates of AI.
Some AI tools are being used to justify deploying more officers in already militarized areas, further entrenching poverty and inequality. Report-drafting tools like Axon's "Draft One", which generates draft police reports from body-camera audio, have been criticized for introducing cognitive laziness into the legal record, where potentially misleading or inaccurate information becomes permanent.
A recent audit found that only 8-20% of alerts from ShotSpotter, an acoustic gunshot-detection system used by the New York City Police Department, could be confirmed as actual shootings. The company behind ShotSpotter nevertheless claims a 97% accuracy rate, a figure critics dispute, pointing to the frequent lack of corroborating physical evidence.
Despite these concerns, many police departments are eager to adopt AI tools as a way to claim modernity and efficiency, and companies such as Flock Safety have made millions of dollars from the ballooning demand for mass-surveillance tools.
Critics argue that the lack of transparency around AI acquisitions and the contracts between police departments and private capital exacerbates the problem. The NYPD has been criticized for dragging its heels on releasing information about its surveillance arsenal, even after the city passed legislation requiring greater oversight.
The use of AI raises fundamental questions about corporate intellectual-property rights versus citizens' rights to privacy and due process. Critics argue that the "black box" nature of AI systems pits private interests against public trust, with sensitive personal information outsourced to companies whose obligation is to shareholders, not the public.
Ultimately, some critics believe that relying on advanced technology to solve complex social problems like policing is a false promise, one that cannibalizes resources for more effective solutions such as healthcare, affordable housing, and education. As one critic noted, "the idea of modernity and efficiency has been used to justify a lot of expensive promises that don't deliver."