AI is automating injustice in American policing

US police departments are increasingly relying on artificial intelligence (AI) tools to fight crime, but critics argue that these systems often perpetuate injustice and automate existing biases in policing. AI facial recognition technology has generated numerous false leads, with innocent people arrested for crimes committed miles from where they actually were. These incidents disproportionately affect people of color.

While proponents argue that AI provides an objective authority, critics counter that these systems learn from historical data and project its patterns onto future predictions, stripped of human judgment. This can entrench existing biases in policing, particularly against working-class Black and brown communities. Automated surveillance can also erode accountability: police forces can deflect accusations of targeting specific groups by citing the supposedly objective dictates of AI.

Some AI tools are being used to justify deploying more officers in already militarized areas, further entrenching poverty and inequality. Report-drafting tools like Axon's "Draft One," which generates draft police reports from body-camera audio, have been criticized for introducing cognitive laziness into the legal record, allowing misleading or inaccurate information to become permanent.

A recent audit found that only 8-20% of alerts from ShotSpotter, the acoustic gunshot-detection system used by the New York City Police Department, matched confirmed shootings. The company behind ShotSpotter nonetheless claims a 97% accuracy rate, a figure critics dispute, pointing to the frequent absence of physical evidence at alert scenes.
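The two figures are less contradictory than they appear if each side is counting different things. As an illustration only (the alert counts below are hypothetical, and the vendor's actual methodology is not described here), a "97% accuracy" claim can be computed from alerts that customers never flagged as false, while an auditor counts only alerts independently matched to evidence of a real shooting:

```python
# Hypothetical numbers, for illustration of the base-rate gap only.
alerts = 1000                 # assumed alerts in a review period
confirmed_shootings = 140     # alerts matched to physical evidence (~14%, inside the 8-20% audit range)
customer_flagged_false = 30   # alerts the customer reported back as false positives

# Auditor-style metric: share of alerts independently confirmed as real shootings
confirmed_rate = confirmed_shootings / alerts

# Vendor-style metric: share of alerts NOT flagged as false by the customer
not_disputed_rate = (alerts - customer_flagged_false) / alerts

print(f"Confirmed-match rate: {confirmed_rate:.0%}")   # 14%
print(f"Not-disputed rate:    {not_disputed_rate:.0%}")  # 97%
```

Under these assumed numbers, both figures describe the same alerts at once; the dispute is over which denominator, and which standard of confirmation, should count.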

Despite these concerns, many police departments are eager to adopt AI tools as a way to claim modernity and efficiency. Companies such as Flock Safety have made millions of dollars from the ballooning demand for mass surveillance tools.

Critics argue that the lack of transparency around AI acquisitions and contracts between police departments and private capital is exacerbating the problem. The NYPD has been criticized for dragging its heels in releasing information about its surveillance arsenal, despite city legislation requiring greater oversight.

The use of AI raises fundamental questions about corporate intellectual property rights versus citizens' rights to privacy and due process. Critics argue that the "black box" nature of AI systems creates a conflict between private interests and public trust, with sensitive personal information being outsourced to companies whose obligation is to shareholders, not the public.

Ultimately, some critics believe that relying on advanced technology to solve complex social problems like policing is a false promise that cannibalizes resources for more effective solutions, such as healthcare, affordable housing, and education. As one critic noted, "the idea of modernity and efficiency has been used to justify a lot of expensive promises that don't deliver."
 
I'm getting so frustrated with this AI thing in policing 🤯. It's like they're trying to solve complex social problems but all it does is perpetuate more injustice 🚫. These machines are just reflecting the biases we already have, and now they're being used to target people of color even more 🕳️. And don't even get me started on the lack of transparency around these contracts between police and private companies 💸. It's like they're outsourcing our rights as citizens to make a quick buck 💼. We need to be careful here, AI might seem modern and efficient but it's not solving anything when all it does is automate more problems 🤖.
 
AI in policing is hella problematic 🤖🚨 I mean, we're literally using machines to make decisions on our lives and it's all based on data from the past... like what even is that? 🤔 And have you seen those AI facial recognition tools? Total mess! 🤡 Innocent people getting arrested left and right, just because of their skin tone or where they live. It's like, can't we do better than this?

And don't even get me started on the surveillance game 📺🚫 Companies making millions off our data, while we're still fighting for basic rights... it's wild. The NYPD is all like "we need more tech" and I'm over here like "hold up, can you please just give us some actual answers?" 🙄

It's not even about the accuracy of AI tools, it's about who gets affected by them and why 🤝 We're losing sight of what's really important here. Let's focus on solving real problems, like poverty and inequality, instead of outsourcing them to machines 💸
 
I'm not sure I agree that AI tools can never be useful in policing... 🤔 but at the same time, it's pretty clear that they're being misused in so many ways 🚨. Like, on one hand, you've got critics saying that AI is just perpetuating existing biases and targeting already marginalized communities, which I think has some truth to it 📊. But then again, proponents of AI are saying that it's all about objectivity and fairness... and isn't that a pretty lofty goal to aim for? 😅

I don't know, maybe I'm just not seeing the bigger picture here... or maybe I am? 😕 Either way, I think we need to be super careful when implementing new tech in policing, 'cause the stakes are so high 🤯. And yeah, corporate interests should definitely be called out for profiting off of this stuff 💸.

But what if AI could actually help address some of these issues... like providing more resources for underserved communities? 🤝 That would be a game-changer, right? 🙌 (Or would it just create new problems down the line?) 🤔
 
AI tools in policing are super sketchy 🤖💔. I mean, yeah we need to keep our communities safe but this is just perpetuating more problems. People of color are already at a disadvantage, and now AI is just making it worse by giving the police an excuse to target them even more. And what's with all these 'accidents' when it comes to accuracy rates? 8-20%? That's crazy! 🤯 It's like they're just hoping for the best instead of actually getting the facts straight. And don't even get me started on the lack of transparency around how all this works... it's like a big game of cat and mouse where innocent people are getting hurt in the process 😕. We need to rethink our approach to crime-solving, we can do better than just relying on technology that's only going to make things worse 🤝.
 
I'm low-key obsessed with the fact that all these new AI tools are just gonna make policing more messed up 🤖. Like, we're already seeing innocent people getting caught up in false leads, now we're gonna throw in some fancy machine learning algorithm and it's gonna be even worse? The thing is, most of these AI systems are trained on data that's already super biased, so what's the point of even trying to use them? We're just gonna end up perpetuating all the systemic issues that already exist.

And don't even get me started on these companies making millions off of surveillance tools. It's like they're profiting off our fear and lack of trust in law enforcement 🤑. The NYPD is being super shady about their surveillance arsenal too, it's like they're trying to hide something. I'm all for modernizing policing and finding new ways to solve problems, but this AI stuff feels like a bunch of hype to me 🤦‍♀️. We should be investing in real solutions that actually help people, not just lining the pockets of corporate execs 👊.
 
The use of AI in policing is really problematic 🤔. I mean, on one hand, it's cool that they're trying to be more efficient and modern, but on the other hand, it feels like they're just perpetuating existing biases and screwing over innocent people 🚫. Like, who needs AI-powered facial recognition when you've got good old-fashioned human judgment? And don't even get me started on how much money these companies are making off of mass surveillance tools 🤑. It's like they're more interested in lining their pockets than actually helping communities.

I'm also worried about the lack of transparency around all this. I mean, who gets to decide what AI systems are used and for what purposes? And who's accountable when it goes wrong? 🤯 It feels like we're just handing over our rights to privacy and due process to corporations that have no real obligation to us.

I'm not saying technology can't be a tool for good, but we need to be super careful about how we use it. We need to make sure we're prioritizing people's rights and communities over corporate profits 🤝.
 
AI in policing is like trying to use GPS on a Vespa - it sounds cool, but just ends up getting you lost in the streets 🚨💻. I mean, who thought it was a good idea to give machines with a 97% accuracy rate (yeah right 😒) the power to identify bad guys? It's like relying on Siri to get me out of a tight spot - not gonna end well.

And don't even get me started on the cognitive laziness introduced by transcription tools like Axon's "Draft One". Who needs nuance when you can just slap a timestamp and call it a day, right? 🤦‍♂️ The real problem is that these AI tools are being used to justify more officers in already militarized areas - because nothing says 'fair policing' like throwing more bodies at the problem 🚔.

Transparency around AI acquisitions and contracts is basically non-existent, which is like playing Russian roulette with our collective trust. And what's worse, it's all about corporate profits over public interest 💸. The lack of accountability is just a nice way of saying 'we've got a free pass to mess up'.
 
I'm getting really worried about these AI tools being used by the police 🤯... they're basically perpetuating systemic racism and targeting people who already get a raw deal from society. I mean, what's next? AI-powered traffic cops to give fines to parents who can't afford to pay for their kid's speeding ticket? 🚗😱 It's like we're trading one injustice for another.

And don't even get me started on the lack of transparency around these contracts... it's like our public officials are more interested in lining their pockets than keeping us safe and informed. I know my little one is gonna be old enough to ask about this stuff soon, and I wanna know if we're truly putting them at risk with all this AI surveillance 🤔.

Can't we focus on solving real problems like poverty, education, and healthcare instead of chasing some tech "solutions" that are just gonna make things worse? 🤷‍♀️
 
can we talk about how this reliance on AI is gonna be super problematic when it comes to addressing systemic issues? like, AI is just perpetuating the same biases we're trying to solve for. I'm not saying we shouldn't use tech to improve policing, but it's gotta be done in a way that prioritizes people over profits 🤔. and can someone explain how Axon's transcription tool doesn't introduce cognitive laziness into the legal record? like, isn't that just gonna lead to more mistakes being made? also, what's up with ShotSpotter's accuracy rate of 97% if only 8-20% of alerts are actually real shootings? does that sound right to anyone? 🤷‍♂️
 
Ugh, AI in law enforcement is like a total double-edged sword 🤔🚨. On one hand, it's crazy how far they've come with facial recognition tech - I mean, who would've thought that AI could be used to identify people from miles away? But on the other hand, it's basically just perpetuating existing biases and automation of surveillance, which is super problematic 🤕.

And don't even get me started on companies like Flock Safety making bank off this nonsense 💸. I mean, millions of dollars for mass surveillance tools? That's just crazy town 🚀. And what really gets my goat is that police departments are more worried about being modern and efficient than actually addressing the root causes of crime and inequality.

The whole thing just feels like a corporate PR stunt to me 📣. "Hey, let's use AI to solve our problems!" Yeah, because that's exactly how it works... in theory 🤷‍♂️. But what about all those innocent people getting arrested due to false leads? What about the lack of transparency around AI acquisitions and contracts? It's like they're trying to sell us a bill of goods that just doesn't add up 📊.

I'm all for using technology to solve problems, but this whole thing feels like a Band-Aid solution at best 💉. We need to be investing in real solutions like healthcare, affordable housing, education - not just some fancy AI tool that's just gonna perpetuate existing biases and inequalities 🤦‍♂️.
 
ai in policing is a huge red flag 🚨 - it's like they're trying to solve complex social issues with magic pills 🤯. we all know how those usually turn out 😒. the fact that these machines are based on past data and predict future events without human judgment just raises more questions about accountability ⚖️. what's even crazier is that some companies are making bank off this 💸 - it's like they're profiting from people's rights being trampled 🤷‍♂️. transparency is key, but it seems like police departments are too scared to share the truth 🤝. we need to be careful not to confuse corporate interests with what's best for society 📈.
 
AI is just making things worse 🤖😡. It's like they're trying to automate bad policing instead of fixing the root problems. The fact that it's perpetuating biases against ppl of color is a major red flag 🔴🚫. And what's with all these companies making millions off surveillance tools? That's just exploiting people for profit 💸😒. The whole thing feels like a black box that we can't even see into 🤯. We need more transparency, not less 🕵️‍♀️.
 
omg this is so messed up! AI tools in policing are literally creating more problems than they're solving... like, have you seen those facial recognition errors? innocent people getting arrested miles away from the crime scene? it's crazy! and it's not just the technology itself, but also how police departments are using it to justify deploying more officers in already struggling communities. it's like, we need to focus on addressing poverty and inequality, not just throwing more surveillance tools at the problem
 
🤖 AI in policing is a whole no-go 🚫💔. Those machines are super biased 😬 and cause so much harm to innocent ppl 🙅‍♀️. Innocent people get arrested miles away from the crime scene 📍😱. We need more transparency 💡 around AI acquisitions & contracts, stat! 💸 Companies like Flock Safety making millions off mass surveillance tools is straight up creepy 🕵️‍♂️💔. What's the real problem here? 🤔 Not just policing, but income inequality 👥 and poverty 🌮. We should invest in healthcare, housing & education instead 🏥👫📚. AI is not the answer 💔
 
🚔 AI facial recognition tech is like trying to solve a crime with a blindfold on... 🙄 It's only going to perpetuate existing biases. We need human judgment, not algorithms, when it comes to policing communities! 👮‍♂️🕵️‍♀️
 
I'm so worried about this AI thingy being used in policing 🤖😟. Like, I get that technology can be super helpful, but we gotta make sure it's not perpetuating the same old problems we're trying to solve 🚫. The fact that innocent people are getting arrested just because of some bot misidentifying them is just crazy 😨.

And what really gets my goat is how some companies are making millions off this stuff without even being transparent about how they're doing it 💸🤝. I mean, shouldn't we be focusing on actual crime-fighting strategies that actually work? It feels like we're just throwing money at the problem and hoping for the best 🤑.

I'm all for innovation and progress, but not if it comes at the cost of our civil liberties 🔒💔. We need to start thinking about how we can use technology to uplift communities, not just control them 💪. And can we please, for the love of all things good, get some better data on these AI systems? This "black box" nonsense is giving me the heebie-jeebies 🤯!
 
OMG, AI facial recog is like totally outta control right now 🤯! I mean, I get it, crime needs to be fought, but can't we use our common sense instead of these fancy machine learning algorithms? 🙄 They're already making innocent people get arrested and it's all based on flawed data that's been trained on a bunch of biases. Like, who thought this was a good idea? 🤔 And don't even get me started on the lack of transparency around how police departments are using these tools... it's like they're trying to hide something 🤫. We need to rethink our reliance on tech and focus on actual community policing instead 💡.
 
AI tools are so messed up 🤖💻. They can't even get facial recognition right but the cops are like totally trusting them. It's like they're trying to solve a crime with a magic 8 ball 🎱. We need more transparency and oversight, not less 💔. And what about accountability? If an AI tool makes a mistake, who gets held responsible? The cop or the company that sold it? 😒

And don't even get me started on how these tools are being used to justify deploying more officers in already marginalized communities 🚫. It's like they're perpetuating systemic racism with tech 💸. We need to prioritize community safety, not corporate profits 👮‍♀️.

I'm all for innovation and progress, but we gotta make sure it serves the people, not just the powerful 🤝. Transparency is key 🔓. Let's get back to basics: human judgment, community engagement, and a real commitment to justice 💪
 