ICE Agents Used ChatGPT to Write Use-of-Force Reports, Judge Says
A 223-page opinion from a US District Judge has exposed widespread abuse of power by Immigration and Customs Enforcement (ICE) agents in Chicago. The ruling criticized the agency's actions during "Operation Midway Blitz," which resulted in over 3,300 people being arrested and more than 600 held in ICE custody.
The reports were meant to document violent conflicts with protesters and citizens, but Judge Sara Ellis deemed them unreliable due to inconsistencies between body-worn camera footage and the written accounts. In a shocking twist, she revealed that at least one agent used ChatGPT, the AI-powered language model, to compile a narrative for a report. The officer submitted ChatGPT's output as the final product despite having provided the tool with extremely limited information.
This egregious misuse of technology undermines the credibility of ICE agents and may explain the inaccuracies in their reports when compared to body-worn camera footage. "To the extent that agents use ChatGPT to create their use of force reports," Judge Ellis wrote, "this further undermines their credibility."
The Department of Homeland Security (DHS) has not publicly disclosed a clear policy on using generative AI tools to create reports. However, it does have a dedicated page discussing AI adoption within the agency. DHS had deployed its own chatbot to aid agents in completing daily tasks after conducting test runs with commercially available chatbots, including ChatGPT.
However, there is no indication that the agency's internal tool was used by the officer filling out the report. The footage suggests that an individual used ChatGPT directly and uploaded the information to complete the report. This raises serious concerns about AI use in law enforcement, as one expert described it as "the worst-case scenario."