Williams’ wrongful arrest, first reported by The New York Times in August 2020, was based on a false match produced by the Detroit Police Department’s facial recognition system. Two more instances of false arrests have since been made public. Both of those men are also Black, and both have taken legal action to try to remedy the situation.
Now Williams is following their path and going further – not only suing the Detroit Police for his wrongful arrest, but also trying to get the technology banned.
On Tuesday, the ACLU and the University of Michigan Civil Rights Litigation Initiative filed a complaint on behalf of Williams, alleging that his arrest violated his Fourth Amendment rights and the Michigan Civil Rights Act.
The lawsuit seeks compensation, greater transparency about the use of facial recognition, and an order that the Detroit Police Department stop using facial recognition technology, directly or indirectly.
What the lawsuit says
The documents filed on Tuesday lay out the case. In March 2019, the DPD ran a grainy photo of a Black man in a red cap, taken from Shinola’s surveillance video, through its facial recognition system, made by a company called DataWorks Plus. The system returned a match with an old photo from Williams’ driver’s license. Investigators then included Williams’ license photo in a photo lineup, and a Shinola security guard identified Williams as the thief. Officers obtained a warrant, which requires multiple signatures from department leadership, and Williams was arrested.
The complaint argues that Williams’ false arrest was a direct result of the facial recognition system, and that “this wrongful arrest and jailing case illustrates the serious harm caused by the misuse of facial recognition technology and the use of this technology.”
The complaint contains four counts, three of which focus on the lack of probable cause for the arrest, while one focuses on racial disparities in facial recognition. “By employing technology which has been empirically proven to misidentify Black people at rates much higher than other groups of people,” it asserts, “the DPD has denied Mr. Williams the full and equal enjoyment of the services, privileges, and advantages of the Detroit Police Department because of his race or color.”
Facial recognition’s difficulty identifying people with darker skin is well documented. After the murder of George Floyd in Minneapolis in 2020, some cities and states announced bans and moratoria on police use of facial recognition. But many others, including Detroit, have continued to use it despite growing concerns.
“Relying on inferior images”
When MIT Technology Review spoke to Williams’ ACLU attorney, Phil Mayor, last year, he pointed out that issues of racism within U.S. law enforcement make the use of facial recognition even more worrying.
“This is not a situation where there is only one bad actor,” said Mayor. “This is a situation where we have a criminal justice system that is extremely quick to bring charges and extremely slow to protect the rights of people, especially when it comes to people of color.”
Eric Williams, a senior attorney with the Economic Equity Practice in Detroit, says cameras have many technological limitations, including being hard-coded with color ranges to recognize skin tone, and often simply cannot process darker skin.
“I think every Black person in the country has had the experience of being in a photo and the photo turns out lighter or darker,” says Williams, who is a member of the ACLU of Michigan’s lawyers committee but is not working on the Robert Williams case. “Lighting is one of the main factors in the quality of an image. So the fact that law enforcement relies, to some extent … on really poor images is problematic.”
There have been cases challenging biased algorithms and artificial intelligence technologies on the basis of race. Facebook, for example, underwent a massive civil rights audit after its targeted advertising algorithms were found to serve ads on the basis of race, gender, and religion. YouTube faced a class action lawsuit brought by Black creators who claimed that its AI systems profile users and censor or discriminate against content on the basis of race. YouTube has also been sued by LGBTQ+ creators who said its content moderation systems flagged the words “gay” and “lesbian.”
Some experts say it was only a matter of time before the use of biased technology by a major institution like the police faced legal challenges.