IBM Abandons Facial Recognition Products, Condemns Racially Biased Surveillance



IBM announced this week that it would stop selling its facial recognition technology to customers, including law enforcement agencies. The move puts pressure on other technology companies, such as Amazon and Microsoft, to do the same.

Richard Drew / AP



IBM will no longer provide facial recognition technology to police departments for mass surveillance and racial profiling, IBM chief executive Arvind Krishna wrote in a letter to Congress.

Krishna wrote that such technologies could be used by the police to violate “fundamental human rights and freedoms” and that this would be inconsistent with the company’s values.

“We believe it is now time to begin a national dialogue on whether and how facial recognition technology should be used by domestic law enforcement agencies,” Krishna said.

The nationwide demonstrations following the police killing of George Floyd have already led to changes in police departments across the country, from use-of-force policies to the handling of police misconduct and police union contracts.

The reckoning over the country’s relationship with law enforcement comes as researchers and technologists continue to warn about facial recognition software that relies on artificial intelligence, especially since some of these data-driven systems have been shown to be racially biased. For example, the MIT Media Lab found that the technology is often less accurate at identifying the gender of faces with darker skin tones, which can lead to misidentifications.

“This is a welcome recognition that facial recognition technologies, particularly those used by police, have been used to undermine human rights and to harm Black people specifically, as well as Indigenous people and other people of color,” said Joy Buolamwini, who led the MIT study and is the founder of the Algorithmic Justice League.

Nate Freed Wessler, an attorney with the ACLU’s Speech, Privacy, and Technology Project, said that while he was encouraged by the news from IBM, other large technology companies remain committed to the software.

“It is good that IBM has taken this step, but it cannot be the only company,” Freed Wessler told NPR. “Amazon, Microsoft and other companies are trying to make a lot of money by selling these dangerous, dubious tools to law enforcement agencies. That should stop immediately.”

At IBM, Krishna, who became chief executive in April, noted the technology’s risk of producing discriminatory results in his announcement that the company would drop its general-purpose facial recognition software.

“Artificial intelligence is a powerful tool that can help law enforcement agencies protect citizens,” he wrote to congressional Democrats, who introduced a police reform bill on Monday that would prohibit federal law enforcement agencies from using facial recognition technology. “However, vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

IBM had tested facial recognition software with the New York City Police Department, but adoption by other law enforcement agencies appears to have been limited. Analysts who track IBM have noted that the company’s facial recognition products did not bring in much revenue, suggesting that the decision may also have made economic sense.

Surveillance technology critics who have called on Microsoft and Amazon to make similar commitments say that relying on data-mining tools to make public safety decisions could put citizens at risk.

“Facial recognition systems have much higher error rates for people of color, women and younger people, which can translate into great harm when the police rely on them,” said Freed Wessler of the ACLU. “Whether to use force, whether to arrest someone, whether to stop someone on the street.”

Amazon is a major provider of facial recognition software. Its Rekognition product has been used by local law enforcement agencies in Florida and Oregon.

In 2018, the ACLU found that the software incorrectly matched 28 members of Congress with mugshots of people who had been arrested for crimes.

And Buolamwini found that when photos of several prominent Black women, including Oprah Winfrey and Michelle Obama, were scanned using Amazon’s technology, the system incorrectly labeled them as men.

Amazon has publicly defended its facial recognition software, saying that studies questioning its accuracy misrepresent how the technology works.

“We know that facial recognition technology poses risks when used irresponsibly,” wrote Matt Wood, general manager for artificial intelligence at Amazon Web Services. “However, we remain optimistic about the good this technology will bring to society, and we are already seeing meaningful evidence of facial recognition helping to prevent child trafficking, reunite missing children with their parents, improve payment authentication and reduce credit card fraud.”

Amazon did not return a request for comment on IBM’s decision to move away from facial recognition software.

Microsoft, which offers facial recognition technology through its Azure cloud computing service, also did not return a request for comment.

Big Tech’s use of facial recognition has also drawn controversy and legal action outside of law enforcement.

In January, Facebook agreed to pay half a billion dollars to settle a class-action lawsuit alleging that it violated Illinois’s biometric privacy law by using face-matching software to guess who appeared in photos posted on the social network.

