Microsoft urges lawmakers to regulate facial recognition technology


Microsoft Corp., which has come under fire for a U.S. government contract that was said to involve facial recognition software, said Friday it would more carefully consider contracts in that area and urged lawmakers to regulate the use of such artificial intelligence to prevent abuse.

The company, one of the key makers of software capable of recognizing individual faces, said it would take steps to make those systems less prone to bias; develop new public principles to govern the technology; and move more deliberately to sell its software and expertise in the area. While Microsoft said the technology industry bears responsibility for its products, it argued that government action is also needed.

“The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself,” Microsoft’s president and chief legal officer, Brad Smith, said Friday in a blog post. “And if there are concerns about how a technology will be deployed more broadly across society, the only way to regulate this broad use is for the government to do so. This in fact is what we believe is needed today — a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

Companies such as Microsoft, Alphabet Inc.’s Google and Inc. have been under fire from civil liberties groups and their own employees for selling AI software, particularly for facial recognition, to the U.S. government and local police. It’s somewhat unusual for tech companies to call for their own products to be more heavily regulated, but this year Smith and Harry Shum, Microsoft’s AI chief, wrote a treatise saying AI advances would require new laws.

“While we appreciate that some people today are calling for tech companies to make these decisions — and we recognize a clear need for our own exercise of responsibility — we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic,” Smith wrote.

Facial recognition systems often have serious shortcomings and are particularly poor at recognizing and differentiating among people with darker skin. A February paper by Microsoft researcher Timnit Gebru and Joy Buolamwini of the MIT Media Lab found error rates of as much as 35% for systems classifying darker-skinned women. Microsoft’s research arm has since worked to correct the issues the researchers identified in its facial recognition software.

As technology evolves rapidly, Microsoft has been advocating for new laws that directly address emerging areas such as cloud computing and AI. The Redmond, Wash., company, citing privacy concerns, fought the U.S. government over what Microsoft said was the government’s application of an outdated communications law to cloud computing, then backed compromise legislation that explicitly covered data privacy and searches in the era of the cloud.

The Microsoft view strikes a somewhat different tone from Amazon’s response to appeals for it to stop selling its facial recognition software to police departments and other government agencies. In a June blog post replying to concerns raised then, Amazon’s AI general manager, Matt Wood, wrote: “There have always been and will always be risks with new technology capabilities. Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation. [Amazon Web Services] takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future.”

Among the issues that government regulation should consider, Microsoft’s Smith wrote, is whether police use of facial recognition software needs human oversight and controls and whether retailers must post visible notice if they use such software in public areas. Other areas for consideration include whether companies should be required to obtain consent before collecting a person’s image and what legal rights apply to people who believe they’ve been misidentified by a facial recognition system.

The technology’s use by law enforcement without rules in place raises concerns about racial disparity and privacy, said Barry Friedman, a New York University law professor who runs the Policing Project, which works to set rules and guidelines for law enforcement.

The Supreme Court recently ruled that police generally need a warrant to gain long-term access to cellphone location data, and similar restrictions should apply to their use of facial recognition technology, he said.

“This technology is just rushing on us really fast,” Friedman said. “If we don’t get ahead of it, we won’t be able to put the toothpaste back in the tube.”

Last month, Microsoft briefly removed references on its website to a contract it had secured with U.S. Immigration and Customs Enforcement after an outcry over the software maker selling its wares to an agency involved in separating immigrant parents and children at the U.S.-Mexico border. The blog post announcing the contract was later restored.

Although Microsoft later said its contract with ICE was for putting older email and worker collaboration systems in the cloud, the blog post mentioned the possibility of ICE using Microsoft’s facial recognition tools. Hundreds of Microsoft employees signed a petition demanding Microsoft stop working with the agency.

Amazon has come under fire for selling its Rekognition AI software to local police departments, with the ACLU demanding that Amazon stop letting governments use the technology.

At Google, employees revolted over the company’s work on Project Maven, a Defense Department initiative to integrate AI tools with drone surveillance. Last month Chief Executive Sundar Pichai released a set of principles under which Google pledged not to use its powerful artificial intelligence for weapons, illegal surveillance and technologies that cause “overall harm.” The company said it would still work with the military.

Microsoft has created an AI ethics board to examine these issues and says it has turned down contracts to sell its software to certain customers, without providing details. The new AI board has urged a slower, more deliberate approach to selling the software, Smith said.