Microsoft Corp. called for new legislation to govern facial recognition software powered by artificial intelligence, advocating human review and oversight of the technology in crucial cases.
“This includes where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer’s personal freedom or privacy may be impinged,” Microsoft’s president and chief legal officer, Brad Smith, wrote in a blog post published Thursday in conjunction with a speech on the topic at the Brookings Institution think tank.
Sellers of the technology must “recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers,” he said. Microsoft itself sells such software.
Smith also wants laws to require sellers of the products to clearly explain what they do and open up their services to testing by outside parties for accuracy and bias. Earlier on Thursday, advocacy group AI Now called for greater regulation and regular audits of AI tools used by governments.
There’s rising concern over the use of facial recognition software by law enforcement, border security, the military and others. Research has shown that some of the most popular products make mistakes and perform worse on people with darker skin. Microsoft, Amazon.com Inc. and Alphabet Inc.’s Google have also faced protests from employees and advocacy groups over the idea of selling AI software to government agencies or the police. In July, Smith called on lawmakers to regulate facial recognition software, but without specifying the laws he’d like to see.
Smith doesn’t seem to share protesters’ concerns about selling the technology to U.S. government agencies. On Saturday, he said at a defense conference that Microsoft would “provide the U.S. military with access to the best technology … all the technology we create. Full stop.”
But he did express other concerns about what the tech could do. “The facial recognition genie, so to speak, is just emerging from the bottle,” Smith said in his post Thursday. “Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.”
Microsoft thinks current facial recognition technology increases the risk of biased outcomes, can lead to new intrusions into privacy and can encroach on democratic freedoms if used for mass government surveillance, he added.
He wants initial legislation, even if it doesn’t answer all questions, rather than waiting for more comprehensive legal approaches years from now. Smith said in an interview that he has spoken to state and federal legislators and “there is strong interest.” He hopes laws can be in place sometime next year.
Microsoft is also asking for laws that require companies that use facial recognition technology in public areas to place “conspicuous notice” to make customers and passersby aware, Smith said.
In calling for third-party audits of AI products, Smith took a swipe at rivals, without naming them. “When important advocacy organizations have tried to perform tests, they’ve almost immediately been met by rejections and criticism by some providers who claim that the testing is deficient,” he said.
In July, Amazon responded to an ACLU critique of its Rekognition service by saying the advocacy group used incorrect settings for the test.
Microsoft plans to clearly document what its AI software can and cannot do, while explicitly prohibiting use of the products for unlawful discrimination. The company also will not provide the technology to customers that it believes could undermine democracies, Smith said.
“We must ensure that the year 2024 doesn’t look like a page from the novel ‘1984,’” Smith wrote.
Times staff writer Samantha Masunaga contributed to this report.