Microsoft restricts access to controversial AI facial recognition technology – SiliconANGLE News

Science & Technology

UPDATED 19:52 EDT / JUNE 21 2022
by Mike Wheatley
Microsoft Corp. says it will phase out access to a number of its artificial intelligence-powered facial recognition tools, including a service that’s designed to identify the emotions people exhibit based on videos and images.
The company announced the decision today as it published a 27-page “Responsible AI Standard” that explains its goals with regard to equitable and trustworthy AI. To meet these standards, Microsoft has chosen to limit access to the facial recognition tools available through its Azure Face API, Computer Vision and Video Indexer services.
New users will no longer have access to those features, while existing customers will have to stop using them by the end of the year, Microsoft said.
Facial recognition technology has become a major concern for civil rights and privacy groups. Previous studies have demonstrated that the technology is far from perfect, misidentifying female subjects and those with darker skin at disproportionate rates. Such errors can have serious consequences when AI is used to identify criminal suspects and in other surveillance situations.
In particular, the use of AI tools that can detect a person’s emotions has become especially controversial. Earlier this year, when Zoom Video Communications Inc. announced it was considering adding “emotion AI” features, the privacy group Fight for the Future responded by launching a campaign urging it not to do so, over concerns the tech could be misused.
The controversy around facial recognition has been taken seriously by tech firms, with both Amazon Web Services Inc. and Facebook’s parent company Meta Platforms Inc. scaling back their use of such tools.
In a blog post, Microsoft’s chief responsible AI officer Natasha Crampton said the company has recognized that for AI systems to be trustworthy, they must be appropriate solutions for the problems they’re designed to solve. These facial analysis capabilities have been deemed inappropriate, and Microsoft will retire Azure services that infer “emotional states and identity attributes such as gender, age, smiles, facial hair, hair and makeup,” Crampton said.
“The potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems,” she continued. “[Our laws] have not caught up with AI’s unique risks or society’s needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act.”
Analysts were divided on whether Microsoft’s decision is a good one. Charles King of Pund-IT Inc. told SiliconANGLE that in addition to the controversy, AI profiling tools often don’t work as well as intended and seldom deliver the results claimed by their creators. “It’s also important to note that with people of color, including refugees seeking better lives, coming under attack in so many places, the possibility of profiling tools being misused is very high,” King added. “So I believe Microsoft’s decision to restrict their use makes eminent sense.”
However, Rob Enderle of the Enderle Group said it was disappointing to see Microsoft back away from facial recognition, given that such tools have come a long way from the early days when many mistakes were made. He said the negative publicity around facial recognition has forced big companies to stay away from the space.
“[AI-based facial recognition] is too valuable for catching criminals, terrorists and spies, so it’s not like government agencies will stop using it,” Enderle said. “However, with Microsoft stepping back it means they’ll end up using tools from specialist defense companies or foreign providers that likely won’t work as well and lack the same kinds of controls. The genie is out of the bottle on this one; efforts to kill facial recognition will only make it less likely that society benefits from it.”
Microsoft said that its responsible AI standards don’t stop at facial recognition. It will also apply them to Azure AI’s Custom Neural Voice, a text-to-speech service that generates lifelike synthetic voices. The company also said it has taken steps to improve its speech-to-text software, which powers transcription tools, in light of a March 2020 study that found higher error rates when it was used by African American and Black communities.