Ethical standards for AI: Microsoft cuts facial recognition features

Certain functions will only be available to customers after Microsoft has approved them; others will be blocked for almost all users in the future. For existing customers, the new rules apply from June 30, 2023.

Microsoft has announced that it will restrict or remove some features of its artificial intelligence services. The company wants to ensure that its facial recognition technologies meet its own ethical guidelines for artificial intelligence. As of this week, new users can no longer access the features in question without approval, and existing users will lose access within a year.

New customers must now request access to the facial recognition capabilities of the Azure Face application programming interface (API), Computer Vision, and Video Indexer. Existing customers have one year to apply so that they do not lose access to the facial recognition features. The procedure is intended to provide an additional check and to bring the facial recognition services in line with the company’s Responsible AI Standard. From June 30, 2023, only applications vetted by Microsoft will be able to use the facial recognition functions.

Some aspects of facial recognition are exempt from the new rules. These include the detection of blur, exposure, glasses, head pose, and noise, as well as facial landmark detection.

Exceptions for people with disabilities

Features that can infer a person’s emotional state or attributes such as gender, age, smile, facial hair, and makeup, however, are being retired. Microsoft questions their use on privacy grounds, and the functions could also be abused, for example to discriminate against users. They are therefore no longer available to new customers with immediate effect; for existing customers, the rule applies from June 30, 2023. The only exceptions are integrations with services geared towards people with disabilities, such as the company’s Seeing AI app.

In addition, Microsoft points to tools and resources such as the open-source Fairlearn package and its in-house Fairness Dashboard for learning more about the fair use of Azure Cognitive Services. A new programming interface called Recognition Quality is also designed to identify photo quality problems that could hamper AI-based recognition, problems that affect certain demographic groups more than others.
