Microsoft doesn't want AI recognizing your emotions anymore - mostly

By Cesar Cadenas, published 23 June 2022

Concerned over potential misuse and privacy issues

(Image credit: ClearCutLtd/Pixabay)

Microsoft is updating its Responsible AI Standard and has revealed that it's retiring Azure Face's emotional and facial recognition abilities (for the most part).

The Responsible AI Standard is Microsoft's internal ruleset for building AI systems. The company wants AI to be a positive force in the world and never to be misused by bad actors. The standard had never been shared with the public before, but with this new change, Microsoft decided now was the time.

Emotional and facial recognition software has been controversial, to say the least, and multiple organizations have called for the technology to be banned. Fight for the Future, for example, wrote an open letter back in May asking Zoom to stop developing its own emotion-tracking software, calling it "invasive" and "a violation of privacy and human rights."

Policy change

As laid out in the update, Microsoft will rework its Azure Face service to meet the requirements of the new Responsible AI Standard. First, the company is removing public access to the AI's emotion-scanning capability. Second, Azure Face will no longer be able to identify a person's facial characteristics, including "gender, age, [a] smile, facial hair, hair, and makeup."

The reason for the retirement is that the global science community still doesn't have a clear "consensus on the definition of 'emotions'". Natasha Crampton, Chief Responsible AI Officer at Microsoft, said that experts inside and outside the company have voiced their concerns, pointing to "the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns…"
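To make the change concrete, the snippet below is a minimal sketch of the kind of face-attribute request being withdrawn from public access. It assumes the Azure Face Python SDK (azure-cognitiveservices-vision-face); the endpoint, key, and image URL are placeholders rather than anything from Microsoft's announcement, and the attribute list simply mirrors the characteristics quoted above.

```python
# Minimal sketch (not Microsoft's example) of an Azure Face attribute request
# of the kind being retired from public access. Endpoint, key, and image URL
# below are placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
FACE_KEY = "<your-key>"
IMAGE_URL = "https://example.com/photo.jpg"

client = FaceClient(FACE_ENDPOINT, CognitiveServicesCredentials(FACE_KEY))

# Ask for emotion plus the other characteristics named in the announcement:
# gender, age, smile, facial hair, hair, and makeup.
faces = client.face.detect_with_url(
    url=IMAGE_URL,
    return_face_attributes=[
        FaceAttributeType.emotion,
        FaceAttributeType.gender,
        FaceAttributeType.age,
        FaceAttributeType.smile,
        FaceAttributeType.facial_hair,
        FaceAttributeType.hair,
        FaceAttributeType.makeup,
    ],
)

for face in faces:
    # Each detected face carries per-emotion confidence scores
    # (anger, happiness, sadness, and so on).
    print(face.face_attributes.emotion.as_dict())
```

Under the new policy, a request like this from a customer outside Microsoft's approved scenarios would no longer return the emotion attribute or the other characteristics listed.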
In addition to Azure Face, Microsoft's Custom Neural Voice will see similar restrictions. Custom Neural Voice is a text-to-speech service that is shockingly lifelike. The service will now be limited to a select few "managed customers and partners," meaning people who work directly with Microsoft's account teams. The company states that while the technology has great potential, it could also be used for impersonation.

To keep access to Neural Voice, all existing customers must submit an intake form and be approved by Microsoft by June 30, 2023. Customers who aren't approved will lose access to the service.

Still in the works

Despite everything that's been said, Microsoft isn't totally abandoning its facial recognition tech; the announcement only pertains to public access. Sarah Bird, Principal Group Project Manager at Azure AI, wrote about responsible facial recognition, stating that "Microsoft recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios." One of these scenarios, according to a representative, is Seeing AI, an iOS app that helps the visually impaired identify the people and objects around them.

It's good to see another tech giant recognizing the problems with facial recognition and its potential for abuse. IBM did something similar back in 2020, although its approach was more absolute: the company announced it was abandoning work on facial recognition because it feared the technology could be misused for mass surveillance. Seeing two titans of the industry step away from this tech is a win for critics of facial recognition.

If you're interested in learning more about AI, TechRadar recently published a piece on what it can do for cybersecurity.

Cesar Cadenas is a contributor who has been writing about the tech industry for several years, specializing in consumer electronics, entertainment devices, Windows, and the gaming industry. He's also passionate about smartphones, GPUs, and cybersecurity.