Using facial recognition technology to study people’s moods and map identifying characteristics is going a step too far for Microsoft. The company is concerned about the privacy and misuse of this technology. That is why the American hardware and software company will immediately stop using facial recognition technology that is used for these purposes.
Microsoft announced the change in a blog post.
Microsoft imposes stricter requirements on the use of facial recognition
Microsoft knows that facial recognition brings benefits for law enforcement and investigative authorities. At the same time, the company recognizes that there are also dangers and risks involved. To deal with this technology in a responsible manner, Microsoft has drawn up a number of rules in an internal document. It lists various security and precautionary measures so that the technology is used in a sensible and well-considered manner.
Microsoft regularly updates these guidelines. Employees recently reviewed the rules and tightened the requirements even further. For example, new customers must now request access to use facial recognition technology in Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to obtain this approval. From June 2023, they will no longer be allowed to use Microsoft’s facial recognition technology if the application is not approved.
“By introducing restricted access, we add an extra layer of protection to the use and deployment of facial recognition to ensure that use of these services complies with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefits,” the tech giant writes in its blog post.
Microsoft is concerned about privacy and discrimination
In addition to a stricter admission policy, Microsoft is also imposing a restriction on itself. Facial recognition technology used to infer people’s emotional state and identity attributes will be banned immediately. With this technology it is possible to estimate whether someone is happy, angry or sad. It can also map personal characteristics, such as gender, use of make-up, facial hair and the presence or absence of tattoos.
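To make concrete which capabilities the restriction covers: in the Azure Face REST API these were requested through the `returnFaceAttributes` query parameter of the detect operation. The sketch below only constructs such a request URL offline and does not call the service; the endpoint hostname is a hypothetical placeholder, and the attribute names reflect the publicly documented (and now-restricted) Face API attributes.

```python
from urllib.parse import urlencode

# Hypothetical resource endpoint; a real one is issued via the Azure portal
# and, under the new policy, requires an approved access request.
ENDPOINT = "https://example-face-resource.cognitiveservices.azure.com"

# These are the attribute names covered by Microsoft's restriction:
# emotion inference plus identity attributes like gender, facial hair
# and make-up.
params = urlencode({
    "returnFaceAttributes": "emotion,gender,facialHair,makeup",
    "detectionModel": "detection_01",
})

# Shape of a Face API detect request that asks for these attributes.
request_url = f"{ENDPOINT}/face/v1.0/detect?{params}"
print(request_url)
```

Sending such a request now requires prior approval for existing customers and is refused outright for new ones where these attributes are concerned.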
Microsoft believes that such applications currently go a step too far. “We have worked with internal and external researchers to understand the limitations and potential benefits of this technology and make the tradeoffs,” the company writes. It shows that experts are concerned about privacy.
They also note that the link between facial expression and emotional state cannot be reliably generalized across demographic groups. Finally, there is the fear that access to such privacy-sensitive data can be misused for, among other things, discrimination and excluding people from services.
New customers will immediately be unable to use facial recognition technology to map people’s moods and personal characteristics. Existing customers have until June 30, 2023 to stop using these features.
Intense discussion about the use of facial recognition technology
The use of facial recognition technology has been the subject of heated discussion in the US for some time. In June, IBM CEO Arvind Krishna announced he would stop researching, developing and offering products and services featuring facial recognition technology. He feared that the technology could be used for mass surveillance, ethnic profiling and promoting discrimination and racial inequality.
For the same reasons, Amazon and Microsoft also stopped selling facial recognition technology at the time. They are waiting for politicians in Washington to introduce legislation that provides more clarity about the application possibilities.
EDPB doesn’t like facial recognition in public spaces
In Europe, too, there is a heated discussion about the use of smart cameras with facial recognition technology. The European Data Protection Board (EDPB) considers this a step towards the establishment of a mass surveillance society. “A society in which you can never again walk down the street unobserved. In which you cannot escape the prying eyes of systems that are watching you and recognizing you, so that they can follow you all day long. You step out the door and know: I am constantly being watched. That is oppressive, and makes people feel less free to be themselves,” explained Aleid Wolfsen, deputy chairman of the EDPB.
To ensure that the use of facial recognition is necessary and proportionate, the European regulator launched new guidelines in May. If it is up to the EDPB, facial recognition in public spaces will be banned. The same goes for technology that categorizes people based on ethnicity, gender, sexuality or political preference. Building up databases of photos collected from public sources is also a no-go.
Finally, the regulator is calling for a ban on technology that recognizes emotions. According to the regulator, this technology has only drawbacks. “Think of employers who give employees a bad rating if they look ‘angry’ too often while sitting in front of their computers, employees who have to smile to gain access to the workplace, or police who, as a precautionary measure, stop people who look ‘angry’ or ‘dissatisfied’ according to the system while walking down the street,” says Wolfsen.