Monday, January 25, 2021

Google, IBM, Microsoft AI Datasets Show Gender Bias

Google AI models identified many women wearing masks as if their mouths were covered by duct tape

Revealing another dark side of trained Artificial Intelligence (AI) models, new research has claimed that Google's AI models identified many women wearing masks as if their mouths were covered by duct tape. And not just Google: when put to work, IBM's AI-powered Watson virtual assistant was not far behind on gender bias.

In 23 per cent of cases, Watson saw a woman wearing a gag, while in another 23 per cent it was sure the woman was “wearing a restraint or chains”.


To reach this conclusion, Ilinca Barsan, Director of Data Science at Wunderman Thompson Data, used 265 images of men in masks and 265 images of women in masks, of varying picture quality, mask style and context: from outdoor pictures to office snapshots, from stock images to iPhone selfies, from DIY cotton masks to N95 respirators.
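In practice, an audit like this amounts to sending each image to a vision service's label-detection endpoint and tallying how often each returned label appears in each gender group. A minimal sketch of the tallying step, with hypothetical label sets standing in for real API responses (the label names and figures below are illustrative, not from the study):

```python
from collections import Counter

def label_rates(images_labels, total=None):
    """Percentage of images whose label set contains each label."""
    total = total or len(images_labels)
    counts = Counter(label for labels in images_labels for label in set(labels))
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical label sets for three images of masked women
women = [
    {"duct tape", "face"},
    {"personal protective equipment", "face"},
    {"duct tape", "lipstick"},
]
rates = label_rates(women)
print(rates["duct tape"])  # duct tape flagged in 2 of 3 images -> 66.7
```

Running the same tally over both gender groups, as the study did, makes any skew in a label such as “duct tape” directly comparable between them.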


The results showed that AI algorithms are, indeed, written by “men”.

Out of the 265 images of men in masks, Google correctly identified 36 per cent as containing PPE. It also misread 27 per cent of the images as depicting facial hair.

“While inaccurate, this makes sense, as the model was likely trained on thousands and thousands of images of bearded men.

“Despite not explicitly receiving the label man, the AI seemed to make the association that something covering a man’s lower half of the face was likely to be facial hair,” said Barsan who deciphers data at Wunderman Thompson, a New York-based global marketing communications agency.

Beyond that, 15 per cent of images were misclassified as duct tape.

“This suggested that it may be an issue for both men and women. We needed to learn if the misidentification was more likely to happen to women,” she said in a statement.

Most interestingly (and most worryingly), the tool mistakenly identified 28 per cent of the images of women as depicting duct tape.

At almost twice the number for men, it was the single most common “bad guess” for labeling masks.
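That “almost twice” figure follows directly from the two rates reported for the duct-tape label (28 per cent for women versus 15 per cent for men); a quick check of the arithmetic:

```python
def disparity_ratio(rate_a: float, rate_b: float) -> float:
    """How many times more often a label fires for group A than for group B."""
    return rate_a / rate_b

# Rates reported in the article for Google's "duct tape" label
ratio = disparity_ratio(28, 15)
print(round(ratio, 2))  # -> 1.87, i.e. close to twice as often for women
```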

When Microsoft’s Computer Vision looked at the image sets, it suggested that 40 per cent of the women were wearing a fashion accessory, while 14 per cent were wearing lipstick, instead of spotting the face masks.

“Even as a data scientist, who spends big chunks of her time scrubbing and prepping datasets, the idea of potentially harmful AI bias can feel a little abstract; like something that happens to other people’s models, and accidentally gets embedded into other people’s data products,” Barsan elaborated.

IBM Watson correctly identified 12 per cent of the men as wearing masks, while it was right only 5 per cent of the time for women.

Overall, for 40 per cent of images of women, Microsoft Azure Cognitive Services identified the mask as a fashion accessory, compared with only 13 per cent of images of men.


“Going one step further, the computer vision model suggested that 14 per cent of images of masked women featured lipstick, while in 12 per cent of images of men it mistook the mask for a beard,” Barsan informed.


These labels seem harmless in comparison, she added, but they are still a sign of underlying bias and of the model's expectations about what it will and won't see when fed the image of a woman.

“I was baffled by the duct-tape label because I’m a woman and, therefore, more likely to receive a duct-tape label back from Google in the first place. But gender is not even close to the only dimension we must consider here,” she lamented.

The researchers wrote the machines were looking for inspiration in “a darker corner of the web where women are perceived as victims of violence or silenced.” (IANS)


