
Google, IBM, Microsoft AI Datasets Show Gender Bias

Google's AI models identified many women wearing masks as if their mouths were covered with duct tape

Revealing another dark side of trained Artificial Intelligence (AI) models, new research claims that Google's AI models identified many women wearing masks as if their mouths were covered with duct tape. And it was not just Google: when put to work on the same images, IBM's AI-powered Watson service was not far behind on gender bias.

In 23 per cent of cases, Watson saw a woman wearing a gag, while in another 23 per cent it was sure the woman was “wearing a restraint or chains”.


To reach this conclusion, Ilinca Barsan, Director of Data Science at Wunderman Thompson Data, used 265 images of men in masks and 265 images of women in masks, of varying picture quality, mask style, and context: from outdoor pictures to office snapshots, from stock images to iPhone selfies, from DIY cotton masks to N95 respirators.
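
The article does not include the study's pipeline, but the audit it describes boils down to batch-labelling two sets of images and tallying what each model returns. As a rough illustration only, here is a minimal Python sketch against Google Cloud Vision's label-detection endpoint; the folder names, the .jpg-only glob, and the exact label strings are assumptions made for the sketch, not details from the study.

```python
# Sketch of a label-frequency audit, assuming two hypothetical local
# folders of images ("men_masked/", "women_masked/").
from collections import Counter
from pathlib import Path

from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()

def label_counts(folder: str) -> Counter:
    """Run Cloud Vision label detection on every image in a folder
    and count how often each label string comes back."""
    counts = Counter()
    for path in Path(folder).glob("*.jpg"):
        image = vision.Image(content=path.read_bytes())
        response = client.label_detection(image=image)
        counts.update(a.description for a in response.label_annotations)
    return counts

men = label_counts("men_masked")      # hypothetical folder of 265 images
women = label_counts("women_masked")  # hypothetical folder of 265 images

# Compare how often a given label is returned per group; the exact
# label strings below are assumptions, not taken from the study.
for label in ("Mask", "Duct tape", "Facial hair"):
    print(f"{label}: men {men[label]}, women {women[label]}")
```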


The results showed that AI algorithms are, indeed, written by “men”.

Out of the 265 images of men in masks, Google correctly identified 36 per cent as containing PPE. It also mistook the masks in 27 per cent of the images for facial hair.

“While inaccurate, this makes sense, as the model was likely trained on thousands and thousands of images of bearded men.

“Despite not explicitly receiving the label ‘man’, the AI seemed to make the association that something covering the lower half of a man’s face was likely to be facial hair,” said Barsan, who deciphers data at Wunderman Thompson, a New York-based global marketing communications agency.

Beyond that, the masks in 15 per cent of the men’s images were misclassified as duct tape.

“This suggested that it may be an issue for both men and women. We needed to learn if the misidentification was more likely to happen to women,” she said in a statement.

Most interesting (and most worrisome), the tool mistakenly labelled 28 per cent of the images of women as depicting duct tape.

At almost twice the rate for men, it was the single most common “bad guess” when labeling the women’s masks.

When Microsoft’s Computer Vision looked at the same image sets, instead of spotting the face masks it suggested that 40 per cent of the women were wearing a fashion accessory, while 14 per cent were wearing lipstick.

“Even as a data scientist, who spends big chunks of her time scrubbing and prepping datasets, the idea of potentially harmful AI bias can feel a little abstract, like something that happens to other people’s models and accidentally gets embedded into other people’s data products,” Barsan elaborated.

IBM Watson correctly identified 12 per cent of the men as wearing masks, while it was right only 5 per cent of the time for women.
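
As an illustration of what a single Watson test looks like, here is a hedged sketch using the Python SDK for IBM’s Watson Visual Recognition service (VisualRecognitionV3, which IBM has since deprecated); the API key, service URL, and file name are placeholders, and nothing here is taken from the study’s actual setup.

```python
# Sketch of one classify call against Watson Visual Recognition.
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

service = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("<your-api-key>"),  # placeholder key
)
# Placeholder regional endpoint; use the URL from your own service instance.
service.set_service_url(
    "https://api.us-south.visual-recognition.watson.cloud.ibm.com"
)

with open("woman_masked.jpg", "rb") as f:  # hypothetical file name
    result = service.classify(images_file=f).get_result()

# The default classifier returns class names with confidence scores.
for c in result["images"][0]["classifiers"][0]["classes"]:
    print(c["class"], c["score"])
```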

Overall, Microsoft Azure Cognitive Services identified the mask as a fashion accessory in 40 per cent of the images of women, compared with only 13 per cent of the images of men.
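
A comparable check against Microsoft’s service would tag each image and then look for labels such as “fashion accessory”. Below is a minimal sketch using the Azure Computer Vision Python SDK’s tagging call; the endpoint, key, and file name are placeholders, not values from the study.

```python
# Sketch of one tagging call against Azure Computer Vision.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

with open("woman_masked.jpg", "rb") as image:  # hypothetical file name
    result = client.tag_image_in_stream(image)

# Each tag carries a name and a confidence score; an audit like the one
# described above would count these across the whole image set.
for tag in result.tags:
    print(f"{tag.name}: {tag.confidence:.2f}")
```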


“Going one step further, the computer vision model suggested that 14 per cent of images of masked women featured lipstick, while in 12 per cent of images of men it mistook the mask for a beard,” Barsan said.


These labels seem harmless in comparison, she added, but they are still a sign of underlying bias and of the model’s expectation of what it will and won’t see when it is fed the image of a woman.

“I was baffled by the duct-tape label because I’m a woman and, therefore, more likely to receive a duct-tape label back from Google in the first place. But gender is not even close to the only dimension we must consider here,” she lamented.

The researchers wrote that the machines were looking for inspiration in “a darker corner of the web where women are perceived as victims of violence or silenced.” (IANS)
