Media Coverage of Facebook’s Artificial Intelligence (AI) Malfunction Irresponsible: Indian-origin Researcher

It was widely reported that the social media giant had to pull the plug on the AI system its researchers were working on "because things got out of hand"

Facebook’s Artificial Intelligence Shut Down After It Creates Its Own Language. Wikimedia
  • An Indian-origin researcher at Facebook’s Artificial Intelligence Research (FAIR) has said the media coverage was “clickbaity”
  • Dhruv Batra blamed the media for being “irresponsible” in its coverage of Facebook shutting down one of its AI systems after chatbots started communicating in their own language
  • He noted that while the idea of AI agents inventing their own language may sound alarming or unexpected to people outside the field, it is a well-established sub-field of AI

San Francisco, August 2, 2017: Blaming the media for being “irresponsible” in its coverage of Facebook shutting down one of its AI systems after chatbots started communicating in their own language, an Indian-origin researcher at Facebook’s Artificial Intelligence Research (FAIR) has said such coverage was “clickbaity”.

Dhruv Batra, who works as a research scientist at FAIR, wrote on his Facebook page that while the idea of AI agents inventing their own language may sound alarming or unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.

“Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximise reward. Analysing the reward function and changing the parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI’,” Batra said in the post late on Tuesday.
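
Batra’s point can be illustrated with a toy sketch. Everything below is hypothetical and not from the FAIR experiment: a greedy agent exploits a loophole in a mis-specified reward, and the remedy is to change the reward’s parameters, not to “shut down” the agent.

```python
# Hypothetical sketch: a greedy agent maximising a scalar reward will exploit
# any loophole in that reward. The messages and numbers are illustrative.

def best_action(actions, reward_fn):
    """Pick the action with the highest reward under reward_fn."""
    return max(actions, key=reward_fn)

# Task: negotiate using short, human-readable messages.
# Each action is (message, task_score, readability).
actions = [
    ("i want two balls", 0.8, 1.0),            # plain English
    ("balls balls balls to me me", 1.0, 0.2),  # degenerate but high-scoring
]

# Mis-specified reward: only the task score counts, so the
# degenerate message wins.
naive = best_action(actions, lambda a: a[1])

# Adjusted reward: penalise unreadable messages by weighting readability.
# This is "changing the parameters of an experiment", not unplugging the AI.
adjusted = best_action(actions, lambda a: a[1] + 0.5 * a[2])

print(naive[0])     # the degenerate message
print(adjusted[0])  # the readable message
```

Under the naive reward the unintuitive, garbled message is optimal; once readability is rewarded, plain English wins again.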

“If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine,” he added.

It was widely reported that the social media giant had to pull the plug on the AI system its researchers were working on “because things got out of hand”.

“The AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created,” media reports said.


Initially, the AI agents used English to communicate with each other, but they later created a new language that only the AI systems could understand, defeating the purpose of the research.

This reportedly led Facebook researchers to shut down the AI systems and then force them to speak to each other only in English.

“I do not want to link to specific articles or provide specific responses for fear of continuing this cycle of quotes taken out of context, but I find such coverage clickbaity and irresponsible,” Batra posted.

In June, FAIR researchers working on improving chatbots found that the “dialogue agents” were creating their own language.

Soon, the bots began to deviate from the scripted norms and started communicating in an entirely new language which they created without human input, media reports said. (IANS)


Chatbots Are More Successful Than Humans for Certain Interactions

In the study published in Nature Machine Intelligence, the team asked almost 700 participants in an online cooperation game to interact with a human or an artificial partner

Previous research has shown that humans prefer not to cooperate with intelligent chatbots. Pixabay

As we embrace Alexa or Siri in our lives, researchers report that chatbots are more successful than humans in certain human-machine interactions, but only if they are allowed to hide their non-human identity.

The artificial voices of Siri, Alexa or Google, and their often awkward responses, leave no room for doubt that we are not talking to a real person.

An international team, including Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin, sought to find out whether cooperation between humans and machines is different if the machine purports to be human.

In the study published in Nature Machine Intelligence, the team asked almost 700 participants in an online cooperation game to interact with a human or an artificial partner.

In the game, known as the Prisoner’s Dilemma, players can either act egotistically to exploit the other player, or act cooperatively with advantages for both sides.
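
For readers unfamiliar with the game, a one-shot Prisoner’s Dilemma can be sketched with an illustrative payoff matrix. The study’s actual payoffs are not given in the article; the values below simply follow the standard ordering in which exploiting beats mutual cooperation, which beats mutual defection, which beats being exploited.

```python
# A minimal one-shot Prisoner's Dilemma with illustrative payoffs
# (not the values used in the Nature Machine Intelligence study).

PAYOFFS = {
    # (player's move, partner's move) -> (player's payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # exploited
    ("defect",    "cooperate"): (5, 0),  # exploiting
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def play(player, partner):
    """Return the payoffs for one round given both moves."""
    return PAYOFFS[(player, partner)]

# Mutual cooperation is better for both sides than mutual defection,
# yet defecting against a cooperator is individually tempting.
print(play("cooperate", "cooperate"))  # (3, 3)
print(play("defect", "defect"))        # (1, 1)
```

The tension between the cooperative outcome and the egotistical temptation is what makes the game a standard test of whether players trust their partner.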

The findings showed that bots impersonating humans were more successful in convincing their gaming partners to cooperate.


As soon as they divulged their true identity, however, cooperation rates decreased.

“Translating this to a more realistic scenario could mean that help desks run by bots, for example, may be able to provide assistance more rapidly and efficiently if they are allowed to masquerade as humans,” the researchers wrote.

Society will have to negotiate which cases of human-machine interaction require transparency and which prioritise efficiency.


Previous research has shown that humans prefer not to cooperate with intelligent bots. (IANS)