Wednesday October 23, 2019

Artificial Intelligence Will Match Humans By 2062: Experts

Untangling the ethics of machine accountability will be the second fundamental shift in the world as we know it


In less than 50 years, Artificial Intelligence (AI) will match humans on traits like adaptability, creativity and emotional intelligence, an expert has predicted.

Speaking at the “Festival of Dangerous Ideas” at University of New South Wales in Sydney on Sunday, Professor Toby Walsh said AI will match human intelligence by 2062.

“Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW Sydney, has put a date on this looming reality.

“He considers 2062 the year that artificial intelligence will match human intelligence, although a fundamental shift has already occurred in the world as we know it,” the university said in a statement.

Walsh argued that we are already experiencing risks of AI that were once thought to lie far in the future.

“Even without machines that are very smart, I’m starting to get a little bit nervous about where it’s going and the important choices we should be making,” said Walsh, who has written the book “2062: The World that AI Made”.

The key challenge, according to him, will be to avoid the apocalyptic rhetoric of AI and to determine how to move forward in the new age of information.

Privacy concerns about the collection of personal data are nothing new.


Citing the Cambridge Analytica scandal, Walsh argues that we should be more sceptical about how data is misused by tech companies.

“A lot of the debate has focused on how personal information was stolen from people, and we should be rightly outraged by that,” Walsh told the audience.

“Many of us have smartwatches that are monitoring our vital signs: our blood pressure, our heartbeat. And if you look at the terms of service, you don’t own that data,” Walsh explained.

“You can lie about your digital preferences, but you can’t lie about your heartbeat,” he noted.


Untangling the ethics of machine accountability will be the second fundamental shift in the world as we know it.

“Fully autonomous machines will radically change the nature of warfare and will be the third revolution in warfare,” Walsh said.

Walsh believes the central issue is creating machines that are aligned with human values, a challenge already evident on existing AI-driven platforms. (IANS)


Social Robots Can Now be Conflict Mediators: Study

The study also found that the teams responded socially to the virtual agent while planning their assigned mission (nodding, smiling and acknowledging the virtual agent's input by thanking it), but as the exercise progressed, their engagement with the virtual agent decreased


We may listen to facts from Siri or Alexa, or directions from Google Maps, but would we let a virtual agent enabled by artificial intelligence help mediate conflict among team members? A new study says they might help.

The study was presented at the 28th IEEE International Conference on Robot & Human Interactive Communication in the national capital, New Delhi, on Tuesday.

“Our results show that virtual agents and potentially social robots might be a good conflict mediator in all kinds of teams. It will be very interesting to find out the interventions and social responses to ultimately seamlessly integrate virtual agents in human teams to make them perform better,” said study lead author Kerstin Haring, Assistant Professor at the University of Denver.

Researchers from the University of Southern California (USC) and the University of Denver created a simulation in which a three-person team was supported by an on-screen virtual agent avatar in a mission designed to ensure failure and elicit conflict.

The study was designed to look at virtual agents as potential mediators to improve team collaboration during conflict mediation.

“We’re beginning to see the first instances of artificial intelligence operating as a mediator between humans, but it’s a question of: ‘Do people want that?’”

While some of the researchers had previously found that one-on-one human interactions with a virtual agent therapist yielded more confessions, in this study, team members were less likely to engage with a male virtual agent named ‘Chris’ when conflict arose.

Participating team members did not physically mistreat the device; rather, they were less engaged and less likely to listen to the virtual agent’s input once failure set in among the team.


The study was conducted in a military academy environment in which 27 scenarios were engineered to test how the team that included a virtual agent would react to failure and the ensuing conflict.

The virtual agent was not ignored by any means.

The study also found that the teams responded socially to the virtual agent while planning their assigned mission (nodding, smiling and acknowledging the virtual agent’s input by thanking it), but as the exercise progressed, their engagement with the virtual agent decreased. (IANS)