Saturday February 16, 2019

Ex-Google Chief: Elon Musk ‘exactly wrong’ on AI

Musk has long been a critic of AI and has called for strict regulation to curb the technology


Tesla and SpaceX Founder Elon Musk’s skepticism about Artificial Intelligence (AI) and its impact on human beings is “exactly wrong,” former Google CEO Eric Schmidt has said.

Musk thinks that AI is bad for humanity and may spark World War III.

“I think Elon is exactly wrong” about AI, Schmidt said during the “VivaTech” conference in Paris on Friday.

“Musk is concerned about the possible misuse of this technology and I am too but I am more convinced by the overwhelming benefit of AI,” tech website CNET quoted Schmidt as saying.

“AI will make people smarter and this will be a net gain,” said Schmidt who is currently a board member of Alphabet, Google’s parent company.

Earlier, during the same event, Facebook CEO Mark Zuckerberg – who has long been in a verbal spat with Musk over AI – expressed optimism about the possibilities of AI.

Representational image (AI). Pixabay

“I think that AI is going to unlock a huge amount of positive things, whether that’s helping to identify and cure diseases, to help cars drive more safely, to help keep our communities safe,” he was quoted as saying.

Musk recently warned that if AI is not regulated or controlled soon, it will become an "immortal dictator" from which humans will have no escape.

“At least when there’s an evil dictator, that human is going to die. But for an AI there would be no death. It would live forever, and then you’d have an immortal dictator, from which we could never escape,” he said in a new documentary titled “Do You Trust This Computer?”

Musk has long been a critic of AI and has called for strict regulation to curb the technology.

In a recent tweet, Musk said that people should be more concerned with AI than the risk posed by North Korea.

“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,” Musk tweeted.


Musk has also quit the board of OpenAI, a non-profit AI research company he co-founded that aims to promote and develop friendly AI that benefits humanity.

In a recent public spat with Zuckerberg, Musk said: “I’ve talked to Mark about this (AI). His understanding of the subject is limited”.

Zuckerberg replied: “I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.” (IANS)


Musk-founded AI Group Not to Release Software on ‘Fake News’ Fears

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies

Tesla CEO Elon Musk. (VOA)

Elon Musk-founded non-profit Artificial Intelligence (AI) research group OpenAI has decided not to reveal its new AI software in detail, fearing the AI-based model could be misused by bad actors to create realistic-looking fake news.

Dubbed "GPT2", the AI-based automated text generator can produce fake news articles and abusive posts after being fed just a few pieces of sample text.

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text and performs rudimentary reading comprehension, machine translation, question answering and summarization – all without task-specific training,” OpenAI said in a blog post late on Thursday.

Trained on a data set of eight million web pages, “GPT2” can adapt to the style and the content of the text you feed it.
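GPT2 itself is a large Transformer network trained on raw web text, but the core idea – a model that learns word-to-word patterns from unlabeled text and then continues any prompt in a similar style – can be illustrated with a toy sketch. The code below is a minimal bigram language model written for illustration only; all names in it are hypothetical and it is not OpenAI's code or anything close to GPT2's actual architecture.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word-to-next-word transitions from raw, unlabeled text.

    No task-specific labels are used: the 'training signal' is simply
    which word follows which, as in unsupervised language modeling.
    """
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue a prompt by repeatedly sampling a successor word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # no continuation seen in training
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the model reads text and the model writes text and "
          "the model continues the prompt in the same style")
model = train_bigrams(corpus)
print(generate(model, "the model", length=8))
```

Because every sampled word was seen following the previous one in training, the output mimics the corpus's style – the same mechanism, scaled up enormously and replaced with a neural network, is what lets GPT2 adapt to the style and content of its prompt.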

OpenAI said the AI model is so good and the risk of malicious use is so high that it is not releasing the full research to the public.

However, the non-profit has created a smaller model that lets researchers experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.

Elon Musk, CEO of SpaceX. Wikimedia Commons

“We can imagine the application of these models for malicious purposes, including the following: Generate misleading news articles, impersonate others online, automate the production of abusive or faked content to post on social media and automate the production of spam/phishing content,” said OpenAI.

Today, malicious actors – some of them political in nature – have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”.

OpenAI further said that society should consider how research into the generation of synthetic images, videos, audio and text may combine to unlock new, as-yet-unanticipated capabilities for these bad actors.


Musk, a staunch critic of AI who co-founded OpenAI in 2016, stepped down from its board in 2018.

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies. (IANS)