Wednesday February 20, 2019

Humanity’s days are numbered, Artificial Intelligence (AI) will cause mass extinction, warns Stephen Hawking

Scientist Stephen Hawking giving his views on the danger of Artificial Intelligence (AI)

London, Nov 3: Earth is becoming too small and humanity is bound to self-destruct, with Artificial Intelligence (AI) replacing us as the dominant being on the planet, according to scientist Stephen Hawking.

Professor Hawking says that humanity’s days on Earth are numbered now that we have passed the point of “no return”.

The theoretical physicist says that developments in AI have been so great that the machines will one day be more dominant than human beings, express.co.uk reported.

He told Wired Magazine: “I fear that Artificial Intelligence (AI) may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself.

“This will be a new form of life that outperforms humans.”

Hawking, 75, said that humans need to leave Earth if we are to continue as a species.

He said a new space programme should be humanity’s top priority “with a view to eventually colonising suitable planets for human habitation”.

This will allow us to leave Earth and colonise another planet to ensure our survival, otherwise there will be “serious consequences”.

Professor Hawking added: “I believe we have reached the point of no return. Our earth is becoming too small for us, global population is increasing at an alarming rate and we are in danger of self-destructing.”

Last year, at the opening of Cambridge University’s artificial intelligence centre, Professor Hawking said that AI could either be the best or worst invention humanity has ever made.


“The potential benefits of creating intelligence are huge. We cannot predict what we might achieve, when our own minds are amplified by Artificial Intelligence (AI).

“Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialisation.

“And surely we will aim to finally eradicate disease and poverty. Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation.

“But it could also be the last, unless we learn how to avoid the risks. Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.” (IANS)


Musk-founded AI Group Not to Release Software on ‘Fake News’ Fears

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies

Tesla CEO Elon Musk. (VOA)

Elon Musk-founded non-profit Artificial Intelligence (AI) research group OpenAI has decided not to reveal its new AI software in detail, fearing the AI-based model could be misused by bad actors to create real-looking fake news.

Dubbed “GPT2”, the AI-based automated text generator can produce fake news articles and abusive posts after being fed a small amount of data.

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text and performs rudimentary reading comprehension, machine translation, question answering and summarization – all without task-specific training,” OpenAI said in a blog post late on Thursday.

Trained on a data set of eight million web pages, “GPT2” can adapt to the style and the content of the text you feed it.

OpenAI said the AI model is so good and the risk of malicious use is so high that it is not releasing the full research to the public.

However, the non-profit has created a smaller model that lets researchers experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.


“We can imagine the application of these models for malicious purposes, including the following: Generate misleading news articles, impersonate others online, automate the production of abusive or faked content to post on social media and automate the production of spam/phishing content,” said OpenAI.

Today, malicious actors – some of whom are political in nature – have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”.

OpenAI further said that we should consider how research into the generation of synthetic images, videos, audio and text may further combine to unlock new as-yet-unanticipated capabilities for these bad actors.


Musk, a staunch critic of AI who co-founded OpenAI in 2016, stepped down from its board in 2018.

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies. (IANS)