Monday December 18, 2017

‘Killer robots with AI should be banned’


By NewsGram Staff Writer

Addressing concerns about the start of a “military arms race”, more than 1,000 robotics experts and artificial intelligence (AI) researchers, including physicist Stephen Hawking, technologist Elon Musk, and philosopher Noam Chomsky, have signed an open letter calling for a ban on offensive autonomous weapons, better known as “killer robots”.

Apart from hundreds of AI and robotics researchers from top-flight universities and laboratories, the signatories of the letter include Apple co-founder Steve Wozniak.

“AI technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades”, says the letter put together by the Future of Life Institute, a group that works to mitigate “existential risks facing humanity”.

Autonomous weapons “have been described as the third revolution in warfare, after gunpowder and nuclear arms”, the letter further adds.

The weapons include armed drones that can search for and kill certain people based on their programming.


Warning against the pitfalls of AI, the letter says that despite the institute seeing the “great potential [of AI] to benefit humanity in many ways”, the development of robotic weapons would prove useful to terrorists, brutal dictators, and those wishing to perpetrate ethnic cleansing.

Such weapons do not yet truly exist, but the technology that would enable them is in development.

By removing the risk of human casualties on the attacking side, robotic weapons would lower the threshold for going to war, potentially making wars more common, the signatories to the letter believe.

The letter warns that building robotic weapons could provoke a public backlash that curtails the genuine benefits of AI.

Working to pre-emptively ban robotic weapons, the Campaign to Stop Killer Robots, a coalition of NGOs including Human Rights Watch formed in 2012, is trying to get the Convention on Certain Conventional Weapons to set up a group of governmental experts to look into the issue.


The Convention on Certain Conventional Weapons, a UN-linked body based in Geneva, seeks to prohibit the use of certain conventional weapons such as landmines and blinding laser weapons, the latter pre-emptively banned in 1995.

Meanwhile, the UK has opposed a ban on killer robots at a UN conference, saying that it “does not see the need for a prohibition” of autonomous weapons.

South Korea has already unveiled similar weapons: armed sentry robots whose cameras and heat sensors automatically detect and track humans, although the machines require a human operator to fire their weapons.

Next Story

Stephen Hawking believes Technology could end Poverty and Disease, says Artificial Intelligence could be the Worst or Best thing for Humanity

Hawking said everyone has a role to play in making sure that this generation and the next are fully engaged with the study of science at an early level to create “a better world for the whole human race.”

Stephen Hawking
Cosmologist Stephen Hawking delivers a video message during the inauguration of Web Summit, Europe's biggest tech conference, in Lisbon, Portugal, Nov. 6, 2017. (VOA)

Lisbon, November 7, 2017 : Technology can hopefully reverse some of the harm caused to the planet by industrialization and help end disease and poverty, but artificial intelligence (AI) needs to be controlled, physicist Stephen Hawking said on Monday.

Hawking, a British cosmologist who was diagnosed with motor neuron disease aged 21, said technology could transform every aspect of life but cautioned that artificial intelligence poses new challenges.

He said artificial intelligence and robots are already threatening millions of jobs — but this new revolution could be used to help society and for the good of the world such as alleviating poverty and disease.

“The rise of AI could be the worst or the best thing that has happened for humanity,” Stephen Hawking said via telepresence at opening night of the 2017 Web Summit in Lisbon that is attended by about 60,000 people.

“We simply need to be aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance.”

Hawking’s comments come amid an escalating debate about the pros and cons of artificial intelligence, a term used to describe machines running computer code that learns as it goes.


Silicon Valley entrepreneur Elon Musk, who is chief executive of electric car maker Tesla Inc and rocket company SpaceX, has warned that AI is a threat to humankind’s existence.

But Microsoft co-founder Bill Gates, in a rare interview recently, told the WSJ Magazine that there was nothing to panic about.

Stephen Hawking said everyone has a role to play in making sure that this generation and the next are fully engaged with the study of science at an early level to create “a better world for the whole human race.”


“We need to take learning beyond a theoretical discussion of how AI should be, and take action to make sure we plan for how it can be,” said Stephen Hawking, who communicates via a cheek muscle linked to a sensor and computerized voice system.

“You all have the potential to push the boundaries of what is accepted, or expected, and to think big. We stand on the threshold of a brave new world. It is an exciting — if precarious — place to be and you are the pioneers,” he said. (VOA)

Next Story

Elon Musk Unveils Plans to put Humans on Mars by 2024

Elon Musk
Elon Musk, founder, CEO and lead designer at SpaceX. (VOA)

Adelaide, Sep 29: Elon Musk, founder of SpaceX, on Friday unveiled his plans to put humans on Mars as early as 2024.

Speaking on the final day of the 68th International Astronautical Congress (IAC) here, SpaceX CEO Elon Musk made the announcement of the plans, reports Xinhua news agency.

Musk, who also serves as the CEO of automotive company Tesla, said SpaceX was aiming for cargo missions to the Red Planet in 2022 and crew with cargo by 2024.

He said that missions to Mars would be launched every two years from 2022 onwards with colonisation and terraforming to begin as soon as the first humans arrive in order to make it “a really nice place to be”.

“It’s about believing in the future, and thinking that the future will be better than the past,” Musk said.

SpaceX also announced its new BFR rocket on Friday.

“I can’t emphasise enough how profound this is, and how important this is,” Musk told the Congress as the keynote speaker on the final day.

The new BFR has the highest payload capacity of any rocket ever built and, because it is fully reusable, the lowest launch cost, while also being the most powerful.

“It’s really crazy that we build these sophisticated rockets and then crash them every time we fire,” Musk said.

He said that the new BFR could carry a spaceship with 40 cabins to Mars, with two or three people occupying each cabin.

The rocket is capable of flying from Earth to the Moon and back without refuelling, making a base on the Moon, dubbed Moon Base Alpha, achievable in the near future.

SpaceX intends for the new, scaled-down BFR to replace its other flagship vehicles: the Dragon, the Falcon 9 and the Falcon Heavy.

Musk said the BFR could even be used for international flights on Earth, promising to cut most long-distance Earth flights to just half an hour.

He said the rocket could travel from New York City to Shanghai in 37 minutes at a maximum speed of 18,000 miles (28,968 km) per hour.
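The quoted figures can be roughly reconciled with a few lines of arithmetic. As a sketch (assuming a great-circle distance of about 11,900 km between New York and Shanghai, a figure not given in the article), a 37-minute trip implies an average speed well below the 28,968 km/h peak, which is consistent with the flight spending part of its time accelerating and decelerating:

```python
# Hedged sanity check of the quoted New York-Shanghai figures.
# Assumption (not from the article): great-circle distance ~11,900 km.

DISTANCE_KM = 11_900      # assumed NYC-Shanghai great-circle distance
PEAK_SPEED_KMH = 28_968   # quoted maximum speed (18,000 mph)
TRIP_MINUTES = 37         # quoted travel time

# Average speed implied by covering the distance in the quoted time.
avg_speed_kmh = DISTANCE_KM / (TRIP_MINUTES / 60)

print(f"implied average speed: {avg_speed_kmh:,.0f} km/h")
# The average sits below the quoted peak, as expected for a trajectory
# with boost and re-entry phases at lower speeds.
print(f"below peak speed: {avg_speed_kmh < PEAK_SPEED_KMH}")
```

Under that assumed distance, the implied average works out to roughly 19,300 km/h, about two-thirds of the quoted maximum.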

Funding for BFR development will come from SpaceX’s satellite and International Space Station (ISS) revenue.

SpaceX’s announcement came hours after Lockheed Martin revealed new technology that would see it land on Mars in partnership with NASA by 2030.

SpaceX estimated this year that a permanent, self-sustaining colony on Mars was 50 to 100 years away. (IANS)

Next Story

Facebook’s Artificial Intelligence Shut Down After It Creates Its Own Language

Mark Zuckerberg doesn't understand Artificial Intelligence, says tech business magnate Elon Musk

Artificial Intelligence Bot
Artificial Intelligence Bot. Pixabay
  • Facebook shut down an AI program after its bots started developing their own language to decide how to complete a task
  • Researchers identified this when they found two bots in the lab having a seemingly gibberish exchange that actually carried semantic meaning
  • Tech business magnate Elon Musk says that the Facebook CEO has a limited understanding of AI technology

New Delhi, August 2, 2017: Researchers at Facebook shut down an artificial intelligence program after its bots started to create their own language, developing a system of code words to make their communication more efficient.

According to the Digital Media report, this incident at Facebook is not the first of its kind to have happened while working on AI programs; in the past, too, AI agents have diverged from their training in English and developed their own languages. To a layperson, these languages may come off as “gibberish”, but they carry semantic meaning when deciphered by experts and the AI ‘agents’ themselves.


The researchers at Facebook noticed that their AI bot had given up on English and that the new language it created was capable of communicating with other AI bots and deciding a future course of action as well. The language, which at first appeared unintelligible to the researchers, actually represented a task at hand and a possible conclusion on how to proceed. They noticed this when two bots in the lab began exchanging messages with each other.

AI developers at other companies have observed a similar use of “shorthands” to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.

At Google, the team working on the Translate service discovered that the AI they programmed had silently written its own language to aid in translating sentences.


The Translate developers had added a neural network to the system, making it capable of translating between language pairs it had never been explicitly taught. The new language the AI silently developed was a surprise.

This incident with Facebook’s AI made more news when Elon Musk, a founder of OpenAI, remarked on Zuckerberg’s AI faux pas. In an exchange on Twitter, Musk said that the Facebook CEO doesn’t have much understanding of AI technology.


To which Mark Zuckerberg in a Live Q&A session responded, “Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used”.

There is not enough evidence to claim that these unforeseen AI divergences are a threat or that they could lead to machines taking over from their operators. They do make development more difficult, however, because people cannot follow the relentlessly logical structure of the new languages. In Google’s case, for example, the AI had developed a language that no human could grasp but that was potentially the most efficient known solution to the problem.

Prepared by Nivedita Motwani. Twitter @Mind_Makeup


NewsGram is a Chicago-based non-profit media organization. We depend upon support from our readers to maintain our objective reporting. Show your support by Donating to NewsGram. Donations to NewsGram are tax-exempt.

Click Here: www.newsgram.com/donate