Sunday October 22, 2017

‘Killer robots with AI should be banned’


By NewsGram Staff Writer

Addressing concerns about the start of a “military arms race”, more than 1,000 robotics experts and artificial intelligence (AI) researchers, including physicist Stephen Hawking, technologist Elon Musk, and philosopher Noam Chomsky, have signed an open letter calling for a ban on offensive autonomous weapons, better known as “killer robots”.

Apart from hundreds of AI and robotics researchers from top-flight universities and laboratories, the signatories of the letter include Apple co-founder Steve Wozniak.

“AI technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades”, says the letter put together by the Future of Life Institute, a group that works to mitigate “existential risks facing humanity”.

Autonomous weapons “have been described as the third revolution in warfare, after gunpowder and nuclear arms”, the letter further adds.

The weapons include armed drones that can search for and kill certain people based on their programming.


While the institute sees the “great potential [of AI] to benefit humanity in many ways”, the letter warns that robotic weapons would prove useful to terrorists, brutal dictators, and those wishing to perpetrate ethnic cleansing.

Such weapons do not yet truly exist, but the technology that would make them possible is in the works.

By eliminating the risk of human deaths, robotic weapons would lower the threshold for going to war, thereby making wars potentially more common, the signatories to the letter believe.

The letter also warns that building robotic weapons could provoke a public backlash that curtails the genuine benefits of AI.

Working to pre-emptively ban robotic weapons, the Campaign to Stop Killer Robots, a group formed in 2012 by a coalition of NGOs including Human Rights Watch, is trying to get the Convention on Certain Conventional Weapons to set up a group of governmental experts to look into the issue.


The Convention on Certain Conventional Weapons in Geneva is a UN-linked body that seeks to prohibit the use of certain conventional weapons, such as landmines and blinding laser weapons, the latter pre-emptively banned in 1995.

Meanwhile, the UK has opposed a ban on killer robots at a UN conference, saying that it “does not see the need for a prohibition” of autonomous weapons.

South Korea has unveiled similar weapons: armed sentry robots whose cameras and heat sensors allow them to detect and track humans automatically, although the machines require a human operator to fire the weapons.

Next Story

Elon Musk Unveils Plans to put Humans on Mars by 2024

Elon Musk, founder, CEO and lead designer at SpaceX. VOA

Adelaide, Sep 29: Elon Musk, founder of SpaceX, on Friday unveiled plans to put humans on Mars as early as 2024.

Speaking on the final day of the 68th International Astronautical Congress (IAC) here, SpaceX CEO Elon Musk announced the plans, reports Xinhua news agency.

Musk, who also serves as the CEO of automotive company Tesla, said SpaceX was aiming for cargo missions to the Red Planet in 2022 and crew with cargo by 2024.

He said that missions to Mars would be launched every two years from 2022 onwards with colonisation and terraforming to begin as soon as the first humans arrive in order to make it “a really nice place to be”.

“It’s about believing in the future, and thinking that the future will be better than the past,” Musk said.

SpaceX also announced its new BFR rocket on Friday.

“I can’t emphasise enough how profound this is, and how important this is,” Musk told the Congress as the keynote speaker on the final day.

The new BFR has the highest payload capacity of any rocket ever built and, because it is fully reusable, the lowest launch cost, while also being the most powerful.

“It’s really crazy that we build these sophisticated rockets and then crash them every time we fire,” Musk said.

He said that the new BFR could carry a 40-carriage spaceship to Mars with two or three people occupying each carriage.

The rocket is capable of flying from Earth to the Moon and back without refuelling, making a base on the Moon, dubbed Moon Base Alpha, achievable in the near future.

SpaceX intends for the new, scaled-down BFR to replace its other flagship vehicles: the Dragon, Falcon 9 and Falcon Heavy.

Musk said the BFR could even be used for international flights on Earth, promising to cut most long-distance Earth flights to just half an hour.

He said the rocket could travel from New York City to Shanghai in 37 minutes at a maximum speed of 18,000 miles (28,968 km) per hour.
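A rough back-of-envelope check of those figures (a sketch only; the city coordinates and the assumption of flying at top speed are ours, not SpaceX’s): the great-circle distance from New York to Shanghai is roughly 7,400 miles, so the cruise portion alone at 18,000 mph would take about 25 minutes, leaving the rest of the quoted 37 minutes for ascent, acceleration and descent.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    R = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Approximate city coordinates, assumed for illustration
dist = haversine_miles(40.71, -74.01, 31.23, 121.47)  # New York -> Shanghai
cruise_minutes = dist / 18000 * 60                    # time at top speed only
print(f"{dist:.0f} miles, ~{cruise_minutes:.0f} min at top speed")
```

The gap between the cruise-only estimate and the quoted 37 minutes is consistent with a trajectory that spends part of the trip below top speed.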

Funding for BFR development will come from SpaceX’s satellite and International Space Station (ISS) revenue.

SpaceX’s announcement came hours after Lockheed Martin revealed new technology that would see it land on Mars in partnership with NASA by 2030.

SpaceX estimated this year that a permanent, self-sustaining colony on Mars was 50 to 100 years away. (IANS)

Next Story

Facebook’s Artificial Intelligence Shut Down After It Creates Its Own Language

Mark Zuckerberg doesn't understand Artificial Intelligence, says tech business magnate Elon Musk

Artificial Intelligence Bot. Pixabay
  • Facebook shut down an AI program after its bots began developing their own language to decide how to complete a task
  • Researchers identified this when they found two bots in the lab having a seemingly gibberish exchange that actually carried semantic meaning
  • Tech business magnate Elon Musk says the Facebook CEO has a limited understanding of AI technology

New Delhi, August 2, 2017: Researchers at Facebook shut down an artificial intelligence program after its bots started to create their own language, developing a system of code words to make communication more efficient.

According to digital media reports, this incident at Facebook is not the first of its kind; in the past, too, AI agents have diverged from their training to communicate in English and developed their own languages. To a layperson such a language may come off as “gibberish”, but it carries semantic meaning when deciphered by experts and the AI ‘agents’ themselves.

The researchers at Facebook noticed that their AI bots had given up on English; the new language they created let them communicate with other AI bots and decide the future course of action. The language, which at first appeared unintelligible to the researchers, actually represented the task at hand and a possible way to proceed. The researchers noticed this when two bots in the lab began exchanging messages with each other.

AI developers at other companies have observed a similar use of “shorthands” to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.

At Google, the team working on the Translate service discovered that the AI they programmed had silently written its own language to aid in translating sentences.


The Translate developers had added a neural network to the system, making it capable of translating between language pairs it had never been taught. The new language the AI silently wrote was a surprise.

This incident with Facebook’s AI failure made more news when Elon Musk, the founder of OpenAI, remarked on Zuckerberg’s AI faux pas. In an exchange on Twitter, Musk said in a tweet that the Facebook CEO doesn’t have much understanding of AI technology.


To which Mark Zuckerberg in a Live Q&A session responded, “Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used”.

There is not enough evidence to claim that these unforeseen AI divergences are a threat or that they could lead to machines taking over from their operators. They do make development more difficult, however, because people are unable to grasp the overwhelmingly logical nature of the new languages. In Google’s case, for example, the AI had developed a language that no human could understand but that was potentially the most efficient known solution to the problem.

Prepared by Nivedita Motwani. Twitter @Mind_Makeup


NewsGram is a Chicago-based non-profit media organization. We depend upon support from our readers to maintain our objective reporting. Show your support by Donating to NewsGram. Donations to NewsGram are tax-exempt.

Click Here: www.newsgram.com/donate

Next Story

11-year-old Indian-origin Arnav Sharma beats Albert Einstein, Stephen Hawking in Mensa IQ test in UK

Wonder boy Arnav Sharma gained a score of 162 -- the maximum possible result you can achieve on the paper

Arnav Sharma. Wikimedia
  • Arnav Sharma, from Reading town in southern England, passed the infamously difficult Mensa IQ test a few weeks back with zero preparation
  • His mark in the exam, which primarily measures verbal reasoning ability, puts him in the top one per cent of the nation in terms of IQ level
  • The “genius benchmark” is set at 140 and Sharma gained a score of 162 — the maximum possible result you can achieve on the paper

London, July 1, 2017: An 11-year-old Indian-origin boy here has scored 162 in the prestigious Mensa IQ test, two points higher than geniuses Albert Einstein and Stephen Hawking.

Arnav Sharma, from Reading town in southern England, passed the infamously difficult test a few weeks back with zero preparation. The Mensa IQ test was developed in Britain to form an elite society of intelligent people, the Independent reported.

The “genius benchmark” is set at 140 and Sharma gained a score of 162 — the maximum possible result you can achieve on the paper.

His mark in the exam, which primarily measures verbal reasoning ability, puts him in the top one per cent of the nation in terms of IQ level.


“The Mensa test is quite hard and not many people pass it, so do not expect to pass,” Sharma told the daily.

Sharma said: “I had no preparation at all for the exam but I was not nervous. My family were surprised but they were also very happy when I told them about the result.”

The boy’s mother, Meesha Dhamija Sharma, said she kept her “fingers crossed” during his exam.

“I was thinking what is going to happen because you never know and he had never seen what a paper looks like,” she said.

Sharma said his hobbies are coding, badminton, piano, swimming and reading. He also has unusually good geographical knowledge and can name all the capitals of the world.

A spokesperson for Mensa praised the 11-year-old boy, saying: “It is a high mark which only a small percentage of people in the country will achieve.”

Mensa was founded in 1946 in Oxford by Lancelot Lionel Ware, a scientist and lawyer, and Roland Berrill, an Australian barrister, but the organisation later spread around the world.

Its mission is to “identify and foster human intelligence for the benefit of humanity”. (IANS)