
Google: We Won’t Develop Deadly AI Weapons, But Will Help The Military

Google won't deploy AI to build military weapons: Pichai

Google's new Search feature gives single result to certain queries. Pixabay

After facing backlash over its involvement in the Artificial Intelligence (AI)-powered Pentagon project “Maven”, Google CEO Sundar Pichai has emphasised that the company will not work on technologies that cause or are likely to cause overall harm.

About 4,000 Google employees had signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology”.

Following the outcry, Google decided not to renew the “Maven” AI project with the US Defence Department after it expires in 2019.

“We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” Pichai said in a blog post late Thursday.

“We will not pursue AI in technologies that gather or use information for surveillance violating internationally accepted norms,” the Indian-born CEO added.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas like cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” Pichai noted.

Google CEO Sundar Pichai. (Wikimedia Commons)

Google will incorporate its privacy principles in the development and use of its AI technologies, providing appropriate transparency and control over the use of data, Pichai emphasised.

In a blog post describing seven “AI principles”, he said these are not theoretical concepts but “concrete standards that will actively govern our research and product development and will impact our business decisions”.

“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” Pichai posted.

Google will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where it operates.


“We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief,” Pichai noted.

Pichai said Google will design AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research.

“We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control,” he added. (IANS)


Researchers Develop AI Algorithm That can Solve Rubik’s Cube in Less Than a Second

According to the researchers, the ultimate goal of projects such as this one is to build the next generation of AI systems


Researchers have developed an AI algorithm that can solve a Rubik’s Cube in a fraction of a second, faster than most humans. The work is a step toward making AI systems that can think, reason, plan and make decisions.

The study, published in the journal Nature Machine Intelligence, shows DeepCubeA — a deep reinforcement learning algorithm programmed by University of California computer scientists and mathematicians — can solve the Rubik’s Cube in a fraction of a second, without any specific domain knowledge or in-game coaching from humans.

This is no simple task considering that the cube has completion paths numbering in the billions but only one goal state – each of its six sides displaying a solid colour – which cannot, in practice, be found through random moves.
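The article does not describe the mechanics, but the published DeepCubeA approach reportedly pairs a deep neural network that estimates the cost-to-go from any scrambled state with a weighted A*-style search that uses that estimate to order candidate moves. The sketch below is a minimal, assumed illustration of that search pattern, not the DeepCubeA code: it runs on the 8-puzzle (a sliding tile puzzle, one of the other games mentioned later in this story) and substitutes a hand-written Manhattan-distance heuristic for the learned value network; all function names are hypothetical.

```python
# Minimal sketch (assumption, not the DeepCubeA implementation): a learned
# cost-to-go estimator guiding a weighted A* search. Here a Manhattan-distance
# heuristic stands in for the neural network, and the 8-puzzle stands in for
# the Rubik's Cube so the example stays small and self-contained.
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank tile

def neighbours(state):
    """Yield every state reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def cost_to_go(state):
    """Stand-in for the learned value network: Manhattan distance to the goal."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile - 1
        dist += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return dist

def weighted_a_star(start, weight=1.0):
    """Best-first search ordered by path cost + weight * estimated cost-to-go."""
    frontier = [(weight * cost_to_go(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in neighbours(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                f = ng + weight * cost_to_go(nxt)
                heapq.heappush(frontier, (f, ng, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    scrambled = (1, 2, 3, 4, 0, 6, 7, 5, 8)  # a couple of moves from the goal
    solution = weighted_a_star(scrambled)
    print(f"solved in {len(solution) - 1} moves")
```

In the reported approach the weight trades solution quality against search effort; with a perfect cost-to-go estimate and weight 1.0 the search returns a shortest path, which is consistent with the result quoted below that the system often, but not always, finds the minimum-length solution.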

“Artificial Intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” said study author Pierre Baldi, Professor at the University of California.

“The solution to the Rubik’s Cube involves more symbolic, mathematical and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan and make decisions,” Baldi said.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society.” VOA

In the study, the researchers demonstrated that DeepCubeA solved 100 per cent of all test configurations, finding the shortest path to the goal state about 60 per cent of the time.

The algorithm also works on other combinatorial games such as the sliding tile puzzle, Lights Out and Sokoban.

The researchers were interested in understanding how and why the Artificial Intelligence (AI) made its moves and how long it took to perfect its method.


“It learned on its own. Our AI takes about 20 moves, most of the time solving it in the minimum number of steps,” Baldi said.

“Right there, you can see the strategy is different, so my best guess is that the AI’s form of reasoning is completely different from a human’s,” he added.

According to the researchers, the ultimate goal of projects such as this one is to build the next generation of AI systems. (IANS)