Monday June 18, 2018

Satya Nadella: Robots Won’t Make People Jobless

Microsoft is building a tool that could help businesses make use of AI without inadvertently discriminating against certain groups of people

Satya Nadella: Robots Won't Make People Jobless. (Wikimedia Commons)

Even in a “runaway Artificial Intelligence (AI)” scenario, robots will not render people completely jobless, Microsoft CEO Satya Nadella told The Sunday Telegraph in an interview.

People will always want a job as it gives them “dignity”, Nadella said, adding that the focus should instead be on applying AI technology ethically.

“What I think needs to be done in 2018 is more dialogue around the ethics, the principles that we can use for the engineers and companies that are building AI, so that the choices we make don’t cause us to create systems with bias … that’s the tangible thing we should be working on,” he was quoted as saying.

According to a report in MIT Technology Review on May 25, Microsoft is building a tool to automate the identification of bias in a range of different AI algorithms.

Robots won't render people jobless. (Pixabay)

The Microsoft tool has the potential to help businesses make use of AI without inadvertently discriminating against certain groups of people.

Although Microsoft's new tool may not altogether eliminate the bias that can creep into machine-learning models, it will help AI researchers catch more instances of unfairness, Rich Caruana, a senior researcher at Microsoft who is working on the bias-detection dashboard, told MIT Technology Review.

“Of course, we can’t expect perfection — there’s always going to be some bias undetected or that can’t be eliminated — the goal is to do as well as we can,” he said.
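The article does not describe how Microsoft's dashboard works internally, but the kind of check such a tool might automate can be illustrated with a simple group-fairness test: comparing a model's positive-prediction rate across demographic groups (a rough demographic-parity check). The sketch below is purely illustrative; the group labels, toy data, and the 0.8 threshold (the common "four-fifths rule") are assumptions for the example, not Microsoft's implementation.

```python
# Illustrative sketch only: a simple group-fairness check of the kind a
# bias-detection dashboard might automate. Group names, data, and the
# 0.8 threshold are assumptions for the example.

from collections import defaultdict


def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive predictions (1s) per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}


def demographic_parity_flag(predictions, groups, threshold=0.8):
    """Flag groups whose positive rate falls below `threshold` times
    the highest group's rate (a rough demographic-parity test)."""
    rates = positive_rate_by_group(predictions, groups)
    highest = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * highest}
    return rates, flagged


if __name__ == "__main__":
    # Toy predictions from a hypothetical loan-approval model.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
    rates, flagged = demographic_parity_flag(preds, groups)
    print("Positive rates:", rates)
    if flagged:
        print("Potential bias detected for groups:", sorted(flagged))
    else:
        print("No disparity above the configured threshold.")
```

In this toy run, group B receives positive predictions at half the rate of group A, so the check flags it for human review; a production tool would of course look at many more metrics and data slices than this single rate comparison.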

Also Read: New gen robots to refuel and repair friendly satellites

In the interview with The Sunday Telegraph, Nadella also said that as Microsoft’s business model was based on customers paying for services, he believed the company was on “the right side of history”.

“Our business model is based on our customers being successful, and if they are successful they will pay us. So we are not one of these transaction-driven or ad-driven or marketplace-driven economies,” the Microsoft chief was quoted as saying. (IANS)

Copyright 2018 NewsGram


Google: We Won’t Develop Deadly AI Weapons, But Will Help The Military

Google won't deploy AI to build military weapons: Pichai

Report: Google Needs to do More on Bridging Gender Gap. (Pixabay)

After facing a backlash over its involvement in the Artificial Intelligence (AI)-powered Pentagon project "Maven", Google CEO Sundar Pichai has emphasised that the company will not work on technologies that cause or are likely to cause overall harm.

About 4,000 Google employees had signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology”.

In the wake of the employee outcry, Google decided not to renew the "Maven" AI contract with the US Defence Department after it expires in 2019.

“We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” Pichai said in a blog post late Thursday.

“We will not pursue AI in technologies that gather or use information for surveillance violating internationally accepted norms,” the Indian-born CEO added.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas like cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” Pichai noted.

Google CEO Sundar Pichai. (Wikimedia Commons)

Google will incorporate its privacy principles in the development and use of its AI technologies, providing appropriate transparency and control over the use of data, Pichai emphasised.

In a blog post describing seven “AI principles”, he said these are not theoretical concepts but “concrete standards that will actively govern our research and product development and will impact our business decisions”.

“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” Pichai posted.

Google will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where it operates.

Also Read: Diversity Issues Take Centre Stage at Google Shareholders’ Meet

“We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief,” Pichai noted.

Pichai said Google will design AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research.

“We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control,” he added. (IANS)
