
Enterprises now deploying AI technologies: Infosys

More than 1,000 business and IT leaders with decision-making power over AI solutions were surveyed

(Picture: www.dqindia.com)
  • Enterprises are deploying AI more broadly
  • This is bringing positive change to the way they work
  • Companies are investing huge amounts in AI technology

Enterprises are moving beyond the experimentation phase with Artificial Intelligence (AI) and are now deploying AI technologies more broadly, according to an Infosys survey released on Tuesday.

There is a fundamental shift in how enterprises operate as AI takes hold, according to the “Leadership in the Age of AI” survey.


India to deploy more AI technology, revealed Infosys survey.

“AI, as the research shows, is becoming core to business strategy, and is compelling business leaders to alter the way they hire, train and inspire teams, and the way they compete and foster innovation. Industry disruption from AI is no longer imminent, it is here,” Mohit Joshi, President, Infosys, said in a statement.

“The organisations that embrace AI with a clearly-defined strategy and use AI to amplify their workforce rather than replace it, will take the lead, and those that don’t will fall behind or find themselves irrelevant,” Joshi added.

Seventy-three per cent of respondents strongly agreed that their AI deployments have already transformed the way they do business, and 90 per cent of C-level executives reported measurable benefits from AI within their organisation.

Use of AI is transforming businesses.


Organisations are taking steps to prepare employees and business leaders for the future of work, with 53 per cent of respondents indicating that their organisation has increased training in the job functions most affected by AI deployments.

More than 1,000 business and IT leaders with decision-making power over AI solutions or purchases at big organisations across seven countries were included in the survey. (IANS)


Musk-founded AI Group Not to Release Software on ‘Fake News’ Fears

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies

Tesla CEO Elon Musk. (VOA)

Elon Musk-founded non-profit Artificial Intelligence (AI) research group OpenAI has decided not to reveal its new AI software in detail, fearing the model could be misused by bad actors to create realistic-looking fake news.

Dubbed "GPT2", the AI-based automated text generator can produce fake news articles and abusive posts after being fed only a small amount of text.

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text and performs rudimentary reading comprehension, machine translation, question answering and summarization – all without task-specific training,” OpenAI said in a blog post late on Thursday.

Trained on a data set of eight million web pages, “GPT2” can adapt to the style and the content of the text you feed it.

OpenAI said the AI model is so good and the risk of malicious use is so high that it is not releasing the full research to the public.

However, the non-profit has created a smaller model that lets researchers experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.
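
That smaller released model can be exercised with only a few lines of code. The sketch below is illustrative, not something described in the article: it assumes the Hugging Face transformers library and its publicly available "gpt2" checkpoint, and simply feeds the model a short prompt and samples a continuation, the behaviour the article describes.

```python
# Minimal sketch (assumption, not from the article): sampling text from the
# publicly released small GPT-2 checkpoint via the Hugging Face
# "transformers" library. Model name and generation settings are chosen
# purely for illustration.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Scientists announced on Tuesday that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; the model adapts to the style of the prompt.
output_ids = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,      # random sampling rather than greedy decoding
    top_k=40,            # restrict sampling to the 40 most likely tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
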

Elon Musk, CEO of SpaceX. (Wikimedia Commons)

“We can imagine the application of these models for malicious purposes, including the following: Generate misleading news articles, impersonate others online, automate the production of abusive or faked content to post on social media and automate the production of spam/phishing content,” said OpenAI.

Today, malicious actors – some of which are political in nature – have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”.

OpenAI further warned that research into the generation of synthetic images, videos, audio and text may combine to unlock new, as-yet-unanticipated capabilities for these bad actors.


Musk, a staunch critic of AI who co-founded OpenAI in 2016, stepped down from its board in 2018.

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies. (IANS)