Sunday January 26, 2020

Machine Learning and AI: The Puzzle is Not Solved Yet

Fifty years ago, a chess-playing programme was considered a form of AI

Rana el Kaliouby, CEO of the Boston-based artificial intelligence firm Affectiva, is pictured in Boston, April 23, 2018. Affectiva builds face-scanning technology for detecting emotions, but its founders decline business opportunities that involve spying on people. VOA

By Nishant Arora

The most buzzed-about disruptive technologies changing business landscapes today are Machine Learning (ML) and Artificial Intelligence (AI). Almost all of us have heard or read about them, but do we actually know what the fuss is all about?

Enterprises are trying to harness the explosion of digital data and computational power, using advanced algorithms to enable collaborative and natural interactions between people and machines.

However, there is still a lot of confusion among the public and the media about what ML and AI are.

Some people prefer to write "AI and ML" rather than "ML and AI", and the argument goes that the former ordering syncs better with the human mind.

The two terms are often used as synonyms and, in some cases, as discrete, parallel advancements.

In reality, ML is to AI what neurons are to the human brain. Let us start with ML.

According to Roberto Iriondo, Editor of the Machine Learning Department at Carnegie Mellon University in Pennsylvania, ML is a branch of AI.

As coined by computer scientist and machine learning pioneer Tom M. Mitchell, “ML is the study of computer algorithms that allow computer programmes to automatically improve through experience”.

For instance, if you provide an ML model with songs that you enjoy, along with audio statistics for each (danceability, instrumentalness, tempo or genre), it can generate a system that suggests music you will enjoy in the future, much as Netflix, Spotify and other companies do.
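The recommendation idea Iriondo describes can be sketched as a simple content-based recommender: average the audio features of the songs a listener likes into a taste profile, then rank candidates by similarity to it. The song names, feature choices and numbers below are invented for illustration; real services use far richer signals and models.

```python
import math

# Hypothetical audio features per song: (danceability, instrumentalness, tempo, normalised)
liked_songs = {
    "song_a": (0.9, 0.1, 0.8),
    "song_b": (0.8, 0.2, 0.7),
}
catalogue = {
    "upbeat_track": (0.85, 0.15, 0.75),
    "slow_ballad": (0.2, 0.9, 0.3),
}

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def recommend(liked, candidates):
    # Average the liked-song features into one taste profile,
    # then return the candidate most similar to that profile.
    n = len(liked)
    profile = tuple(sum(f[i] for f in liked.values()) / n for i in range(3))
    return max(candidates, key=lambda name: cosine(profile, candidates[name]))

print(recommend(liked_songs, catalogue))  # prints "upbeat_track"
```

The taste profile here is just a mean vector; production systems typically learn the representation itself from listening behaviour rather than hand-picked features.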

“In a simple example, if you load an ML programme with a considerably large data-set of X-ray pictures along with their descriptions (symptoms, etc.), it will have the capacity to assist (or perhaps automate) the data analysis of X-ray pictures later on,” said Iriondo.

The ML model will look at each of the pictures in the data-set and find common patterns among pictures that have been labelled with comparable indications.
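That pattern-finding step can be illustrated with a minimal nearest-centroid classifier: average the feature vectors that share a label, then assign new inputs to the nearest average. The two numeric "features" standing in for each X-ray image below are hypothetical; a real system would learn its features from the pixels themselves.

```python
# Toy stand-in for the X-ray example: each "image" is reduced to two
# hypothetical numeric features, and labels mark whether a symptom was noted.
labelled = [
    ((0.9, 0.8), "symptom"),
    ((0.8, 0.9), "symptom"),
    ((0.1, 0.2), "healthy"),
    ((0.2, 0.1), "healthy"),
]

def centroids(data):
    # Capture the "common pattern" per label by averaging its feature vectors.
    sums, counts = {}, {}
    for features, label in data:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(x / counts[lab] for x in s) for lab, s in sums.items()}

def classify(features, cents):
    # Assign the label whose centroid is nearest (squared Euclidean distance).
    return min(cents, key=lambda lab: sum((a - b) ** 2 for a, b in zip(features, cents[lab])))

cents = centroids(labelled)
print(classify((0.85, 0.85), cents))  # prints "symptom"
```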

Rana el Kaliouby, CEO of the Boston-based artificial intelligence firm Affectiva, demonstrates the company’s facial recognition technology, in Boston, April 23, 2018. VOA

AI, on the other hand, is exceptionally wide in scope; it is a system in itself, not just a set of independent data models.

In simpler terms, AI means creating computers that behave in the way humans do.

However, according to Theo van Kraay, Cloud Solution Architect (Advanced Analytics & AI), Customer Success Unit at Microsoft, any attempt to define AI is somewhat futile, since we would first have to properly define “intelligence”, a word which conjures a wide variety of connotations.

“Firstly, it is interesting and important to note that the technical difference between what used to be referred to as AI over 20 years ago and traditional computer systems is close to zero,” says van Kraay.

What AI systems today are doing reflects an important characteristic of human beings that separates us from traditional computer systems: human beings are prediction machines.

Many AI systems today, like human beings, are mostly sophisticated prediction machines.

“The more sophisticated the machine, the more it is able to make accurate predictions based on a complex array of data used to train various (ML) models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence,” van Kraay said.

Most ML algorithms are trained on static data sets to produce predictive models, so ML algorithms facilitate only part of the dynamic in the definition of AI.
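The contrast van Kraay draws, between a model fit once on static data and a system that keeps correcting its faulty assertions, can be sketched with a perceptron, one of the simplest learners that updates its weights only when a prediction is wrong. The task below is a toy logical-AND problem chosen for brevity, not anything from the article.

```python
# Minimal sketch of "learning from faulty assertions": a perceptron
# updates only on mistakes, so every weight change is driven by an error.
def train(samples, epochs=10, lr=0.5):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # non-zero only on a faulty prediction
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function through repeated correction of mistakes.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
assert all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t for (x1, x2), t in data)
```

Deployed systems rarely stop at this: continual learning usually means periodic retraining or online updates on fresh data, but the error-driven loop is the same in spirit.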

Fifty years ago, a chess-playing programme was considered a form of AI.


Today, a chess game would be considered dull and antiquated, because it can be found on almost every computer.

“AI today is symbolised with human-AI interaction gadgets like Google Home, Apple Siri and Amazon Alexa or ML-powered video prediction systems that power Netflix, Amazon and YouTube,” says Iriondo.

In contrast to ML, AI is a moving target, and its definition changes as the related technologies become further developed.

“Possibly, within a few decades, today’s innovative AI advancements will be considered as dull as flip-phones are to us right now,” quips Iriondo. (IANS)


AI-based Google Model Beats Humans in Detecting Breast Cancer

This work, said Google, is the latest strand of its research looking into detection and diagnosis of breast cancer, not just within the scope of radiology, but also pathology

The Google name is displayed outside the company's office in London, Britain. VOA

In a ray of hope for those who undergo breast cancer screening, and even for healthy women who get false alarms during digital mammography, an Artificial Intelligence (AI)-based Google model has outperformed radiologists in spotting breast cancer just by scanning the X-ray results.

Reading mammograms is a difficult task, even for experts, and can often result in both false positives and false negatives.

In turn, these inaccuracies can lead to delays in detection and treatment, unnecessary stress for patients and a higher workload for radiologists who are already in short supply, Google said in a blog post on Wednesday.
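False positives and false negatives are usually quantified as rates over a confusion matrix, which is how reductions like those reported later in this piece are measured. The counts below are made up for illustration and are not from the Google study.

```python
# Screening errors tallied against ground truth: tp = cancers correctly
# flagged, fp = healthy cases wrongly flagged, tn = healthy cases cleared,
# fn = cancers missed. The counts are illustrative only.
def rates(tp, fp, tn, fn):
    fpr = fp / (fp + tn)   # false-positive rate: healthy cases flagged as cancer
    fnr = fn / (fn + tp)   # false-negative rate: cancers the reader missed
    return fpr, fnr

baseline = rates(tp=80, fp=50, tn=900, fn=20)
improved = rates(tp=85, fp=40, tn=910, fn=15)
print(baseline, improved)
```

Lowering both rates at once, as the study reports, is the hard part; a screening threshold can usually trade one error type for the other.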

Google’s AI model spotted breast cancer in de-identified screening mammograms (where identifiable information has been removed) with greater accuracy, fewer false positives and fewer false negatives than experts.

“This sets the stage for future applications where the model could potentially support radiologists performing breast cancer screenings,” said Shravya Shetty, Technical Lead, Google Health.

Digital mammography, or X-ray imaging of the breast, is the most common method of screening for breast cancer, with over 42 million exams performed each year in the US and the UK combined.

“But despite the wide usage of digital mammography, spotting and diagnosing breast cancer early remains a challenge,” said Daniel Tse, Product Manager, Google Health.

Together with colleagues at DeepMind, Cancer Research UK Imperial Centre, Northwestern University and Royal Surrey County Hospital, Google set out to see if AI could support radiologists to spot the signs of breast cancer more accurately.

The findings, published in the journal Nature, showed that AI could improve the detection of breast cancer.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society.” VOA

The Google AI model was trained and tuned on a representative data set comprising de-identified mammograms from more than 76,000 women in the UK and more than 15,000 women in the US, to see if it could learn to spot signs of breast cancer in the scans.

The model was then evaluated on a separate de-identified data set of more than 25,000 women in the UK and over 3,000 women in the US.

“In this evaluation, our system produced a 5.7 per cent reduction of false positives in the US, and a 1.2 per cent reduction in the UK. It produced a 9.4 per cent reduction in false negatives in the US, and a 2.7 per cent reduction in the UK,” Google said.

The researchers then trained the AI model only on the data from the women in the UK and then evaluated it on the data set from women in the US.

In this separate experiment, there was a 3.5 per cent reduction in false positives and an 8.1 per cent reduction in false negatives, “showing the model’s potential to generalize to new clinical settings while still performing at a higher level than experts”.

Notably, when making its decisions, the model received less information than human experts did.

The human experts (in line with routine practice) had access to patient histories and prior mammograms, while the model only processed the most recent anonymized mammogram with no extra information.


Despite working from these X-ray images alone, the model surpassed individual experts in accurately identifying breast cancer.

This work, said Google, is the latest strand of its research looking into detection and diagnosis of breast cancer, not just within the scope of radiology, but also pathology.

“We’re looking forward to working with our partners in the coming years to translate our machine learning research into tools that benefit clinicians and patients,” said the tech giant. (IANS)