Tuesday July 23, 2019

AI Technique Improves Brain Scans’ Ability to Predict Alzheimer’s Early

"If we diagnose Alzheimer's disease when all the symptoms have manifested, the brain volume loss is so significant that it's too late to intervene," Sohn said


Artificial intelligence (AI) can help improve the ability of brain imaging techniques to predict Alzheimer’s disease early, according to a study.

Researchers from the University of California in San Francisco (UCSF) trained a deep learning algorithm on a special imaging technology known as 18-F-fluorodeoxyglucose positron emission tomography (FDG-PET).

They trained the algorithm on more than 2,100 FDG-PET brain images from 1,002 patients and tested it on an independent set of 40 imaging exams from 40 patients.

The results showed that the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.

It also achieved 100 per cent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.
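Sensitivity here is the true-positive rate: the fraction of patients who eventually received an Alzheimer’s diagnosis that the model had flagged in advance. As a minimal sketch of how that figure is computed (the labels and predictions below are hypothetical, not from the study):

```python
# Minimal sketch of the sensitivity (true-positive rate) metric.
# The labels and predictions below are hypothetical, not from the study.

def sensitivity(y_true, y_pred):
    """Fraction of actual positive cases that the model correctly flags."""
    true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_positives = sum(y_true)
    return true_positives / actual_positives

# 1 = patient later diagnosed with Alzheimer's, 0 = patient did not progress
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 1, 1, 0, 1]   # every true case is flagged -> 100% sensitivity

print(sensitivity(y_true, y_pred))  # 1.0
```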

“We were very pleased with the algorithm’s performance. It was able to predict every single case that advanced to Alzheimer’s disease,” said Jae Ho Sohn, from UCSF’s Radiology and Biomedical Imaging Department.

“If FDG-PET with AI can predict Alzheimer’s disease this early, beta-amyloid plaque and tau protein PET imaging can possibly add another dimension of important predictive power,” he added in the study, published in the journal Radiology.

While early diagnosis of Alzheimer’s is extremely important for treatment, it has proven to be challenging.

"The question for us now is not how to eliminate cholesterol from the brain, but about how to control cholesterol's role in Alzheimer's disease through the regulation of its interaction with amyloid-beta," Vendruscolo said.
The results showed that the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease, Pixabay

Although the cause of the progressive brain disorder remains unconfirmed, research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain.

These changes can be difficult to recognise.

“If we diagnose Alzheimer’s disease when all the symptoms have manifested, the brain volume loss is so significant that it’s too late to intervene,” Sohn said.


“If we can detect it earlier, that’s an opportunity for investigators to potentially find better ways to slow down or even halt the disease process,” he noted.

Sohn explained that the algorithm could be a useful tool to complement the work of radiologists — especially in conjunction with other biochemical and imaging tests — in providing an opportunity for early therapeutic intervention. (IANS)


Researchers Develop AI-driven System to Curb ‘Deepfake’ Videos

Roy-Chowdhury, however, thinks we still have a long way to go before automated tools can detect “deepfake” videos in the wild


At a time when “deepfake” videos have become a new threat to users’ privacy, a team of Indian-origin researchers has developed an Artificial Intelligence (AI)-driven deep neural network that can identify manipulated images at the pixel level with high precision.

Realistic videos that map the facial expressions of one person onto those of another, known as “deepfakes”, present a formidable political weapon in the hands of nation-state bad actors.

Led by Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside, the team is currently working on still images, but the approach could also help them detect “deepfake” videos.

“We trained the system to distinguish between manipulated and nonmanipulated images and now if you give it a new image, it is able to provide a probability that that image is manipulated or not, and to localize the region of the image where the manipulation occurred,” said Roy-Chowdhury.

A deep neural network is what AI researchers call computer systems that have been trained to do specific tasks, in this case, recognize altered images.

These networks are organized in connected layers; “architecture” refers to the number of layers and structure of the connections between them.
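The paper’s exact architecture is not described here, but as an illustration of what “connected layers” means in practice, the following is a minimal, purely hypothetical sketch of a small convolutional network that outputs a probability that an input image has been manipulated:

```python
# Minimal, illustrative sketch of a layered convolutional network that scores
# an image as manipulated or not. The layer sizes are hypothetical and do not
# reflect the architecture used in the paper.
import torch
import torch.nn as nn

model = nn.Sequential(                            # layers connected in sequence
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn low-level pixel patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn higher-level structure
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),                             # single "manipulated" score
    nn.Sigmoid(),                                 # turn the score into a probability
)

image = torch.rand(1, 3, 128, 128)                # dummy RGB image
prob_manipulated = model(image)
print(prob_manipulated.item())
```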

When an object is inserted into or removed from an image, the change might fool the naked eye, but when examined pixel by pixel, the boundaries of the inserted object are different.

For example, they are often smoother than those of natural objects.


By detecting boundaries of inserted and removed objects, a computer should be able to identify altered images.
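As a toy illustration of that pixel-level idea (not the method from the paper), one could compare how sharply intensity changes along a suspected boundary; a spliced-in region often shows weaker gradients than a natural edge. The mask and image below are made up for the example:

```python
# Simplified illustration of checking boundary "smoothness" at the pixel level.
# This is not the paper's method, just a toy gradient-based check.
import numpy as np

def boundary_sharpness(image, boundary_mask):
    """Mean gradient magnitude over the pixels marked as a boundary."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude[boundary_mask].mean()

# Toy grayscale image: a bright square pasted onto a dark background.
image = np.zeros((64, 64))
image[20:40, 20:40] = 200.0

# Hypothetical mask marking the pasted square's outline.
mask = np.zeros_like(image, dtype=bool)
mask[20:40, 20] = mask[20:40, 39] = True
mask[20, 20:40] = mask[39, 20:40] = True

print(boundary_sharpness(image, mask))  # higher values = sharper boundary
```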

The researchers tested the neural network with a set of images it had never seen before, and it detected the altered ones most of the time. It even spotted the manipulated region.

“If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another,” explained Roy-Chowdhury in a paper published in the journal IEEE Transactions on Image Processing.

“The more fundamental challenge is probably figuring out whether a frame in a video is manipulated or not.”


Even a single manipulated frame would raise a red flag.

Roy-Chowdhury, however, thinks we still have a long way to go before automated tools can detect “deepfake” videos in the wild.

“This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defense mechanisms, but then the attacker also finds better mechanisms.” (IANS)