Researchers Develop AI-driven System to Curb ‘Deepfake’ Videos

Artificial Intelligence Bot. Pixabay

At a time when “deepfake” videos have become a new threat to users’ privacy, a team of Indian-origin researchers has developed an Artificial Intelligence (AI)-driven deep neural network that can identify manipulated images at the pixel level with high precision.

Realistic videos that map the facial expressions of one person onto those of another, known as “deepfakes”, present a formidable political weapon in the hands of nation-state bad actors.

Led by Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside, the team has so far worked on still images, but the same approach could help them detect “deepfake” videos.

“We trained the system to distinguish between manipulated and nonmanipulated images and now if you give it a new image, it is able to provide a probability that that image is manipulated or not, and to localize the region of the image where the manipulation occurred,” said Roy-Chowdhury.

A deep neural network is what AI researchers call computer systems that have been trained to do specific tasks, in this case to recognize altered images.

These networks are organized in connected layers; “architecture” refers to the number of layers and structure of the connections between them.
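
To make the idea concrete, here is a minimal Python sketch (using the PyTorch library) of what such a network could look like: a few convolutional layers followed by two outputs, a probability that the whole image is manipulated and a per-pixel map localizing the suspected region. It is an illustration of the general approach, not the architecture described in the paper.

```python
# Illustrative toy model only; the published architecture differs.
import torch
import torch.nn as nn

class ManipulationDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # "Layers" of the network: each convolution looks at small pixel
        # neighborhoods, which is where splice artifacts tend to show up.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pixel_head = nn.Conv2d(32, 1, kernel_size=1)  # scores every pixel
        self.image_head = nn.Linear(32, 1)                 # scores the whole image

    def forward(self, x):
        h = self.features(x)                               # (B, 32, H, W)
        pixel_map = torch.sigmoid(self.pixel_head(h))      # per-pixel manipulation probability
        pooled = h.mean(dim=(2, 3))                        # global average pooling -> (B, 32)
        image_prob = torch.sigmoid(self.image_head(pooled))
        return image_prob, pixel_map

model = ManipulationDetector()
prob, mask = model(torch.rand(1, 3, 224, 224))             # a stand-in RGB image
```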

While an inserted or removed object might fool the naked eye, when the image is examined pixel by pixel, the boundaries of the inserted object are different.

For example, they are often smoother than the boundaries of natural objects.

By detecting boundaries of inserted and removed objects, a computer should be able to identify altered images.
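
As a rough, hypothetical illustration of that pixel-level check (not the method used in the study), the snippet below estimates how sharp an image is along a given set of boundary pixels; a splice boundary smoothed during compositing would tend to score lower than a natural edge.

```python
import numpy as np

def boundary_sharpness(gray_image, boundary_pixels):
    """Mean gradient magnitude at the given (row, col) boundary locations."""
    gy, gx = np.gradient(gray_image.astype(float))    # pixel-to-pixel intensity changes
    magnitude = np.hypot(gx, gy)
    return float(np.mean([magnitude[r, c] for r, c in boundary_pixels]))

# A boundary that is markedly less sharp than other edges in the same image
# is one hint that an object may have been pasted in.
```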

The researchers tested the neural network with a set of images it had never seen before, and it detected the altered ones most of the time. It even spotted the manipulated region.

“If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another,” explained Roy-Chowdhury, whose team’s study was published in the journal IEEE Transactions on Image Processing.

“The more fundamental challenge is probably figuring out whether a frame in a video is manipulated or not.”

Even a single manipulated frame would raise a red flag.
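
A minimal sketch of that frame-by-frame idea, with a hypothetical detect_manipulation function standing in for a trained still-image detector like the one described above:

```python
def video_is_suspect(frames, detect_manipulation, threshold=0.9):
    """Flag a video if any single frame scores as likely manipulated."""
    for i, frame in enumerate(frames):
        if detect_manipulation(frame) > threshold:    # probability this frame is manipulated
            return True, i                            # even one bad frame raises a red flag
    return False, None
```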

Roy-Chowdhury, however, thinks we still have a long way to go before automated tools can detect “deepfake” videos in the wild.

“This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defense mechanisms, but then the attacker also finds better mechanisms.” (IANS)

New AI can Reduce Risk of Suicide Among Youth

AI can help prevent suicide among youth

Researchers from USC developed an AI that can help reduce suicide risk among youth. Lifetime Stock

In a bid to help mitigate the risk of suicide, especially among homeless youth, a team of researchers at the University of Southern California (USC) has turned its focus towards Artificial Intelligence (AI).

Phebe Vayanos, an associate director at USC’s Center for Artificial Intelligence in Society (CAIS), and her team have spent the last couple of years designing an algorithm that identifies which people in a given real-life social group would be the best candidates to train as “gatekeepers”, able to recognize warning signs of suicide and respond appropriately.

“Our idea was to leverage real-life social network information to build a support network of strategically positioned individuals that can ‘watch-out’ for their friends and refer them to help as needed,” Vayanos said.

Vayanos and the study’s lead author, Aida Rahmattalabi, investigated the potential of social connections such as friends, relatives and acquaintances to help mitigate the risk of suicide.

“We want to ensure that a maximum number of people are being watched out for, taking into account resource limitations and uncertainties of open world deployment,” Vayanos said.
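
The underlying selection problem can be pictured as a coverage problem. The sketch below is a plain greedy illustration of that idea, not the formulation in the paper, which also accounts for uncertainty and fairness: given a friendship network and a limited number of training slots, pick the people whose training would leave the fewest youth without a trained contact.

```python
def choose_gatekeepers(friends, k):
    """friends: dict mapping each person to the set of people they can watch out for."""
    covered, chosen = set(), []
    for _ in range(min(k, len(friends))):
        # Greedily pick the untrained person who would newly cover the most people.
        best = max((p for p in friends if p not in chosen),
                   key=lambda p: len(friends[p] - covered))
        chosen.append(best)
        covered |= friends[best]
    return chosen, covered

# Example: with two training slots, training E and A watches out for five youth.
network = {"A": {"B", "C"}, "B": {"A", "D"}, "E": {"D", "F", "G"}}
gatekeepers, watched = choose_gatekeepers(network, k=2)
```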

The AI algorithm can improve the efficiency of suicide prevention trainings. Lifetime Stock

For this study, Vayanos and Rahmattalabi looked at the web of social relationships of young people experiencing homelessness in Los Angeles, given that 1 in 2 youth who are homeless have considered suicide.

“Our algorithm can improve the efficiency of suicide prevention trainings for this particularly vulnerable population,” Vayanos said.

An important goal when deploying this AI system is to ensure fairness and transparency.

“This algorithm can help us find a subset of people in a social network that gives us the best chance that youth will be connected to someone who has been trained when dealing with resource constraints and other uncertainties,” said study co-author Anthony Fulginiti.

This work is particularly important for vulnerable populations, the researchers say, especially youth experiencing homelessness.

The paper is set to be presented at the 33rd Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, Canada, this month. (IANS)