Tuesday July 23, 2019

New AI-based System May Counter Online Dating Frauds

In these scams, fraudsters target users of dating websites and apps, 'groom' them and then ask for gifts of money or loans which will never be returned, they noted

Artificial Intelligence Bot. Pixabay

Researchers have developed an Artificial Intelligence (AI)-based system to restrict fake profiles designed to con people on dating apps and websites.

The computing algorithms have been designed specifically to understand what fake dating profiles look like and then to apply this knowledge to scan profiles submitted to online dating services.

The algorithms, part of a wide-ranging research project, have the capability to ‘think’ like humans to pinpoint fake profiles, said the researchers, including Tom Sorell, Professor at the University of Warwick in the UK.

“Using AI techniques to help reveal suspicious activity could be a game-changer that makes detection and prevention quicker, easier and more effective, ensuring that people can use dating sites with much more confidence in future,” said Sorell in a statement released on Tuesday by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation, a non-departmental public body funded by a grant-in-aid from the UK government.

In testing, the research team found that the algorithms produced a very low false-positive rate (the proportion of genuine profiles mistakenly flagged as fake) of around 1 per cent.
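A false-positive rate is simply the share of genuine profiles wrongly flagged as fake. The short Python sketch below, using invented counts purely for illustration, shows how a figure of around 1 per cent would be computed:

```python
# Illustrative only: the counts are invented, not the study's data.

def false_positive_rate(genuine_flagged_as_fake, genuine_total):
    """Share of genuine profiles mistakenly flagged as fake."""
    return genuine_flagged_as_fake / genuine_total

# Example: 10 of 1,000 genuine profiles wrongly flagged -> 1 per cent.
print(false_positive_rate(10, 1_000))  # 0.01
```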

A man uses the dating app Tinder in New Delhi, India. (VOA)

The new algorithms automatically look out for suspicious signs inadvertently included by fraudsters in the demographic information, the images and the self-descriptions that make up profiles, and reach an overall conclusion about the probability of each individual profile being fake.
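The researchers have not disclosed the model itself, but as a rough, hypothetical sketch of the general approach, the Python example below combines per-component ‘suspicion scores’ for demographics, photos and self-description text into a single probability that a profile is fake (all scores and labels here are invented):

```python
# Hypothetical sketch, not the researchers' system: per-component scores
# (demographics, photo, self-description text) are combined by a simple
# logistic-regression classifier into one probability that a profile is fake.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: one row per profile, columns are component scores in [0, 1].
X_train = np.array([
    [0.1, 0.2, 0.1],   # plausible demographics, ordinary photo, ordinary text
    [0.9, 0.8, 0.7],   # implausible demographics, stock-like photo, scripted text
    [0.2, 0.1, 0.3],
    [0.8, 0.9, 0.9],
])
y_train = np.array([0, 1, 0, 1])   # 0 = genuine, 1 = fake (invented labels)

model = LogisticRegression().fit(X_train, y_train)

# Score a new profile: the output is the probability it is fake.
new_profile = np.array([[0.7, 0.9, 0.6]])
p_fake = model.predict_proba(new_profile)[0, 1]
print(f"Probability the profile is fake: {p_fake:.2f}")
```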

According to the researchers, the aim now is to refine the technique further so that dating services can begin adopting it within the next couple of years, helping them prevent scammers from posting profiles.


With Valentine’s Day approaching, the news that these AI capabilities have the potential to help thwart so-called ‘rom-con’ scams will be very welcome to the millions of people who use online dating services in the UK and worldwide, the researchers said.

In these scams, fraudsters target users of dating websites and apps, ‘groom’ them and then ask for gifts of money or loans which will never be returned, they noted. (IANS)


Researchers Develop AI-driven System to Curb ‘Deepfake’ Videos

Roy-Chowdhury, however, thinks we still have a long way to go before automated tools can detect “deepfake” videos in the wild


At a time when “deepfake” videos are emerging as a new threat to users’ privacy, a team of Indian-origin researchers has developed an Artificial Intelligence (AI)-driven deep neural network that can identify manipulated images at the pixel level with high precision.

Realistic videos that map the facial expressions of one person onto those of another, known as “deepfakes”, present a formidable political weapon in the hands of nation-state bad actors.

Led by Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside, the team is currently working on still images, but the approach could also help detect “deepfake” videos.

“We trained the system to distinguish between manipulated and nonmanipulated images and now if you give it a new image, it is able to provide a probability that that image is manipulated or not, and to localize the region of the image where the manipulation occurred,” said Roy-Chowdhury.

A deep neural network is what AI researchers call a computer system that has been trained to do a specific task, in this case, to recognize altered images.

These networks are organized in connected layers; “architecture” refers to the number of layers and structure of the connections between them.
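As a loose illustration only, and not the authors’ actual architecture, the PyTorch sketch below wires a few convolutional layers to two outputs: an image-level probability that a picture has been manipulated, and a per-pixel map localising the suspect region:

```python
# Rough sketch (assumed architecture, not the published one): a tiny
# convolutional network with two heads, showing how layered "architecture",
# an image-level probability and pixel-level localisation fit together.
import torch
import torch.nn as nn

class TinyForensicsNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Connected layers; the "architecture" is their number and wiring.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Head 1: per-pixel probability that each pixel was manipulated.
        self.pixel_head = nn.Conv2d(32, 1, kernel_size=1)
        # Head 2: single probability that the whole image was manipulated.
        self.image_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        h = self.features(x)
        pixel_map = torch.sigmoid(self.pixel_head(h))   # localisation map
        p_image = torch.sigmoid(self.image_head(h))      # overall probability
        return p_image, pixel_map

net = TinyForensicsNet()
p, mask = net(torch.rand(1, 3, 64, 64))   # one random 64x64 RGB image
print(p.shape, mask.shape)                # (1, 1) and (1, 1, 64, 64)
```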

Manipulation typically involves inserting or removing objects in an image. While this might fool the naked eye, when examined pixel by pixel, the boundaries of the inserted object look different.

For example, they are often smoother than the boundaries of natural objects.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society.” (VOA)

By detecting boundaries of inserted and removed objects, a computer should be able to identify altered images.
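As a simplified, hypothetical illustration of that idea, and not the published method, the Python sketch below compares gradient strength along a smoothly blended ‘inserted’ boundary with that of a natural, abrupt edge in a toy image:

```python
# Simplified illustration (toy image, invented masks): a feathered splice
# boundary produces weaker pixel-to-pixel gradients than a natural sharp edge.
import numpy as np

def edge_sharpness(image, boundary_mask):
    """Mean gradient magnitude over the pixels selected by boundary_mask."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)[boundary_mask].mean()

# Toy grayscale image: a feathered "inserted" edge and a natural sharp edge.
img = np.zeros((64, 64))
img[:, 8:16] = np.linspace(0, 128, 8)   # smooth transition into inserted region
img[:, 16:32] = 128                     # interior of the inserted region
img[:, 48:] = 255                       # natural, abrupt object boundary

splice_edge = np.zeros_like(img, dtype=bool)
splice_edge[:, 8:16] = True             # pixels on the suspected splice boundary
natural_edge = np.zeros_like(img, dtype=bool)
natural_edge[:, 47:49] = True           # pixels on the natural boundary

print(edge_sharpness(img, splice_edge))   # noticeably smaller...
print(edge_sharpness(img, natural_edge))  # ...than on the natural boundary
```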

The researchers tested the neural network with a set of images it had never seen before, and it detected the altered ones most of the time. It even spotted the manipulated region.

“If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another,” explained Roy-Chowdhury, whose team described the work in a paper published in the journal IEEE Transactions on Image Processing.

“The more fundamental challenge is probably figuring out whether a frame in a video is manipulated or not”.


Even a single manipulated frame would raise a red flag.

Roy-Chowdhury, however, thinks we still have a long way to go before automated tools can detect “deepfake” videos in the wild.

“This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defense mechanisms, but then the attacker also finds better mechanisms.” (IANS)