
Here’s Why LinkedIn Relies on Users, Not AI, for Removing Fake Profiles

However, it appears that LinkedIn relies more on users than on its AI and ML solutions to keep its platform sanitised

As on Twitter, members can find the day's top stories on the LinkedIn mobile app by tapping inside the search bar. Pixabay

In February, Indian Administrative Service (IAS) officer B. Chandrakala found a fake LinkedIn account running in her name. After a case was registered under the Information Technology (IT) Act, the police swung into action and got LinkedIn to shut down the fake account.

Under investigation by the Central Bureau of Investigation (CBI) in an illegal mining case in Uttar Pradesh, Chandrakala was shocked to find the fake account being run in her name on LinkedIn, using her photograph and designation to publish objectionable, obscene content.

Beyond fake accounts, there have been several cases of fraudsters impersonating staffing agencies on the LinkedIn platform and of people maintaining duplicate and fake profiles.

The goal of such people, according to Bruce Johnston, a well-known LinkedIn sales and marketing consultant, is to harvest email addresses from connections and to carry out identity theft, phishing, spear phishing, impersonation and other scams.

LinkedIn, which has over 54 million users in India, its fastest-growing market outside the US, claims it is good at stamping out fake profiles once they are identified.

But the real challenge is to identify such profiles in the first place, using the Artificial Intelligence (AI)-enabled algorithms the company has invested in heavily, in order to weed out bad actors quickly and proactively, without waiting for users to flag such content.

Human-centric AI and Machine Learning (ML) are helping Facebook, Twitter and Google, to a great extent, stamp out bad content, terror-related posts, political interference, misinformation, abuse and several other kinds of inauthentic behaviour even before users flag them.

“LinkedIn is pretty good at stamping out fake profiles once they are identified. But as fake profiles can be replaced just as quickly as they are detected and stamped out, this is a real problem,” wrote Johnston in a blog post some time back.

LinkedIn reports that the number of Human Resource (HR) analytics professionals in India grew by nearly 80 per cent in the past five years. Pixabay

LinkedIn does not have a satisfactory answer when it comes to identifying a person who is between jobs, or who has joined another organisation but keeps an old profile on LinkedIn.

“Members come to LinkedIn to connect with their community, learn from each other and access opportunity. The best way to do that is to keep their profile updated, including sharing news and insights,” says the Microsoft-owned platform.

LinkedIn gives users the option to flag inappropriate or fake profiles on its platform: profiles that contain profanity, empty profiles with fake names, or profiles impersonating public figures.

The company told IANS that while there may be multiple reasons why members take more time to update their profiles, it is possible for other members to report inaccurate information.


“We take each report very seriously and our team reviews each case individually. If the information is inaccurate, we take action, which can include removing the content,” said a LinkedIn spokesperson.

Specifically on fake accounts, LinkedIn said it investigates suspected violations of its Terms of Service, including the creation of false profiles, and takes immediate action when violations are uncovered.

“If members use multiple email addresses to log into LinkedIn, this can lead to duplicate accounts. LinkedIn has tools in place to check for such instances and notify members to merge the duplicate accounts,” informed the company.
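
LinkedIn has not disclosed how those tools work, but a minimal sketch of the general idea, using a hypothetical matching rule on profile fields (the field names and rule here are assumptions, not LinkedIn's actual logic), might look like this:

```python
# Hypothetical duplicate-account check: accounts that share a name and
# employer but were registered with different email addresses become
# merge candidates. This is an illustration, not LinkedIn's tooling.
from collections import defaultdict

def find_merge_candidates(accounts):
    groups = defaultdict(set)
    for acc in accounts:
        key = (acc["name"].strip().lower(), acc["company"].strip().lower())
        groups[key].add(acc["email"].lower())
    # Two or more distinct emails behind one identity: notify the member.
    return {key: emails for key, emails in groups.items() if len(emails) > 1}

accounts = [
    {"name": "Jane Doe", "company": "Acme", "email": "jane@work.example"},
    {"name": "Jane Doe", "company": "Acme", "email": "jane@home.example"},
]
print(find_merge_candidates(accounts))  # flags the two Jane Doe accounts
```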

However, it appears that LinkedIn relies more on users than on its AI and ML solutions to keep its platform sanitised. (IANS)


Researchers Develop AI-driven System to Curb ‘Deepfake’ Videos

Roy-Chowdhury, however, thinks we still have a long way to go before automated tools can detect “deepfake” videos in the wild

Artificial Intelligence Bot. Pixabay

At a time when “deepfake” videos have emerged as a new threat to users’ privacy, a team of Indian-origin researchers has developed an Artificial Intelligence (AI)-driven deep neural network that can identify manipulated images at the pixel level with high precision.

Realistic videos that map the facial expressions of one person onto those of another, known as “deepfakes”, present a formidable political weapon in the hands of nation-state bad actors.

Led by Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside, the team is currently working on still images, but the work could also help detect “deepfake” videos.

“We trained the system to distinguish between manipulated and nonmanipulated images and now if you give it a new image, it is able to provide a probability that that image is manipulated or not, and to localize the region of the image where the manipulation occurred,” said Roy-Chowdhury.

A deep neural network is what AI researchers call computer systems that have been trained to do specific tasks, in this case, recognize altered images.

These networks are organized in connected layers; “architecture” refers to the number of layers and structure of the connections between them.
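
The paper does not spell out the exact network, but a toy sketch of such a layered network, written in PyTorch with illustrative layer counts and sizes (assumptions for exposition, not the researchers' architecture), shows how one model can output both an image-level manipulation probability and a pixel-level localisation map:

```python
# A minimal sketch, NOT the researchers' actual architecture: layer
# counts and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class ForgeryDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # The "architecture": a stack of connected convolutional layers.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution scores every pixel; pooling that map
        # yields an image-level score.
        self.pixel_head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, x):
        logits = self.pixel_head(self.features(x))    # (N, 1, H, W)
        pixel_probs = torch.sigmoid(logits)           # where is it manipulated?
        image_prob = pixel_probs.flatten(1).mean(1)   # is it manipulated at all?
        return image_prob, pixel_probs

detector = ForgeryDetector()
image = torch.rand(1, 3, 224, 224)  # a dummy RGB image
prob, heatmap = detector(image)
print(f"manipulation probability: {prob.item():.3f}")
```

Trained on labelled examples, such a network learns to produce both outputs Roy-Chowdhury describes: a probability that the image is manipulated and a localisation of the manipulated region.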

While a spliced image might fool the naked eye, when examined pixel by pixel, the boundaries of the inserted object look different.

For example, they are often smoother than the boundaries of natural objects.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society.” VOA

By detecting boundaries of inserted and removed objects, a computer should be able to identify altered images.
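
As a rough illustration of that idea (an assumption made here for exposition, not the method from the paper), one could compare the gradient magnitude along a candidate region's boundary with the sharpness of the image's other edges; an unusually smooth boundary would flag a possible insertion:

```python
# Hypothetical smoothness check on a candidate region's boundary;
# how to threshold the score is left to the caller.
import numpy as np

def boundary_smoothness(gray: np.ndarray, mask: np.ndarray) -> float:
    """Mean gradient magnitude along the boundary of a binary mask."""
    gy, gx = np.gradient(gray.astype(float))
    grad_mag = np.hypot(gx, gy)
    # Boundary pixels: mask pixels with at least one neighbour outside it.
    edge = np.zeros_like(mask)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        edge |= np.roll(mask, shift, axis=(0, 1)) != mask
    return float(grad_mag[edge & mask].mean())

# Dummy usage: score a 20x20 candidate region in a random image.
gray = np.random.rand(64, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
print(boundary_smoothness(gray, mask))
```

A score well below the gradient magnitude typical of the image's natural edges would mark the region as suspiciously smooth.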

The researchers tested the neural network with a set of images it had never seen before, and it detected the altered ones most of the time. It even spotted the manipulated region.

“If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another,” explained Roy-Chowdhury in a paper published in the journal IEEE Transactions on Image Processing.

“The more fundamental challenge is probably figuring out whether a frame in a video is manipulated or not”.


Even a single manipulated frame would raise a red flag.

Roy-Chowdhury, however, thinks we still have a long way to go before automated tools can detect “deepfake” videos in the wild.

“This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defense mechanisms, but then the attacker also finds better mechanisms.” (IANS)