Tuesday April 24, 2018

Snapchat trolls Facebook with Russian bot filter

Facebook has been facing intense criticism after it emerged that data of about 50 million users had been harvested and passed on to political consultancy firm Cambridge Analytica

  • Snapchat trolled Facebook this April Fool’s Day
  • The app did it with a filter mimicking a like from a Russian bot
  • Facebook is in deep water nowadays

Photo-sharing platform Snapchat on April Fools’ Day trolled Facebook by introducing a filter that makes it appear as if a Russian bot has liked your post.

The filter places a Facebook user interface around a user’s photo, with text resembling Cyrillic script, and even includes likes from “your mum” and “a bot”, The Verge reported late on Sunday.

Facebook has been in deep water since the leak of users’ data. Pixabay

Snapchat’s filter was only available on April Fools’ Day. It targeted Facebook following reports that more than 50,000 bots on Facebook, with links to the Russian government, were used to influence the 2016 US presidential election.

Also Read: Facebook shuts down accounts owned by Russia-based IRA

Last year, Snapchat trolled Facebook with a filter that showed a like from “my_mom”, a jab at Facebook’s older-skewing user base. IANS

Copyright 2018 NewsGram


Facebook Takes Action on Terror-Related Content

Facebook took action on 1.9 million pieces of terror-related content

Facebook. Pixabay

Facebook took action on 1.9 million pieces of content related to the Islamic State (IS) and Al Qaeda in the first quarter of 2018, twice as much as in the last quarter of 2017.

The key part is that Facebook found the vast majority of this content on its own.

“In Q1 2018, 99 per cent of the IS and Al Qaeda content we took action on was not user reported,” Monika Bickert, Vice President of Global Policy Management at Facebook, said in a blog post late on Monday.

“Taking action” means that Facebook removed the vast majority of this content and added a warning to a small portion that was shared for informational or counter speech purposes.

Facebook. Pixabay

“This number likely understates the total volume, because when we remove a profile, Page or Group for violating our policies, all of the corresponding content becomes inaccessible. But we don’t go back through to classify and label every individual piece of content that supported terrorism,” explained Brian Fishman, Global Head of Counterterrorism Policy at Facebook.

Facebook now has a counter-terrorism team of 200 people, up from 150 in June 2017.

Also Read: British Campaigner Sues Facebook Over Fake Ads

“We have built specialised techniques to surface and remove older content. Of the terrorism-related content we removed in Q1 2018, more than 600,000 pieces were identified through these mechanisms,” the blog post said.

“We’re under no illusion that the job is done or that the progress we have made is enough,” said Facebook.

“Terrorist groups are always trying to circumvent our systems, so we must constantly improve,” the company added.  IANS
