Saturday December 7, 2019

Facebook Shares Data on Child Nudity, Terrorism, Drug Sales on Instagram

On spread of hate speech on its platforms, Facebook said it can detect such harmful content before people report it and, sometimes, before anyone sees it

[Image: The Facebook app displayed on Apple's App Store, July 30, 2019. VOA]

Facebook has for the first time shared data on how it takes action against child nudity and child sexual exploitation, terrorist propaganda, illicit firearm and drug sales, and suicide and self-injury content on its photo-sharing app Instagram.

In Q2 2019, Facebook removed about 512,000 pieces of content related to child nudity and child sexual exploitation on Instagram.

“In Q3 (July-September period), we saw greater progress and removed 754,000 pieces of content, of which 94.6 per cent we detected proactively,” Guy Rosen, VP Integrity, said in a statement on Wednesday.

Instagram, like Facebook, has become a platform for such content.

“For child nudity and sexual exploitation of children, we made improvements to our processes for adding violations to our internal database in order to detect and remove additional instances of the same content shared on both Facebook and Instagram,” Rosen explained.

In its “Community Standards Enforcement Report, November 2019,” the social networking platform said its rate of detecting and removing content associated with Al Qaeda, ISIS and their affiliates on Facebook remains above 99 per cent.

“The rate at which we proactively detect content affiliated with any terrorist organisation on Facebook is 98.5 per cent and on Instagram is 92.2 per cent,” the company said.

[Image: FILE – The Instagram icon displayed on a mobile screen in Los Angeles. VOA]

In the area of suicide and self-injury, Facebook took action on about 2 million pieces of content in Q2 2019.

“We saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3 per cent we detected proactively.

“On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8 per cent we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1 per cent we detected proactively,” said Rosen.

In Q3 2019, Facebook removed about 4.4 million pieces of drug sale content and about 2.3 million pieces of firearm sales content.


On Instagram, the company removed about 1.5 million pieces of drug sale content and about 58,600 pieces of firearm sales content in the same period.

On the spread of hate speech on its platforms, Facebook said it can detect such harmful content before people report it and, sometimes, before anyone sees it.

“With these evolutions in our detection systems, our proactive rate has climbed to 80 per cent, from 68 per cent in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy,” said Rosen. (IANS)


Social Media Giant Facebook Sues Chinese Company Over Alleged Ad Fraud

According to a report in CNET, Facebook said it has paid more than $4 million in reimbursements to victims of these hacks

[Image: An iPhone displays the Facebook app in New Orleans, Aug. 11, 2019. VOA]

Facebook has sued a Chinese company for allegedly tricking people into installing malware, compromising people’s accounts and then using them to run deceptive ads.

Facebook blamed ILikeAd Media International Company Ltd. and two individuals associated with the company, Chen Xiao Cong and Huang Tao, for the fraud.

The defendants deceived people into installing malware available on the Internet. This malware then enabled the defendants to compromise people’s Facebook accounts and run deceptive ads promoting items such as counterfeit goods and diet pills, the social media giant said in a blog post.

The defendants sometimes used images of celebrities in their ads to entice people to click on them, a practice known as “celeb bait”, according to the lawsuit filed on Wednesday.

In some instances, the defendants also engaged in a practice known as cloaking, Facebook said.


“Through cloaking, the defendants deliberately disguised the true destination of the link in the ad by displaying one version of an ad’s landing page to Facebook’s systems and a different version to Facebook users,” said Facebook’s Jessica Romero, Director of Platform Enforcement and Litigation, and Rob Leathern, Director of Product Management, Business Integrity.

Cloaking schemes are often sophisticated and well organised, making the individuals and organisations behind them difficult to identify and hold accountable.


As a result, there have not been many legal actions of this kind.

“In this case, we have refunded victims whose accounts were used to run unauthorised ads and helped them to secure their accounts,” they wrote.

According to a report in CNET, Facebook said it has paid more than $4 million in reimbursements to victims of these hacks. (IANS)