
Mass surveillance cannot stall terrorism completely: Snowden


By NewsGram Staff Writer

Edward Snowden, the American whistleblower who blew the lid on global surveillance programmes run by the National Security Agency (NSA), has said that such surveillance programmes can never prevent terrorism completely.

Speaking at the International Journalism Festival in Perugia on Friday, Snowden said, “Even the most extensive monitoring system would never be able to make us perfectly safe from terrorism.”

“Yet, mass surveillance is often used by intelligence agencies to spy on citizens regardless of whether a crime is being committed,” he added.

Snowden was welcomed as a chief guest by the audience on the third day of the festival, taking part in a debate via a video link broadcast by the Ansa news agency. The festival has gathered journalists and experts from all over the world since 2006.

Snowden long served the National Security Agency (NSA), the Central Intelligence Agency (CIA) and other American security agencies as a technology and cyber-security expert.

In 2013, Snowden made a host of revelatory disclosures about a “global surveillance apparatus” run by the United States in cooperation with Australia, Canada and the United Kingdom, disclosures that forced him to seek asylum outside the United States.

He is currently residing in an unknown location in Russia.


Facebook Shares Data on Child Nudity, Terrorism, Drug Sales on Instagram

On spread of hate speech on its platforms, Facebook said it can detect such harmful content before people report it and, sometimes, before anyone sees it

The social media application Facebook is displayed on Apple's App Store, July 30, 2019. VOA

Facebook has shared for the first time data on how it takes action against child nudity and child sexual exploitation, terrorist propaganda, illicit firearm and drug sales, and suicide and self-injury on its photo-sharing app Instagram.

In Q2 2019, Facebook removed about 512,000 pieces of content related to child nudity and child sexual exploitation on Instagram.

“In Q3 (July-September period), we saw greater progress and removed 754,000 pieces of content, of which 94.6 per cent we detected proactively,” Guy Rosen, VP Integrity, said in a statement on Wednesday.

Instagram, like Facebook, has also become a platform for such content.

“For child nudity and sexual exploitation of children, we made improvements to our processes for adding violations to our internal database in order to detect and remove additional instances of the same content shared on both Facebook and Instagram,” Rosen explained.

In its “Community Standards Enforcement Report, November 2019,” the social networking platform said its rate of detecting and removing content associated with Al Qaeda, ISIS and their affiliates on Facebook remains above 99 per cent.

“The rate at which we proactively detect content affiliated with any terrorist organisation on Facebook is 98.5 per cent and on Instagram is 92.2 per cent,” the company said.

FILE – The Instagram icon is displayed on a mobile screen in Los Angeles. VOA

In the area of suicide and self-injury, Facebook took action on about 2 million pieces of content in Q2 2019.

“We saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3 per cent we detected proactively.

“On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8 per cent we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1 per cent we detected proactively,” said Rosen.

In Q3 2019, Facebook removed about 4.4 million pieces of drug sale content. It removed about 2.3 million pieces of firearm sales content in the same period.


On Instagram, the company removed about 1.5 million pieces of drug sale content and 58,600 pieces of firearm sales content.

On spread of hate speech on its platforms, Facebook said it can detect such harmful content before people report it and, sometimes, before anyone sees it.

“With these evolutions in our detection systems, our proactive rate has climbed to 80 per cent, from 68 per cent in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy,” said Rosen. (IANS)