Monday September 23, 2019

Facebook Tackles Child Nudity By Removing Posts

NCMEC said it is working with Facebook to develop software to decide which tips to assess first.


Facebook Inc said on Wednesday that company moderators during the last quarter removed 8.7 million user images of child nudity with the help of previously undisclosed software that automatically flags such photos.

The machine learning tool rolled out over the last year identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualized context.

A similar system also disclosed Wednesday catches users engaged in “grooming,” or befriending minors for sexual exploitation.

Facebook’s global head of safety Antigone Davis told Reuters in an interview that the “machine helps us prioritize” and “more efficiently queue” problematic content for the company’s trained team of reviewers.

This photo shows a Facebook app icon on a smartphone in New York. VOA

The company is exploring applying the same technology to its Instagram app.

Under pressure from regulators and lawmakers, Facebook has vowed to speed up removal of extremist and illicit material.

Machine learning programs that sift through the billions of pieces of content users post each day are essential to its plan.

Machine learning is imperfect, and news agencies and advertisers are among those that have complained this year about Facebook’s automated systems wrongly blocking their posts.

Davis said the child safety systems would make mistakes but users could appeal.

“We’d rather err on the side of caution with children,” she said.

A protester wearing a mask with the face of Facebook founder Mark Zuckerberg is flanked by two fellow activists wearing angry face emoji masks, during a protest against Facebook policies, in London, Britain (From archives) VOA

Concerned about how others might abuse such images, Facebook has for years banned even family photos of lightly clothed children uploaded with “good intentions.”

Before the new software, Facebook relied on users or its adult nudity filters to catch child images. A separate system blocks child pornography or child nudity that has previously been reported to authorities.

Facebook has not previously disclosed data on child nudity removals, though some would have been counted among the 21 million posts and comments it removed in the first quarter for sexual activity and adult nudity.

Facebook said the program, which learned from its collection of nude adult photos and clothed children photos, has led to more removals. It makes exceptions for art and history, such as the Pulitzer Prize-winning photo of a naked girl fleeing a Vietnam War napalm attack.

Facebook’s head of global safety policy Antigone Davis speaks during an event at the White House. VOA

Protecting minors

The child grooming system evaluates factors such as how many people have blocked a particular user and whether that user quickly attempts to contact many children, Davis said.

Michelle DeLaune, chief operating officer at the National Center for Missing and Exploited Children (NCMEC), said the organization expects to receive about 16 million child porn tips worldwide this year from Facebook and other tech companies, up from 10 million last year.

With the increase, NCMEC said it is working with Facebook to develop software to decide which tips to assess first.

Still, DeLaune acknowledged that a crucial blind spot is encrypted chat apps and secretive “dark web” sites where much of new child pornography originates.


Encryption of messages on Facebook-owned WhatsApp, for example, prevents machine learning from analyzing them.

DeLaune said NCMEC would educate tech companies and “hope they use creativity” to address the issue. (VOA)


Social Networking Giant Facebook Suspends Several Apps Post-Cambridge Analytica Probe

Facebook has also removed a number of application programming interfaces (APIs), the channels that developers use to access various types of data

FILE - In this April 30, 2019, file photo, Facebook stickers are laid out on a table at F8, Facebook's developer conference in San Jose, Calif. VOA

Facebook has suspended thousands of apps associated with nearly 400 developers for a variety of reasons, as it continues to investigate suspicious apps after the Cambridge Analytica data scandal.

The social networking giant said it has not yet confirmed that these apps posed a threat to people.

“Many were not live but were still in their testing phase when we suspended them. It is not unusual for developers to have multiple test apps that never get rolled out.

“In many cases, the developers did not respond to our request for information so we suspended them, honouring our commitment to take action,” Facebook said in a blog post on Friday.

Facebook began its “App Developer Investigation” in March 2018 as part of its response to the Cambridge Analytica data scandal.

The company aimed to review all of the apps that had access to large amounts of information before it changed its platform policies in 2014.

“Our App Developer Investigation is by no means finished. But there is meaningful progress to report so far. To date, this investigation has addressed millions of apps,” Facebook said.

The social media application, Facebook is displayed on Apple’s App Store, July 30, 2019. VOA

In a few cases, Facebook has banned some apps completely.

“That can happen for any number of reasons including inappropriately sharing data obtained from us, making data publicly available without protecting people’s identity or something else that was in clear violation of our policies,” the company said.

In May, Facebook filed a lawsuit in California against Rankwave, a South Korean data analytics company that failed to cooperate with its investigation.


“We’ve also taken legal action against developers in other contexts. For example, we filed an action against LionMobi and JediMobi, two companies that used their apps to infect users’ phones with malware in a profit-generating scheme,” it added.

Facebook has also removed a number of application programming interfaces (APIs), the channels that developers use to access various types of data.

“We have clarified that we can suspend or revoke a developer’s access to any API that it has not used in the past 90 days. And we will not allow apps on Facebook that request a disproportionate amount of information from users relative to the value they provide,” the company said. (IANS)