
Facebook Unveils Three-pronged Strategy to Fight Fake News

Facebook is using machine learning to help its teams detect fraud and enforce its policies against spam

To stop false news from spreading on its platform, Facebook has said it has put in place a three-pronged strategy: removing accounts and content that violate its policies, reducing the distribution of inauthentic content, and informing people by giving them more context on the posts they see.

Another part of its strategy in some countries is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts on Facebook, Tessa Lyons, a Facebook product manager on News Feed focused on false news, said in a statement on Thursday.

The social media giant is facing criticism for its role in enabling political manipulation in several countries around the world. It has also come under the scanner for allegedly fuelling ethnic conflict owing to its failure to stop the deluge of hate-filled posts against the disenfranchised Rohingya Muslim minority in Myanmar.

“False news is bad for people and bad for Facebook. We’re making significant investments to stop it from spreading and to promote high-quality journalism and news literacy,” Lyons said.

Facebook CEO Mark Zuckerberg on Tuesday told European Parliament leaders that the social networking giant is trying to plug loopholes across its services, including curbing fake news and political interference on its platform, ahead of upcoming elections globally, including in India.

Lyons said Facebook’s three-pronged strategy roots out the bad actors that frequently spread fake stories.

“It dramatically decreases the reach of those stories. And it helps people stay informed without stifling public discourse,” Lyons added.

Although false news does not violate Facebook’s Community Standards, it often violates the social network’s policies in other categories, such as spam, hate speech or fake accounts, which it does remove.

“For example, if we find a Facebook Page pretending to be run by Americans that’s actually operating out of Macedonia, that violates our requirement that people use their real identities and not impersonate others. So we’ll take down that whole Page, immediately eliminating any posts they made that might have been false,” Lyons explained.

Apart from this, Facebook is also using machine learning to help its teams detect fraud and enforce its policies against spam.

“We now block millions of fake accounts every day when they try to register,” Lyons added.
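
Facebook has not published details of how these machine-learning systems work. As a rough illustration only, the sketch below shows how a registration-time fake-account classifier might score a signup attempt; the features, training data and blocking threshold are invented for this example and are not Facebook’s actual signals.

    # A minimal, purely illustrative sketch of a registration-time
    # fake-account classifier. The features and training data below are
    # invented for demonstration; Facebook has not disclosed its signals.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per signup attempt:
    # [signups_from_same_ip_last_hour, email_domain_reputation_0_to_1,
    #  profile_fields_filled]
    X_train = [
        [1, 0.9, 5],   # typical legitimate signup
        [2, 0.8, 4],
        [40, 0.1, 1],  # burst from one IP, throwaway email domain
        [55, 0.2, 0],
    ]
    y_train = [0, 0, 1, 1]  # 1 = fake account

    model = LogisticRegression().fit(X_train, y_train)

    def should_block(signup, threshold=0.9):
        """Block the registration when the predicted fake probability is high."""
        return model.predict_proba([signup])[0][1] >= threshold

    print(should_block([48, 0.15, 0]))  # a burst-like signup: likely blocked
    print(should_block([1, 0.95, 6]))   # an ordinary signup: likely allowed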

A lot of the misinformation that spreads on Facebook is financially motivated, much like email spam in the 1990s, the social network said.

If spammers can get enough people to click on fake stories and visit their sites, they will make money off the ads they show.

“We’re figuring out spammers’ common tactics and reducing the distribution of those kinds of stories in News Feed. We’ve started penalizing clickbait, links shared more frequently by spammers, and links to low-quality web pages, also known as ‘ad farms’,” Lyons said.
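
The statement does not spell out how this reduced distribution works mechanically, but one common way to picture it is as a demotion of the ranking score that decides how widely News Feed shows a story. The sketch below is a guess at that idea only; the penalty factors and signal names are invented for illustration, not Facebook’s real system.

    # Illustrative only: demoting, rather than removing, stories that match
    # spam signals. The penalty factors below are invented for this example;
    # Facebook has not published the actual weights its ranking uses.

    CLICKBAIT_PENALTY = 0.3   # keep only 30% of the original reach
    AD_FARM_PENALTY = 0.1     # ad-farm links are demoted even harder

    def feed_score(base_score, is_clickbait=False, links_to_ad_farm=False):
        """Return a demoted ranking score instead of deleting the post."""
        score = base_score
        if is_clickbait:
            score *= CLICKBAIT_PENALTY
        if links_to_ad_farm:
            score *= AD_FARM_PENALTY
        return score

    stories = [
        ("local news report", feed_score(1.0)),
        ("clickbait headline", feed_score(1.0, is_clickbait=True)),
        ("clickbait ad-farm link", feed_score(1.0, is_clickbait=True,
                                              links_to_ad_farm=True)),
    ]
    for title, score in sorted(stories, key=lambda s: s[1], reverse=True):
        print(f"{score:.2f}  {title}")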

“We also take action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution,” she added.

Facebook said it does not want to make money off of misinformation or help those who create it profit, and so such publishers are not allowed to run ads or use its monetisation features like Instant Articles. (IANS)

Facebook Set up a War Room to Fight Election Interference

With the new ad architecture in place, people would be able to see who paid for a particular political ad

In line with its efforts to prevent misuse of its platform during elections, Facebook has set up a War Room to reduce the spread of potentially harmful content.

Facebook faced flak for not doing enough to prevent the spread of misinformation by Russia-linked accounts during the 2016 US presidential election. The social networking giant has since rolled out several initiatives to fight fake news and bring more transparency and accountability to its advertising.

The launch of the first War Room at its headquarters in Menlo Park, California, is part of the social network’s new initiatives to fight election interference on its platform.

Although Facebook opened the doors of the War Room ahead of the general elections in Brazil and mid-term elections in the US, it revealed the details only this week.

The goal behind setting up the War Room was to get the right subject-matter experts from across the company in one place so they can address potential problems identified by its technology in real time and respond quickly.

“The War Room has over two dozen experts from across the company – including from our threat intelligence, data science, software engineering, research, community operations and legal teams,” Samidh Chakrabarti, Facebook’s Director of Product Management, Civic Engagement, said in a statement on Thursday.

“These employees represent and are supported by the more than 20,000 people working on safety and security across Facebook,” Chakrabarti added.

Facebook said its dashboards offer real-time monitoring of key election issues, such as efforts to prevent people from voting, increases in spam, potential foreign interference, or reports of content that violates its policies.
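
Facebook has not described how those dashboards raise alerts. As a loose illustration of the real-time monitoring idea, the sketch below flags an hour in which, say, spam reports spike far above the recent baseline; the window size and multiplier are assumptions made up for this example.

    # Illustrative dashboard-style spike alert: flag an hour whose report
    # volume far exceeds the trailing average. Window and multiplier are
    # invented values; Facebook has not described its actual alerting.
    from statistics import mean

    def spike_alerts(hourly_counts, window=24, multiplier=3.0):
        """Yield (hour, count, baseline) where count exceeds
        multiplier x the trailing `window`-hour average."""
        for i in range(window, len(hourly_counts)):
            baseline = mean(hourly_counts[i - window:i])
            if hourly_counts[i] > multiplier * baseline:
                yield i, hourly_counts[i], baseline

    # 24 quiet hours of spam reports, then a sudden surge
    counts = [100] * 24 + [110, 950]
    for hour, count, baseline in spike_alerts(counts):
        print(f"hour {hour}: {count} reports vs ~{baseline:.0f} baseline")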

The War Room team also monitors news coverage and election-related activity across other social networks and traditional media in order to identify what type of content may go viral.

These preparations helped a lot during the first round of Brazil’s presidential elections, Facebook claimed.

The social networking giant said its technology detected a false post claiming that Brazil’s Election Day had been moved from October 7 to October 8 due to national protests.

While untrue, that message began to go viral. But the team quickly detected the problem, determined that the post violated Facebook’s policies, and removed it in under an hour.

“And within two hours, we’d removed other versions of the same fake news post,” Chakrabarti said.

The team in the War Room, Facebook said, also helped quickly remove hate speech posts that were designed to whip up violence against people from northeast Brazil after the first round of election results were called.

“The work we are doing in the War Room builds on almost two years of hard work and significant investments, in both people and technology, to improve security on Facebook, including during elections,” Chakrabarti said.

Earlier this month Facebook said that it was planning to set up a task force comprising “hundreds of people” ahead of the 2019 general elections in India.

“With the 2019 elections coming, we are pulling together a group of specialists to work together with political parties,” Richard Allan, Facebook’s Vice President for Global Policy Solutions, told the media in New Delhi.

Facebook has also set a goal of bringing a transparency feature for political ads, now available in the US and Brazil, to India by March next year, Allan said.

With the new ad architecture in place, people would be able to see who paid for a particular political ad. (IANS)