Saturday December 7, 2019

Facebook Unveils Three-pronged Strategy to Fight Fake News

Apart from this, Facebook is also using machine learning to help its teams detect fraud and enforce its policies against spam


To stop false news from spreading on its platform, Facebook has said it has put in place a three-pronged strategy that comprises removing accounts and content that violate its policies, reducing the distribution of inauthentic content, and informing people by giving them more context on the posts they see.

Another part of its strategy in some countries is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts on Facebook, Tessa Lyons, a Facebook product manager on News Feed focused on false news, said in a statement on Thursday.

The social media giant is facing criticism for its role in enabling political manipulation in several countries around the world. It has also come under the scanner for allegedly fuelling ethnic conflicts owing to its failure to stop the deluge of hate-filled posts against the disenfranchised Rohingya Muslim minority in Myanmar.

Representational image. Pixabay

“False news is bad for people and bad for Facebook. We’re making significant investments to stop it from spreading and to promote high-quality journalism and news literacy,” Lyons said.

Facebook CEO Mark Zuckerberg on Tuesday told European Parliament leaders that the social networking giant is trying to plug loopholes across its services, including curbing fake news and political interference on its platform, ahead of upcoming elections globally, including in India.

Lyons said Facebook’s three-pronged strategy roots out the bad actors that frequently spread fake stories.


“It dramatically decreases the reach of those stories. And it helps people stay informed without stifling public discourse,” Lyons added.

Although false news does not violate Facebook’s Community Standards, it often violates the social network’s policies in other categories, such as spam, hate speech or fake accounts, which it removes.

“For example, if we find a Facebook Page pretending to be run by Americans that’s actually operating out of Macedonia, that violates our requirement that people use their real identities and not impersonate others. So we’ll take down that whole Page, immediately eliminating any posts they made that might have been false,” Lyons explained.

Lyons said Facebook’s three-pronged strategy roots out the bad actors that frequently spread fake stories. Pixabay

Apart from this, Facebook is also using machine learning to help its teams detect fraud and enforce its policies against spam.

“We now block millions of fake accounts every day when they try to register,” Lyons added.

A lot of the misinformation that spreads on Facebook is financially motivated, much like email spam in the 1990s, the social network said.

If spammers can get enough people to click on fake stories and visit their sites, they will make money off the ads they show.


“We’re figuring out spammers’ common tactics and reducing the distribution of those kinds of stories in News Feed. We’ve started penalizing clickbait, links shared more frequently by spammers, and links to low-quality web pages, also known as ‘ad farms’,” Lyons said.

“We also take action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution,” Lyons said.

Facebook said it does not want to make money off of misinformation or help those who create it profit, and so such publishers are not allowed to run ads or use its monetisation features like Instant Articles. (IANS)


Fake News Spreads Like Wildfire On Social Media

Misinformation can stoke political polarisation and undermine democracy

The researchers noted that efforts to curtail misinformation typically focus on helping people distinguish fact from fiction. Pixabay

Researchers, including one of Indian origin, have found that people who repeatedly encounter a fake news item may consider it less unethical to share on social media, even when they don’t believe the information, according to a new study.

In a series of experiments involving more than 2,500 people, the study, published in the journal Psychological Science, found that seeing a fake headline just once leads individuals to temper their disapproval of the misinformation when they see it a second, third, or fourth time.

“The findings have important implications for policymakers and social media companies trying to curb the spread of misinformation online,” said study researcher Daniel A. Effron from the London Business School.

“We suggest that efforts to fight misinformation should consider how people judge the morality of spreading it, not just whether they believe it,” Effron added.

Across five experiments, Effron and researcher Medha Raj asked online survey participants to rate how unethical or acceptable they thought it would be to publish a fake headline, and how likely they would be to “like”, share, and block or unfollow the person who posted it.

As they expected, the researchers found that participants rated headlines they had seen more than once as less unethical to publish than headlines they were shown for the first time.


Participants also said they were more likely to ‘like’ and share a previously seen headline and less likely to block or unfollow the person who posted it.

What’s more, they did not rate the previously seen headline as significantly more accurate than the new ones, the researchers said.

The researchers noted that efforts to curtail misinformation typically focus on helping people distinguish fact from fiction.

Facebook, for example, has tried informing users when they try to share news that fact-checkers have flagged as false.

But such strategies may fail if users feel more comfortable sharing misinformation they know is fake when they have seen it before.

The researchers theorise that repeating misinformation lends it a ‘ring of truthfulness’ that can increase people’s tendency to give it a moral pass, regardless of whether they believe it.


“The results should be of interest to citizens of contemporary democracies,” Effron said.

“Misinformation can stoke political polarisation and undermine democracy, so it is important for people to understand when and why it spreads,” Effron added. (IANS)