
Facebook, Google, Bing and Twitter Join The Trust Project to Help Users Combat Fake News

In their bid to combat fake news and help readers identify trustworthy news sources, Facebook, Google, Twitter and several media organisations have joined the non-partisan "The Trust Project"

To combat fake news, Facebook, Twitter and Google have joined "The Trust Project". (Pixabay)

San Francisco, Nov 19: In their bid to combat fake news and help readers identify trustworthy news sources, Facebook, Google, Twitter and several media organisations have joined the non-partisan “The Trust Project”.

“The Trust Project” is led by award-winning journalist Sally Lehrman of Santa Clara University’s Markkula Centre for Applied Ethics.

Starting Friday, an icon will appear next to articles in Facebook's News Feed.

When you click on the icon, you can read information on the organisations’ ethics and other standards, the journalists’ backgrounds, and how they do their work.

“Leading media companies representing dozens of news sites have begun to display ‘Trust Indicators’. These indicators, created by leaders from more than 75 news organisations, also show what type of information people are reading: news, opinion, analysis or advertising,” the university said in a statement.

Each indicator is signalled in the article and site code, providing the first standardised technical language for platforms to learn more from news sites about the quality and expertise behind journalists’ work.
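
One plausible way to do such signalling is structured data embedded in the article page. The sketch below is a minimal illustration, assuming a schema.org-style JSON-LD block; the specific property names and URLs (publishingPrinciples, ethicsPolicy, the example.org links) are illustrative assumptions, not the Trust Project's official vocabulary.

```python
import json

# Hypothetical sketch: expressing Trust-Indicator-style metadata as
# schema.org-style JSON-LD for an article page. Property names and URLs
# are illustrative assumptions, not the Trust Project's official markup.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "genre": "news",  # Type of Work: news, opinion, analysis or advertising
    "publishingPrinciples": "https://example.org/ethics-policy",  # Best Practices
    "author": {
        "@type": "Person",
        "name": "Jane Reporter",
        "sameAs": "https://example.org/staff/jane-reporter",  # Author Expertise
    },
    "publisher": {
        "@type": "NewsMediaOrganization",
        "name": "Example News",
        "ethicsPolicy": "https://example.org/ethics-policy",
        "correctionsPolicy": "https://example.org/corrections",
        "diversityPolicy": "https://example.org/diversity-policy",  # Diverse Voices
    },
}

# A publisher would serialise this into a <script type="application/ld+json">
# tag in the article's HTML so that platforms can read the indicators.
print(json.dumps(article_metadata, indent=2))
```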

“Google, Facebook, Bing and Twitter have all agreed to use the indicators and are investigating and piloting ideas about how best to use them to surface and display quality journalism,” the university said.

German press agency DPA, The Economist, The Globe and Mail, the Independent Journal Review, Mic, Italy’s La Repubblica and La Stampa, Trinity Mirror and The Washington Post are among the companies starting to go live with “Trust Indicators” this month.

The Institute for Nonprofit News has developed a WordPress plug-in to facilitate broader implementation by qualified publishers.

“An increasingly sceptical public wants to know the expertise, enterprise and ethics behind a news story. The Trust Indicators put tools into people’s hands, giving them the means to assess whether news comes from a credible source they can depend on,” Lehrman explained.

The eight core indicators are: Best Practices; Author Expertise; Type of Work; Citations and References; Methods; Locally Sourced; Diverse Voices and Actionable Feedback.

News organisations like the BBC and Hearst Television have collaborated in defining the “Trust Indicator” editorial and technical standards, and in developing the processes for implementing them.

“Quality journalism has never been more important,” said Richard Gingras, vice president of news products at Google.

“We hope to use the Type of Work indicator to improve the accuracy of article labels in Google News, and indicators such as Best Practices and Author Info in our Knowledge Panels.”

“The Trust Indicators will provide a new level of accessibility and insight into the news that people on Facebook see day in and day out,” said Alex Hardiman, Head of News Products at Facebook.

A growing number of news outlets are expected to display the indicators over the next six months, with a second phase of news partners beginning implementation work soon. (IANS)


Social Media Companies Accelerate Removal of Online Hate Speech

A law providing for hefty fines for social media companies if they do not remove hate speech quickly enough went into force in Germany this year.

In this Jan. 4, 2018, file photo, a man demonstrates how he enters his Facebook page as he works on his computer in Brasilia, Brazil. Facebook is once again tweaking the formula it uses to decide what people see in their news feed. (VOA)

Social media companies Facebook, Twitter and Google’s YouTube have greatly accelerated their removals of online hate speech, reviewing over two thirds of complaints within 24 hours, new EU figures show.

The European Union has piled pressure on social media firms to increase their efforts to fight the proliferation of extremist content and hate speech on their platforms, even threatening them with legislation.

Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016 to review most complaints within a 24-hour timeframe.

The companies managed to meet that target in 81 percent of cases, EU figures seen by Reuters show, compared with 51 percent in May 2017 when the European Commission last monitored their compliance with the code of conduct.

EU Justice Commissioner Vera Jourova has said previously she does not want to see a removal rate of 100 percent as that could impinge on free speech. She has also said she is not in favor of legislating as Germany has done.


A law providing for hefty fines for social media companies if they do not remove hate speech quickly enough went into force in Germany this year.

“I do not hide that I am not in favor of hard regulation because the freedom of speech for me is almost absolute,” Jourova told reporters in December.

“In case of doubt it should remain online because freedom of expression is [in a] privileged position.”

Of the hate speech flagged to the companies, almost half of it was found on Facebook, the figures show, while 24 percent was on YouTube and 26 percent on Twitter.

The most common ground of hatred identified by the Commission was ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.

Following pressure from several European governments, social media companies stepped up their efforts to tackle extremist content online, including through the use of artificial intelligence.

The Twitter app is seen on a mobile phone in Philadelphia, April 26, 2017. (VOA)


The Commission will likely issue a recommendation, a soft-law instrument, at the end of February on how companies should take down extremist content related to militant groups, an official said, as such content is less nuanced than hate speech and needs to be taken offline more quickly. (VOA)