The Rise of ‘Fake News’ in Internet Age: Influencing Election Results, Religious Sentiments and much more

For social media companies like Twitter and the rest, the ability to weed out false information or hate speech can be daunting

Representational image. VOA

November 30, 2016: A common narrative that emerged during this year’s presidential race was that of a country divided, which experts and pundits say explains the rise – and stunning electoral victory – of Republican Donald Trump over his Democratic opponent Hillary Clinton.

The other story of 2016 is the rise of so-called “fake news” and its spike on social media outlets. Facebook, in particular, has come under fire, having surpassed Google as the biggest driver of audience to news sites.

This week, Trump again invited controversy with what is now commonly called a “tweet storm,” claiming, without citing any evidence, that voter fraud during the November election denied him the popular vote.

The “fake news” phenomenon has rattled the web, not to mention mainstream journalists, scholars and ordinary users of social media, many of whom are tweeting and writing op-ed columns, news stories and guides on how to spot inaccurate news stories and fake news websites.

All this has put unprecedented pressure on Facebook, where, according to an analysis by Buzzfeed News, fake election stories generated more total engagement on Facebook than top election articles from 19 major news outlets in the final three months of the election campaign.

A screenshot of a Buzzfeed News graph on “fake news” analysis (courtesy of Buzzfeed News). VOA

The heat on Facebook founder Mark Zuckerberg prompted the company to tweak its algorithm to weed out inaccurate information, and later, as the outcry grew, publicly outline steps the company is taking to reduce what Zuckerberg called “misinformation.”

He prefaced his post with a familiar caveat:

“We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties.”

There are legitimate sites, journalists and scholars who are paying attention to the prevalence of fake news. Among them: Snopes.com, Columbia Journalism Review, The Poynter Institute and Melissa Zimdars, an assistant professor of communication and media at Merrimack College, who wrote a Google document for her students with tips on how to spot “fake news” sites and inaccurate news stories.

According to these fact-checkers, we must first understand what “fake news” is – and isn’t.

“We classify ‘fake news’ as specifically web sites that publish information that’s entirely fabricated,” said Kim LaCapria, content manager for Snopes.com, a website that tracks misinformation on the web.

“Right now ‘fake news’ is being applied to ‘slanted and/or inaccurate news,’” added LaCapria. “So there’s some conflation.”

And that conflation of information that can accurately be described as fake, misleading or perhaps only partially true, coupled with the warp speed of digital platforms like Facebook and Twitter, has created a perfect storm of confusion, said University of Connecticut philosophy professor and author Michael Lynch.

“Confusion and deception is happening…. and mass confusion about the importance of things like truth follow in the wake of that deception,” said Lynch, who wrote a column in The New York Times this week about the impact of “fake news” on the health of America’s political system. “And that is absolutely corrosive to democracy.”

LaCapria, like Lynch, has also seen first-hand that much of what gets branded “fake news” on social media is not verifiably false fabrication. “One long-circulating rumor held that Hillary Clinton was fired from the Watergate investigation for lying,” LaCapria said.

“If I recall correctly, we rated it mostly false because the claim originated with someone who had changed his story over the years. But in our politics category, the news is not fake per se. It’s often false, mixture, mostly false or unproven.”

LaCapria points out that distorted or false information has existed for a long time.

“This is the first real social media election we’ve ever experienced. And we had two social media candidates: [Bernie] Sanders and Trump,” she said.

President-elect Donald Trump gets ready for a question and answer session on Twitter during his campaign for the presidency. (@realdonaldtrump) VOA

“Now that people are upset about Trump, they’re looking at social media as a culprit. And it may be a mitigating factor, but this has all definitely been affecting politics hugely for many years.”

The Poynter Institute’s Alexios Mantzarlis, who leads the International Fact-Checking Network, agrees that there is a bit too much angst over “fake news.”

“Politicians distorting the truth isn’t a new phenomenon. Voters choosing politicians based on emotions rather than facts is not a new phenomenon,” Mantzarlis said in an email. “Moreover, we know from research that fact-checking can change readers’ minds.”

For social media companies like Twitter and the rest, the ability to weed out false information or hate speech can be daunting, no matter how savvy their back-end web engineers may be.

Facebook in essence acknowledged that recoding its algorithm wasn’t enough, when Zuckerberg posted his latest statement about the spreading of misinformation on his platform.

An unidentified person types on a computer keyboard in Los Angeles, Feb. 27, 2013. VOA

For Lynch, who wrote “The Internet of Us: Knowing More and Understanding Less in the Age of Big Data,” a book released earlier this year, there are solutions to help combat the ease of creating “fake news” sites and spreading misinformation across the web.

“There are a lot of smart people working on social media and at universities trying to find algorithmic solutions to misleading content and confusion and deception on the Internet. Right now it’s not working,” he said. “But right now I don’t think we should despair about not fixing our technology.”

In terms of fixes, Mantzarlis puts the burden on the media and on Facebook itself.

“For one, headline writers could avoid repeating a baseless claim without any indication that it is unfounded.” Mantzarlis also argues that Facebook will need to hire some human beings to vet content in tandem with creating smarter back-end technology.

“The algorithm itself will have to change … to recognize that ‘fake news,’ and the pages that consistently post them, [need] to get a reduced reach on [the Facebook] News Feed,” he said, adding that this tack will hit “fake news” purveyors where it hurts the most.

“After all, for many the incentive to publish this content is financial and if the reach is reduced, so is their income.”
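
Mantzarlis does not describe a specific mechanism, but the kind of reach reduction he outlines can be sketched in a few lines of code. The example below is a hypothetical illustration, not Facebook’s actual News Feed logic: it down-weights a post’s ranking score according to how often the page behind it has recently shared stories flagged by fact-checkers, with all names and thresholds invented for the sketch.

# Hypothetical sketch of the reach demotion described above; not Facebook's
# actual News Feed code. Names, thresholds and the scoring model are invented.
from dataclasses import dataclass

@dataclass
class Page:
    name: str
    recent_posts: int    # posts published in the evaluation window
    flagged_posts: int   # of those, how many fact-checkers marked as false

def demotion_factor(page: Page) -> float:
    """Return a multiplier in (0, 1]: the more flagged posts, the lower the reach."""
    if page.recent_posts == 0:
        return 1.0
    flagged_share = page.flagged_posts / page.recent_posts
    # Linear penalty: a page with half its posts flagged loses half its reach.
    return max(0.1, 1.0 - flagged_share)

def ranked_score(base_engagement_score: float, page: Page) -> float:
    """Apply the demotion factor to whatever score the feed would otherwise use."""
    return base_engagement_score * demotion_factor(page)

honest = Page("local-newsroom", recent_posts=40, flagged_posts=1)
hoaxer = Page("clickbait-farm", recent_posts=40, flagged_posts=25)
print(ranked_score(100.0, honest))  # 97.5  -> barely affected
print(ranked_score(100.0, hoaxer))  # 37.5  -> sharply reduced reach

Cutting reach rather than deleting content matters, in Mantzarlis’ telling, precisely because it also cuts into the income that motivates many of these publishers.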

Most agree that the overwhelming noise of the Internet — and the much-heralded freedom of speech ethos that rules it — will forever include distortions of fact and outright falsehoods. But ultimately the vast majority of web content is created by people. And in Lynch’s mind, that is where the real power to spot and call out misleading information lies.

“I’ve become convinced, as I’ve gone around talking to people, including those in Silicon Valley … that we as individuals, as people, need to start taking responsibility for what we believe. And for what we share and tweet.” (VOA)

Facebook Introduces Digital Training and Start-up Hubs in India to Promote Digital Economy

Facebook launched digital training in India. Pixabay

New Delhi, Nov 23: Facebook on Wednesday introduced its digital training and start-up training hubs in India aimed at helping small businesses and people grow by giving them the digital skills they need to compete in today’s digital economy.

Facebook said it plans to train more than half a million people in the country by 2020 through these online training hubs, which are being rolled out first in India.

The learning curriculum is personalised to the individual’s needs and is available in English and Hindi on mobile, announced the social network, which is used by 217 million people in India.

“We believe the best way to prepare India for a digital economy is by equipping people with the tools, knowledge, and skills they need to succeed,” said Ritesh Mehta, Head of Programmes, Facebook, India and South Asia.

To develop the learning curriculum, the social network worked with several organisations, including Digital Vidya, Entrepreneurship Development Institute of India (EDII), DharmaLife and the government’s StartupIndia initiative.

The curriculum includes vital skills for digital skill seekers and tech entrepreneurs, including how to protect their ideas, how to hire, how to go about getting funding, what regulations and legal hurdles they need to consider, how to build an online reputation, and a host of other critical skills.

This could mean teaching a small business owner how to create an online presence; helping a non-profit reach new communities and potential donors; or it could mean helping a tech entrepreneur turn their product idea into a startup through practical business advice.

Facebook said its digital training hub would provide free social and content marketing training for anyone – from students to business owners – who is looking to develop their digital knowledge and skills.

According to new research by Morning Consult in partnership with Facebook, small businesses’ use of digital tools translates into new jobs and opportunities for communities across the country.

Since 2011 Facebook has invested more than $1 billion to support small businesses globally.

The “Boost Your Business” and “SheMeansBusiness” initiatives have trained more than 60,000 small businesses, including 12,000 women entrepreneurs, in India, Facebook said. (IANS)

Facebook, Google, Bing and Twitter Join The Trust Project to Help Users Combat Fake News

In their bid to combat fake news and help readers identify trustworthy news sources, Facebook, Google, Twitter and several media organisations have joined the non-partisan "The Trust Project"

To combat fake news, Facebook, Twitter and Google have joined ‘The Trust Project’. Pixabay

San Francisco, Nov 19: In their bid to combat fake news and help readers identify trustworthy news sources, Facebook, Google, Twitter and several media organisations have joined the non-partisan “The Trust Project”.

“The Trust Project” is led by award-winning journalist Sally Lehrman of Santa Clara University’s Markkula Centre for Applied Ethics.

Starting from Friday, an icon will appear next to articles in Facebook News Feed.

When you click on the icon, you can read information on the organisations’ ethics and other standards, the journalists’ backgrounds, and how they do their work.

“Leading media companies representing dozens of news sites have begun to display ‘Trust Indicators’. These indicators, created by leaders from more than 75 news organisations, also show what type of information people are reading: news, opinion, analysis or advertising,” the university said in a statement.

Each indicator is signalled in the article and site code, providing the first standardised technical language for platforms to learn more from news sites about the quality and expertise behind journalists’ work.

“Google, Facebook, Bing and Twitter have all agreed to use the indicators and are investigating and piloting ideas about how best to use them to surface and display quality journalism,” the university said.
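
The statement does not reproduce the markup itself, but the idea of indicators “signalled in the article and site code” can be sketched as machine-readable metadata that a publisher embeds and a platform reads before labelling or ranking a story. The field names and parsing below are illustrative assumptions, not the Trust Project’s published vocabulary.

import json

# Hypothetical structured metadata a publisher might embed alongside an article.
# Field names are invented for illustration; the Trust Project defines its own.
article_metadata = json.dumps({
    "headline": "City council approves new transit plan",
    "typeOfWork": "news",   # as opposed to "opinion", "analysis" or "advertising"
    "authorExpertise": "Transportation reporter since 2009",
    "bestPractices": "https://example-news.org/ethics-policy",
    "citations": ["https://example.gov/council-minutes"],
})

def read_trust_indicators(raw: str) -> dict:
    """Parse the embedded metadata and pull out the fields a platform could use
    to label the story or inform how it is surfaced."""
    data = json.loads(raw)
    return {
        "type_of_work": data.get("typeOfWork", "unknown"),
        "has_ethics_policy": "bestPractices" in data,
        "has_citations": bool(data.get("citations")),
    }

print(read_trust_indicators(article_metadata))
# {'type_of_work': 'news', 'has_ethics_policy': True, 'has_citations': True}

A shared, standardised vocabulary is what lets four different platforms consume the same indicators without each negotiating a custom data feed with every newsroom.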

German press agency DPA, The Economist, The Globe and Mail, the Independent Journal Review, Mic, Italy’s La Repubblica and La Stampa, Trinity Mirror and The Washington Post are among the companies starting to go live with “Trust Indicators” this month.

The Institute for Non-profit News has developed a WordPress plug-in to facilitate broader implementation by qualified publishers.

“An increasingly sceptical public wants to know the expertise, enterprise and ethics behind a news story. The Trust Indicators put tools into people’s hands, giving them the means to assess whether news comes from a credible source they can depend on,” Lehrman explained.

The eight core indicators are: Best Practices; Author Expertise; Type of Work; Citations and References; Methods; Locally Sourced; Diverse Voices and Actionable Feedback.

News organisations like the BBC and Hearst Television have collaborated in defining the “Trust Indicator” editorial and technical standards, and in developing the processes for implementing them.

“Quality journalism has never been more important,” said Richard Gingras, vice president of news products at Google.

“We hope to use the Type of Work indicator to improve the accuracy of article labels in Google News, and indicators such as Best Practices and Author Info in our Knowledge Panels.”

“The Trust Indicators will provide a new level of accessibility and insight into the news that people on Facebook see day in and day out,” said Alex Hardiman, Head of News Products at Facebook.

A growing number of news outlets are expected to display the indicators over the next six months, with a second phase of news partners beginning implementation work soon. (IANS)

Send Your own Nudes to Facebook to Stop Revenge Porn

Facebook is testing a new method to stop revenge porn that requires you to send your own nudes to yourself via the social network's Messenger app

Send your own nudes to yourself via the Messenger app. Pixabay

Sydney, Nov 9: Facebook is testing a new method to stop revenge porn that requires you to send your own nudes to yourself via the social network’s Messenger app.

This strategy would help Facebook to create a digital fingerprint for the picture and mark it as non-consensual explicit media.

So if a relationship goes sour, you could take proactive steps to prevent any intimate images in the possession of your former love interest from being shared widely on Facebook or Instagram.

Facebook is partnering with an Australian government agency to prevent such image-based abuse, the Australian Broadcasting Corp reported.

If you’re worried your intimate photos will end up on Instagram or Facebook, you can get in contact with Australia’s e-Safety Commissioner. They might then tell you to send your own nudes to yourself on Messenger.

Facebook is coming up with a method to prevent revenge porn if you send your own nudes to yourself. Pixabay

“It would be like sending yourself your image in email, but obviously this is a much safer, secure end-to-end way of sending the image without sending it through the ether,” e-Safety Commissioner Julie Inman Grant told ABC.

Once the image is sent via Messenger, Facebook would use technology to “hash” it, which means creating a digital fingerprint or link.

“They’re not storing the image, they’re storing the link and using artificial intelligence and other photo-matching technologies,” Grant said.

“So if somebody tried to upload that same image, which would have the same digital footprint or hash value, it will be prevented from being uploaded,” she explained.
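
The mechanics Grant describes can be illustrated with a minimal sketch: hash the reported image, keep only the hash, and refuse any later upload whose hash matches. For simplicity the sketch uses an exact cryptographic hash from Python’s standard library; Facebook’s system, as described above, relies on photo-matching technology intended to also catch near-duplicate copies, which a plain hash cannot.

import hashlib

# Only fingerprints of reported images are stored, never the images themselves.
blocked_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Create a digital fingerprint (hash) of an image."""
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """Called when someone reports their own intimate image: keep the hash, discard the bytes."""
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a reported image."""
    return fingerprint(image_bytes) not in blocked_hashes

private_photo = b"...raw image bytes..."
report_image(private_photo)
print(allow_upload(private_photo))        # False: blocked
print(allow_upload(b"a different photo")) # True: allowed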

Australia is one of four countries taking part in the “industry-first” pilot, which uses “cutting-edge technology” to prevent the re-sharing of images on its platforms, Facebook’s Head of Global Safety Antigone Davis was quoted as saying.

“The safety and wellbeing of the Facebook community is our top priority,” Davis said. (IANS)