Monday September 23, 2019

How the U.S. Is Keeping a Check on Social Media's Spread of Fake News


As Notre Dame Cathedral burned, a posting on Facebook circulated – a grainy video of what appeared to be a man in traditional Muslim garb up in the cathedral.

Fact-checkers worldwide jumped into action and pointed out that the video and postings were fake, and the posts never went viral.

But this week, the Sri Lankan government temporarily shut down Facebook and other sites to stop the spread of misinformation in the wake of the Easter Sunday bombings in the country that killed more than 250 people. Last year, misinformation on Facebook was blamed for contributing to riots in the country.

Facebook, Twitter, YouTube and others are increasingly being held responsible for the content on their sites as the world grapples with events in real time. From lawmakers to the public, there has been a rising cry for the sites to do more to combat misinformation, particularly if it targets certain groups.

Shift in sense of responsibility

For years, some critics of social media companies, such as Twitter, YouTube and Facebook, have accused them of having done the minimum to monitor and stamp out misinformation on their platforms. After all, the internet platforms are generally not legally responsible for the content there, thanks to a 1996 U.S. federal law, Section 230 of the Communications Decency Act, that says they are not publishers. This law has been held up as a key protection for free expression online.

And, that legal protection has been key to the internet firms’ explosive growth. But there is a growing consensus that companies are ethically responsible for misleading content, particularly if the content has an audience and is being used to target certain groups.

An Indian man browses through the Twitter account of Alt News, a fact-checking website, in New Delhi, India, April 2, 2019. VOA

Tuning into dog whistles

At a recent House Judiciary Committee hearing on white supremacy and hate crimes, Congresswoman Sylvia Garcia, a Texas Democrat, questioned representatives from Facebook and Google about their policies.

“What have you done to ensure that all your folks out there globally know the dog whistles, know the keywords, the phrasing, the things that people respond to, so we can be more responsive and be proactive in blocking some of this language?” Garcia asked.

Each company takes a different approach.

Facebook, which perhaps has had the most public reckoning over fake news, won’t say it’s a media company. But it has taken partial responsibility for the content on its site, said Daniel Funke, a reporter at the International Fact-Checking Network at the Poynter Institute.

The social networking giant uses a combination of technology and humans to address false posts and messages that appear to target groups. It is collaborating with outside fact-checkers to weed out objectionable content, and has hired thousands to grapple with content issues on its site.

Swamp of misinformation

Twitter has targeted bots, automated accounts that spread falsehoods. But fake news often originates on Twitter and jumps to Facebook.

“They’ve done literally nothing to fight misinformation,” Funke said.

YouTube, owned by Google, has altered its algorithms to make problematic videos harder to find and to surface relevant factual content higher in search results. YouTube is “such a swamp of misinformation just because there is so much there, and it lives on beyond the moment,” Funke said.

Other platforms of concern are Instagram and WhatsApp, both owned by Facebook.

Some say what the internet companies have done so far is not enough.

“To use a metaphor that’s often used in boxing, truth is against the ropes. It is getting pummeled,” said Sam Wineburg, an education professor at Stanford University.

What’s needed, he said, is for the companies to take full responsibility: “This is a mess we’ve created and we are going to devote resources that will lower the profits to shareholders, because it will require a deeper investment in our own company.”

An activist wearing a Facebook CEO Mark Zuckerberg mask stands outside Portcullis House in Westminster as an international committee of parliamentarians met for a hearing on the impact of disinformation on democracy in London. VOA

Fact-checking and artificial intelligence

One of the fact-checking organizations that Facebook works with is FactCheck.org. It receives misinformation posts from Facebook and others. Its reporters check out the stories, then report on their own site whether the information is true or false. That information goes back to Facebook as well.

Facebook is “then able to create a database now of bad actors, and they can start taking action against them,” said Eugene Kiely, director of FactCheck.org. Facebook has said it will make it harder to find posts by people or groups that continually post misinformation.

The groups will see fewer financial incentives, Kiely points out. “They’ll get less clicks and less advertising.”

Funke predicts companies will use technology to semi-automate fact-checking, making it better, faster and able to match the scale of misinformation.


That will cost money, of course.

It also could slow the internet companies’ growth.

Does being more responsible mean making less money? Social media companies are likely to find out. (VOA)


Artificial Intelligence Creating New Possibilities for Personalisation This Year

Tech companies today are also attempting to bridge the gap between academia and the career market


Artificial Intelligence (AI) and cross-industry collaborations are creating new avenues for data collection and offering personalised services to users this year, according to a report.

Among other technology trends that are picking up this year are the convergence of the smart home and healthcare, autonomous vehicles coming for last-mile delivery and data becoming a hot-button geopolitical issue, according to the report titled “14 Trends Shaping Tech” from CB Insights.

“As a more tech-savvy generation ages up, we’ll see the smart home begin acting as a kind of in-home health aide, monitoring senior citizens’ health and well-being. We’ll see logistics players experiment with finally moving beyond a human driver,” said the report.

“And we’ll see cross-industry collaborations, whether via ancestry-informed Spotify playlists or limited edition Fortnite game skins,” it added.

In September 2018, Spotify partnered with Ancestry.com to utilise DNA data to create unique playlists for individuals.

Playlists reflect music linked to different ethnicities and regions. A person with ancestral roots in Bengaluru, for example, might see Carnatic violinists and Kannada film songs on their playlists.

DNA data is also informing how we eat. GenoPalate, for example, collects DNA info through saliva samples and analyses physiological components like an individual’s ability to absorb certain vitamins or how fast they can metabolize nutrients.

From there, it matches this information to nutrition analyses that it has conducted on a wide range of food and suggests a personalised diet. It also sells its own meal kits that use this information to map out menus.


“We’ll also see technology brands expand beyond their core products and turn themselves into a lifestyle,” said the report.

For example, as electric vehicle users need to wait for their batteries to charge for anywhere from 30 minutes to two hours, the makers of these vehicles are trying to turn this idle time into an asset.

China’s NioHouse couples charging stations with a host of activities. At the NioHouse, a user can visit the library, drop children off at daycare, co-work, and even visit a nap pod to rest while charging.

Nio has also partnered with fashion designer Hussein Chalayan to launch and sell a fashion line, Nio Extreme.


Tech companies today are also attempting to bridge the gap between academia and the career market.

Companies like the Lambda School and Flatiron School offer courses to train students on exactly the skills they will need to get a job, said the report.

These programs mostly focus on tech skills like computer science and coding. Training comes with the explicit goal of employment, and students only need to pay their tuition once they have landed a job that pays above a certain threshold.

Investors are also betting on the rise of digital goods. While these goods cannot be owned in the physical world, they come with clout, and offer personalisation and in-game experiences to otherwise one-size-fits-all characters, the research showed. (IANS)