Thursday, September 19, 2019

Twitter Seeks Help of Academic Scholars to Improve Healthy Conversation on its Platform

They will be studying how people use Twitter, and how exposure to a variety of perspectives and backgrounds can decrease prejudice and discrimination

Twitter gives more freedom to report spam, fake accounts. Pixabay

In line with its efforts to curb abuse and harassment of users on its platform, Twitter has now selected two research projects that aim to develop metrics to measure the “health” of public conversation.

“An update! We’ve selected 2 partners from 230 idea submissions. Our first goal is working to measure the ‘health’ of public conversation, and that measurement be open and defined by third parties (not by us),” Twitter CEO Jack Dorsey said in a tweet on Monday.

Earlier this year, the microblogging site said it would work to increase the collective health, openness, and civility of the dialogue on its service.

As part of these efforts, Twitter initiated a programme to suspend millions of fake accounts and in June announced the acquisition of Smyte, a San Francisco-based technology company that specialises in safety, spam, and security issues.

One of the two projects that the social network has now selected will be led by a political science professor at Leiden University in the Netherlands.

This project will develop two sets of metrics, examining how communities form around political discussions on Twitter and the challenges that may arise as those discussions develop, Twitter said in a blog post.

The Leiden-led project will primarily focus on two key challenges: echo chambers and uncivil discourse.

According to the team’s past findings, echo chambers, which form when discussions involve only like-minded people and perspectives, can increase hostility and promote resentment towards those outside the conversation.

Twitter on a smartphone device. Pixabay

The project’s first set of metrics will assess the extent to which people acknowledge and engage with diverse viewpoints on Twitter.
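
One plausible way to formulate such a metric, sketched below in Python, is as the entropy of the mix of viewpoints a user engages with; the labels and the entropy measure are illustrative assumptions here, not the Leiden team’s published method.

```python
import math
from collections import Counter

def viewpoint_diversity(engaged_leanings):
    """Shannon entropy (in bits) over the viewpoint labels of the accounts
    a user engages with: 0 means a pure echo chamber, higher means more
    exposure to different perspectives."""
    counts = Counter(engaged_leanings)
    total = sum(counts.values())
    return sum((n / total) * math.log2(total / n) for n in counts.values())

print(viewpoint_diversity(["left"] * 12))                    # 0.0 (echo chamber)
print(viewpoint_diversity(["left", "right", "centre"] * 4))  # ~1.585 (even mix)
```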

The second set of metrics will focus on incivility and intolerance in Twitter conversations. The group has found that while incivility, which breaks norms of politeness, can be problematic, it can also serve important functions in political dialogue.

In contrast, intolerant discourse — such as hate speech, racism, and xenophobia — is an inherent threat to democracy.

The team will therefore work on developing algorithms that distinguish between these two behaviours, Twitter said.
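
In practice, distinguishing the two can be framed as a supervised text-classification problem. The minimal sketch below illustrates that framing; the toy examples, labels, and model choice are assumptions for illustration, not the Leiden team’s actual algorithm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (illustrative only): uncivil speech breaks
# politeness norms, intolerant speech targets groups of people.
texts = [
    "Your argument is complete nonsense.",           # uncivil
    "That policy will never work, wake up.",         # uncivil
    "People like you don't belong in this country",  # intolerant
    "That group should be banned from voting",       # intolerant
    "I see your point, but the data disagree.",      # civil
    "Thanks for sharing, interesting thread.",       # civil
]
labels = ["uncivil", "uncivil", "intolerant", "intolerant", "civil", "civil"]

# TF-IDF features feeding a multi-class logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["You clearly haven't read the bill."]))
```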

The other project will be led by scholars from the University of Oxford and the University of Amsterdam.

They will be studying how people use Twitter, and how exposure to a variety of perspectives and backgrounds can decrease prejudice and discrimination.


As part of the project, text classifiers for language commonly associated with positive sentiment, cooperative emotionality, and integrative complexity will be adapted to the structure of communication on Twitter, the microblogging site said.
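
One common way to adapt general-purpose text classifiers to tweets is to first normalise Twitter-specific structure such as @-mentions, hashtags, and shortened links. The sketch below shows such a preprocessing pass; it is an illustrative assumption, not the Oxford and Amsterdam teams’ method.

```python
import re

def normalize_tweet(text):
    """Fold Twitter-specific structure into generic tokens so that a
    classifier trained on ordinary text can be applied to tweets."""
    text = re.sub(r"https?://\S+", "<url>", text)  # replace links
    text = re.sub(r"@\w+", "<mention>", text)      # fold @-mentions
    text = re.sub(r"#(\w+)", r"\1", text)          # keep hashtag words
    return re.sub(r"\s+", " ", text).strip()

print(normalize_tweet("Great thread @jack! Details at https://t.co/xyz #health"))
# -> "Great thread <mention>! Details at <url> health"
```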

“Ensuring we have thoughtful, comprehensive metrics to measure the health of public conversation on Twitter is crucial to guiding our work and making progress, and both of our partners will help us continue to think critically and inclusively so we can get this right,” Vijaya Gadde, Twitter’s Legal, Policy and Trust & Safety Lead, and David Gasca, Director of Product Management for Health at Twitter, wrote in the blog post.

“We know this is a very ambitious task, and look forward to working with these two teams, challenging ourselves to better support a thriving, healthy public conversation,” they added. (IANS)


Researchers Develop New Algorithm to Identify Cyber-bullies on Twitter

“In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,” said Blackburn

FILE - A man reads tweets on his phone in front of a displayed Twitter logo. VOA

Researchers have developed machine learning algorithms that can identify bullies and aggressors on Twitter with 90 per cent accuracy.

For the study, published in the journal ACM Transactions on the Web, the research team analysed the behavioural patterns exhibited by abusive Twitter users and their differences from other users.

“We built crawlers — programs that collect data from Twitter via a variety of mechanisms,” said study researcher Jeremy Blackburn from Binghamton University in the US.

“We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them,” Blackburn said.
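
A minimal sketch of that kind of crawler, written in Python against Twitter’s public v2 REST endpoints with the `requests` library, might look like the following; the bearer-token placeholder and the choice of fields are illustrative assumptions, not the study’s actual tooling.

```python
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential, not a real token
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}
API = "https://api.twitter.com/2"

def get_user(username):
    """Look up a user's id and public profile metrics."""
    r = requests.get(f"{API}/users/by/username/{username}",
                     params={"user.fields": "public_metrics,description"},
                     headers=HEADERS)
    r.raise_for_status()
    return r.json()["data"]

def get_tweets(user_id, max_results=100):
    """Fetch one page of a user's recent tweets."""
    r = requests.get(f"{API}/users/{user_id}/tweets",
                     params={"max_results": max_results},
                     headers=HEADERS)
    r.raise_for_status()
    return r.json().get("data", [])

def get_followers(user_id, max_results=100):
    """Fetch one page of the user's follower network."""
    r = requests.get(f"{API}/users/{user_id}/followers",
                     params={"max_results": max_results},
                     headers=HEADERS)
    r.raise_for_status()
    return r.json().get("data", [])
```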

The researchers then performed natural language processing and sentiment analysis on the tweets themselves, as well as a variety of social network analyses on the connections between users.
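
The snippet below sketches those two steps on toy data: per-tweet sentiment scoring, here with NLTK’s VADER analyser as an assumed stand-in for the authors’ tooling, and a simple centrality measure over a who-follows-whom graph using networkx.

```python
import networkx as nx
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()  # requires nltk.download("vader_lexicon")

# Sentiment analysis: compound score runs from -1 (negative) to +1 (positive).
tweets = ["I hope you have a terrible day.", "Congrats on the new job!"]
for t in tweets:
    print(t, sia.polarity_scores(t)["compound"])

# Social network analysis: edge u -> v means u follows v, so in-degree
# centrality highlights the accounts that attract the most followers.
g = nx.DiGraph([("alice", "bob"), ("carol", "bob"), ("bob", "alice")])
print(nx.in_degree_centrality(g))
```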

Twitter is a social media app that encourages short tweets and brief conversations. Pixabay

They developed algorithms to automatically classify two specific types of offensive online behaviour: cyber-bullying and cyber-aggression.

The algorithms were able to identify abusive users on Twitter, such as those who send death threats or make racist remarks, with 90 per cent accuracy.


“In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,” said Blackburn.
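
That description maps naturally onto a linear classifier, which learns one weight per feature from labelled examples. The toy sketch below shows the idea; the examples and the choice of model are illustrative assumptions, not the study’s actual features or setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled examples (illustrative only): 1 = bully, 0 = typical user.
texts = [
    "you are worthless and everyone hates you",
    "go away nobody wants you here",
    "great game last night, well played",
    "love this song, on repeat all day",
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Each learned coefficient is the "weight" of one word feature: positive
# values push a tweet toward the bully class, negative toward typical.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:5])
```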

“Our research indicates that machine learning can be used to automatically detect users that are cyber-bullies, and thus could help Twitter and other social media platforms remove problematic users,” Blackburn added. (IANS)