Twitter is getting tough on users who send abusive comments on its livestreaming platform Periscope, with the microblogging site saying it would suspend the accounts of habitual offenders from August 10.
The company will enforce its Periscope Community Guidelines more aggressively by reviewing and suspending accounts of repeat offenders, TechCrunch reported on Saturday.
“As part of our ongoing effort to build a safer service, we are launching more aggressive enforcement of our guidelines related to chats sent during live broadcasts,” according to a Periscope blog post.
The Periscope Community Guidelines apply to all broadcasts on both Periscope and Twitter, the post added.
Currently, Periscope’s comment moderation policy involves group moderation to determine if someone can continue chatting.
So when someone reports an abusive comment, Periscope randomly selects a few other viewers to review it and decide whether it is spam, abuse, or acceptable.
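The group-moderation flow described above can be sketched in a few lines of Python. This is an illustrative mock-up, not Periscope's actual implementation: the function name, the vote labels, and the idea of representing viewers as callables that return a label are all assumptions made for the example.

```python
import random
from collections import Counter

def review_reported_comment(comment, viewers, sample_size=5):
    """Hypothetical sketch of Periscope-style group moderation:
    a few randomly chosen viewers vote on a reported chat comment,
    labelling it 'spam', 'abuse', or 'ok'; the majority label wins."""
    # Pick a small random jury from the current viewers.
    jurors = random.sample(viewers, min(sample_size, len(viewers)))
    # Each juror is modelled as a callable that returns a label.
    votes = Counter(juror(comment) for juror in jurors)
    label, _ = votes.most_common(1)[0]
    return label
```

In this toy model, a viewer would be any function mapping a comment to a label; the real system presumably collects the votes asynchronously through the app.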
“Starting on August 10, we will also review and suspend accounts for repeatedly sending chats that violate our guidelines. If you are in a broadcast and see a chat that may violate our guidelines, please report it,” the Periscope blog post said.
“We’re committed to making sure everyone feels safe, whether you’re broadcasting or just tuning in. Look out for more changes across policies, product, and enforcement as we continue to make both Periscope and Twitter safer,” it added. (IANS)
Researchers have found that users who tweet about loneliness are much more likely to write on Twitter about mental well-being issues such as struggles with relationships, substance use and insomnia.
By applying linguistic analytic models to tweets, researchers were able to gain an insight into the topics and themes that could be associated with loneliness.
“Loneliness can be a slow killer, as some of the medical problems associated with it can take decades to manifest,” said the study’s lead author Sharath Chandra Guntuku, from the University of Pennsylvania in the US.
“If we are able to identify lonely individuals and intervene before the health conditions associated with the themes we found begin to unfold, we have a chance to help them much earlier in their lives. This could be very powerful and have long-lasting effects on public health,” Guntuku said.
By determining typical themes and linguistic markers posted to social media that are associated with people who are lonely, the team has uncovered some of the ingredients necessary to construct a ‘loneliness’ prediction system.
As part of the study, published in the journal BMJ, researchers analysed public accounts from users based in Pennsylvania and found that 6,202 accounts used words such as ‘lonely’ or ‘alone’ more than five times between 2012 and 2016.
Comparing the entire Twitter timelines of these users to a matched group who did not have such language in their posts, the researchers showed that ‘lonely’ users tweeted nearly twice as much and were much more likely to do so at night.
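The selection rule reported above, flagging accounts that use words like ‘lonely’ or ‘alone’ more than five times, along with the night-tweeting comparison, can be sketched as follows. This is a rough reconstruction for illustration only; the function names, data shapes, and the exact definition of “night” are assumptions, not details from the paper.

```python
from datetime import datetime

def flag_lonely_users(timelines, keywords=("lonely", "alone"), threshold=5):
    """Hypothetical sketch of the study's selection rule: flag accounts
    whose tweets mention loneliness-related words more than `threshold`
    times. `timelines` maps user -> list of (text, datetime) tuples."""
    flagged = []
    for user, tweets in timelines.items():
        hits = sum(
            any(kw in text.lower() for kw in keywords)
            for text, _ in tweets
        )
        if hits > threshold:
            flagged.append(user)
    return flagged

def night_tweet_fraction(tweets, start_hour=22, end_hour=6):
    """Fraction of a user's tweets posted late at night
    (illustrative definition of 'night': 10pm to 6am)."""
    night = sum(1 for _, ts in tweets
                if ts.hour >= start_hour or ts.hour < end_hour)
    return night / len(tweets) if tweets else 0.0
```

Comparing `night_tweet_fraction` between the flagged group and a matched control group would reproduce, in miniature, the kind of timing contrast the researchers report.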
When the tweets were analysed via several different linguistic analytic models, the users who posted about loneliness had an extremely high association with anger, depression and anxiety, when compared to the ‘non-lonely’ group.
Additionally, the lonely groups were significantly associated with tweeting about struggles with relationships (for example, using phrases like ‘want somebody’ or ‘no one to’) and substance use (‘smoke,’ ‘weed,’ and ‘drunk’).
“On Twitter, we found lonely users expressing a need for social support, and it appears that the use of expletives and the expression of anger is a sign of that being unfulfilled,” Guntuku said.
Users in the group that did not post about loneliness appeared to display more social connection, as they were found to be more likely to engage in conversations, especially by including other users’ names (using ‘@twitter_handle’) in their tweets.
In the future, the researchers hope to develop a better measure of the different dimensions of loneliness that online users are feeling and expressing. (IANS)