Friday February 21, 2020

Here’s how Cyberbullying Leads to Depression Among Youngsters

Online bullying more horrifying, leads to depression in youths

Cyberbullying amplifies symptoms of depression and post-traumatic stress disorder in young people. Pixabay

Researchers have found that cyberbullying amplifies symptoms of depression and post-traumatic stress disorder in young people, and health experts have stressed that in some cases it can be far more horrifying than physical bullying.

According to the experts, cyberbullying is when a child, teen or youngster becomes a target of actions by others — using computers, cellphones or other devices — that are intended to embarrass, humiliate, torment, threaten or harass.

It can start as early as age eight or nine, but the majority of cyberbullying cases take place in the teenage years, up to age 17.

The new study, published in the Journal of Clinical Psychiatry, addressed both the prevalence and factors related to cyberbullying in adolescent inpatients.

From setting beauty standards and norms to trolling, every act online has a significant effect on the psyche of internet users, especially youth and children, and can lead to stress and depression. Pixabay

“Even against a backdrop of emotional challenges in the kids we studied, we noted cyberbullying had an adverse impact. It’s real and should be assessed,” said study co-author Philip D. Harvey, Professor at University of Miami in the US.

According to the researchers, children with a history of being abused were found to be more likely to be cyberbullied.

The study of 50 adolescent psychiatric inpatients aged 13 to 17 examined the prevalence of cyberbullying and related it to social media usage, current levels of symptoms and histories of adverse early life experience.

In the study, conducted from September 2016 to April 2017, the research team asked participants to complete two childhood trauma questionnaires and a cyberbullying questionnaire.

Twenty per cent of participants reported that they had been cyberbullied in the two months before their admission.

According to the researchers, half of the participants were bullied by text messages and half on Facebook.

Transmitted pictures or videos, Instagram, instant messages and chat rooms were other cyberbullying vehicles, the study said.

Children with a history of being abused suffer from depression. Pixabay

Those who were bullied had significantly higher severity of post-traumatic stress disorder (PTSD), depression, anger, and fantasy dissociation than those who were not bullied.

According to the findings, participants who reported being cyberbullied also reported significantly higher levels of lifetime emotional abuse on the study’s Childhood Trauma Questionnaire than those who were not bullied.

The internet not only occupies a huge part of our lives nowadays; it actually dominates today’s generation’s lives, according to the expert.

“From setting beauty standards and norms to trolling, every act has a significant effect on the psyche of internet users, especially on youth and children; it leads to stress and depression as well,” Mrinmay Kumar Das, Senior Consultant, Department of Behavioural Sciences, Jaypee Hospital in Noida, told IANS.


To reduce the risk of falling into this trap, Das suggested: “Keep an eye on the people you interact with online, keep your personal information or private details safe. Also keep in mind that your children who apparently act normal may also be dealing with cyberbullying.”

“Hence keep communicating with your children, rather than scolding them and forcefully limiting their internet use, support them to come out of this depressing phase, encourage them to indulge in other activities like games, music, etc,” Das added. (IANS)


Find out How Cyborgs, Trolls and Bots Can Fill the Internet with Misinformation

Cyborgs, Trolls and Bots: A Guide to Online Misinformation

Misinformation is defined as any false information, regardless of intent, including honest mistakes or misunderstandings of the facts. Pixabay

Cyborgs, trolls and bots can fill the internet with lies and half-truths. Understanding them is key to learning how misinformation spreads online.

As the 2016 election showed, social media is increasingly used to amplify false claims and divide Americans over hot-button issues including race and immigration. Researchers who study misinformation predict it will get worse leading up to this year’s presidential vote. Here’s a guide to understanding the problem:

MISINFORMATION VS. DISINFORMATION

Political misinformation has been around since before the printing press, but the internet has allowed falsehoods, conspiracy theories and exaggerations to spread faster and farther than ever.

Misinformation is defined as any false information, regardless of intent, including honest mistakes or misunderstandings of the facts. Disinformation, on the other hand, typically refers to misinformation created and spread intentionally as a way to confuse or mislead.

An illustration of hacking, spreading misinformation and cyberattacks. VOA

Misinformation and disinformation can appear in political ads or social media posts. They can include fake news stories or doctored videos. One egregious example of disinformation from last year was a video of House Speaker Nancy Pelosi that was slowed down to make her sound as if she were slurring her words.

Research indicates that false claims spread more easily than accurate ones, possibly because they are crafted to grab attention.

Scientists at the Massachusetts Institute of Technology analyzed more than 126,000 stories, some true and some false, that were tweeted millions of times from 2006 through the end of 2016, and found that the false ones spread farther and faster than the true ones.

Online misinformation has been blamed for deepening America’s political polarization and contributing to distrust in government. The risks were highlighted in 2016 when Russian trolls created fake accounts to spread and amplify social media posts about controversial issues.

WAR OF THE BOTS AND CYBORGS

The disposable foot soldiers in this digital conflict are bots. In the social media context, these autonomous programs can run accounts to spread content without human involvement.

Many are harmless, tweeting out random poems or pet photos. But others are up to no good and designed to resemble actual users.

One study by researchers at the University of Southern California analyzed election-related tweets sent in September and October 2016 and found that 1 in 5 were sent by a bot. The Pew Research Center concluded in a 2018 study that accounts suspected of being bots are responsible for as many as two-thirds of all tweets that link to popular websites.

While flesh-and-blood Twitter users will often post a few times a day, about a variety of subjects, the most obvious bots will tweet hundreds of times a day, day and night, and often only on a specific topic. They are more likely to repost content rather than create something original.
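
That pattern lends itself to a crude scoring heuristic. Below is a minimal sketch in Python of what such a check might look like; the thresholds, input fields and scoring scheme are illustrative assumptions for this guide, not the actual criteria used by the researchers quoted here.

    # A minimal, illustrative bot-likelihood heuristic. The thresholds and
    # account fields below are assumptions for this example, not the real
    # detection criteria used by USC, Pew or the Digital Forensic Research Lab.
    def bot_likelihood_score(tweets_per_day, repost_ratio,
                             topic_concentration, posts_around_the_clock):
        """Return a rough 0-4 score; higher means more bot-like behaviour."""
        score = 0
        if tweets_per_day > 100:         # humans usually post a few times a day
            score += 1
        if repost_ratio > 0.9:           # mostly reposts, little original content
            score += 1
        if topic_concentration > 0.8:    # almost every post on a single topic
            score += 1
        if posts_around_the_clock:       # active day and night, without pause
            score += 1
        return score

    # An account tweeting 300 times a day, almost entirely retweets, on one
    # topic, at all hours, scores as highly bot-like.
    print(bot_likelihood_score(300, 0.95, 0.9, True))  # -> 4

As the grandmother anecdote below shows, an account can trip several of these checks and still turn out to be human, which is why researchers combine many such signals before drawing conclusions.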

And then there’s the cyborg, a kind of hybrid account that combines a bot’s tirelessness with human subtlety. Cyborg accounts are those in which a human periodically takes over a bot account to respond to other users and to post original content. They are more expensive and time consuming to operate, but they don’t give themselves away as robots.

“You can get a lot from a bot, but maybe it’s not the best quality,” said Emilio Ferrara, a data science researcher at the University of Southern California who co-wrote the study on Twitter bots. “The problem with cyborgs is they are much harder to catch and detect.”

SPOT THE BOTS

Bots can be hard to spot, even for the best researchers.

“We have 12 ways that we spot a bot, and if we hit seven or eight of them we have pretty high confidence,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, a Washington, D.C.-based organization that studies connections between social media, cybersecurity and government.

Nonetheless, Brookie recalled the case of a Twitter account from Brazil that was posting almost constantly — sometimes once per minute — and displayed other bot-like characteristics. And yet, “It was a little grandma, who said, ‘This is me!’”

Their prevalence and the difficulty of identifying them have made bots into a kind of digital bogeyman and transformed the term into an insult, used to dismiss other social media users with different opinions.

Michael Watsey, a 43-year-old New Jersey man who often tweets his support for President Donald Trump, said he has been repeatedly called a Russian bot by people he argues with online. The accusations prompted Twitter to temporarily suspend his account more than once, forcing him to verify he is a human.

“All I’m trying to do is use my First Amendment right to free speech,” he said. “It’s crazy that it’s come to this.”

TROLLS AND SOCK PUPPETS

The word troll once referred to beasts of Scandinavian mythology who hid under bridges and attacked travelers. Now it also refers to people who post online to provoke others, sometimes for their own amusement and sometimes as part of a coordinated campaign.

Sock puppets are another oddly named denizen of social media, in this case a type of imposter account. While some users may use anonymous accounts simply to avoid identifying themselves, sock-puppet accounts are used by the owner to attack their critics or praise themselves.

Misinformation and disinformation can appear in political ads or social media posts. Pixabay

FAKED VIDEOS: DEEP, CHEAP AND SHALLOW

Deepfakes are videos that have been digitally created with artificial intelligence or machine learning to make it appear something happened that did not. They are seen as an emerging threat, as improvements in video editing software make it possible for tricksters to create increasingly realistic footage of, say, former President Barack Obama delivering a speech he never made, in a setting he never visited. They are expensive and difficult to create — especially in a convincing way.

Facebook announced last month that it would ban deepfake videos — with exceptions for satire. Beginning in March, Twitter will prohibit doctored videos, photography and audio recordings “likely to cause harm.”

By contrast, shallowfakes, cheapfakes or dumbfakes are videos that have been doctored using more basic techniques, such as slowing down or speeding up footage or cutting it.

Because they’re easy and inexpensive to make, cheapfakes can be every bit as dangerous as their fancier cousin, the deepfake.

“Deepfakes are getting more realistic and easier to do,” said John Pavlik, a journalism professor at Rutgers University who studies how technology and the internet are changing communication habits. “But you don’t have to have special software to make these simpler ones.”


Researchers who study Americans’ changing media habits recommend that people turn to a variety of sources and perspectives for their news, use critical thinking when evaluating information on social media, and think twice about reposting viral claims. Otherwise, they say, misinformation will continue to flow, and users will continue to spread it.

“The only solution,” Ferrara said, “is education.” (VOA)