Wednesday September 18, 2019

Researchers Claim People Trust Human-Generated Profiles More Than AI-Generated Ones

"The more participants believed a profile was AI-generated, the less they tended to trust the host, even though the profiles they rated were written by the actual hosts"

As AI becomes more commonplace and powerful, foundational guidelines, ethics and practice become vital. Pixabay

People trust human-generated profiles more than profiles generated by artificial intelligence, particularly in online marketplaces, reveals a study in which researchers explored whether users trust algorithmically optimised or generated representations.

The research team conducted three experiments, enlisting hundreds of participants on Amazon Mechanical Turk to evaluate real, human-generated Airbnb profiles.

When researchers informed them that they were viewing either all human-generated or all AI-generated profiles, participants didn’t seem to trust one more than the other. They rated the human- and AI-generated profiles about the same.


That changed when participants were informed they were viewing a mixed set of profiles. Left to decide whether the profiles they read were written by a human or an algorithm, users distrusted the ones they believed to be machine-generated.

“Participants were looking for cues that felt mechanical versus language that felt more human and emotional,” said Maurice Jakesch, a doctoral student in information science at Cornell Tech in America.

“The more participants believed a profile was AI-generated, the less they tended to trust the host, even though the profiles they rated were written by the actual hosts,” said a researcher.

“We’re beginning to see the first instances of artificial intelligence operating as a mediator between humans, but it’s a question of: ‘Do people want that?’”


The research team from Cornell University and Stanford University found that if everyone uses algorithmically-generated profiles, users trust them. But if only some hosts choose to delegate writing responsibilities to artificial intelligence, they are likely to be distrusted.


As AI becomes more commonplace and powerful, foundational guidelines, ethics and practice become vital.

The study also suggests there are ways to design AI communication tools that improve trust for human users. “Design and policy guidelines and norms for using AI-mediated communication are worth exploring now,” said Jakesch. (IANS)


Fake Accounts On Social Media Now Able To Copy Human Behaviour

Fake accounts enabled by Artificial Intelligence (AI) on social media have evolved and are now able to copy human behaviour

Social Media Icons. VOA

Researchers, including one of Indian-origin, have found that bots or fake accounts enabled by Artificial Intelligence (AI) on social media have evolved and are now able to copy human behaviour to avoid detection.

For the study, published in the journal First Monday, the research team from the University of Southern California examined bot behaviour during the 2018 US midterm elections compared with bot behaviour during the 2016 US presidential elections.

“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more effort into mitigating abuse and stifling automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots to produce more human-like content,” said study lead author Emilio Ferrara.


The researchers studied almost 250,000 active social media users who discussed the US elections in both 2016 and 2018, and detected over 30,000 bots.

They found that bots in 2016 were primarily focused on retweets and high volumes of tweets around the same message.

However, as human social activity online has evolved, so have bots. In the 2018 election season, just as humans retweeted less than they had in 2016, bots were less likely to share the same messages in high volume.

Bots, the researchers discovered, were more likely to employ a multi-bot approach as if to mimic authentic human engagement around an idea.
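The 2016-style signal described above, accounts that push the same message in high volume, can be illustrated with a simple heuristic. The sketch below is not the study's actual detection method (the researchers used more sophisticated bot-detection tools); the function names and the 0.6 threshold are illustrative assumptions.

```python
from collections import Counter


def repeated_message_ratio(tweets):
    """Fraction of an account's tweets taken up by its single
    most common message. High values resemble the 2016-style
    behaviour of retweeting the same message in high volume."""
    if not tweets:
        return 0.0
    # Count how often the most frequent message appears.
    most_common_count = Counter(tweets).most_common(1)[0][1]
    return most_common_count / len(tweets)


def looks_like_2016_style_bot(tweets, threshold=0.6):
    # Flag accounts whose timeline is dominated by one message.
    # The threshold is a made-up example value, not from the study.
    return repeated_message_ratio(tweets) >= threshold
```

An account posting the same slogan eight times out of ten would score 0.8 and be flagged, while an account with ten distinct messages would score 0.1. As the article notes, 2018-era bots evade exactly this kind of check by varying their messages across coordinated accounts.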


Also, during the 2018 elections, as humans became much more likely to engage through replies, bots tried to establish a voice, add to the dialogue and engage through polls, a strategy typical of reputable news agencies and pollsters, possibly aimed at lending legitimacy to these accounts.

In one example, a bot account posted an online Twitter poll asking if federal elections should require voters to show ID at the polls. It then asked Twitter users to vote and retweet.

“We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences,” Ferrara said. (IANS)