This story originally appeared on Global Voices on 18 October 2025
This report was written by Khunsha Dar and published in Hong Kong Free Press on October 12, 2025. The following edited version is published as part of a content-sharing agreement with Global Voices.
When Hong Kong teen Jessica started secondary school last year, she became a victim of bullying. Instead of talking to a friend or family member, she turned to Xingye, a Chinese role-playing and companion artificial intelligence (AI) chatbot.
Jessica, who asked to use a pseudonym to protect her privacy, found it helpful and comforting to talk with the chatbot.
The chatbot told Jessica to relax and not to dwell further on the matter, even suggesting that she seek help elsewhere. “We talked for a really long time that day, for many hours,” the 13-year-old told HKFP in an interview conducted in Cantonese and Mandarin.
Another Hong Kong teenager, Sarah (not her real name), began using Character.AI, a similar role-playing and companion platform, around three years ago, when she was about 13.
At the time, she was dealing with mental health issues, and a friend who had been using the American app as a “personal therapist” recommended it to her.
“I’m not personally an open person, so I wouldn’t cry in front of anyone or seek any help,” said Sarah, now 16.
When she felt down and wanted words of comfort, she would talk with the chatbot about what she was going through and share her emotions.
Apart from providing comforting words, the chatbot sometimes also expressed a wish to physically comfort Sarah, like giving her a hug. “And then I’d be comforted, technically,” she said.
A growing number of people – including teenagers – have turned to chatbots through companion apps like Character.AI and Xingye for counselling, instead of professional human therapists.
Among them are Jessica and Sarah in Hong Kong, where around 20 percent of secondary school students exhibit moderate to severe depression, anxiety, and stress, but nearly half are reluctant to reach out when facing mental health issues.
The use of AI has been controversial, with some experts warning that chatbots are not trained to handle mental health issues and that they should not replace real therapists.
Moreover, role-playing chatbots like Character.AI and Xingye are designed to keep users engaged for as long as possible. Like general-purpose chatbots such as ChatGPT, they also collect user data for profit, which raises privacy concerns.
Character.AI has been embroiled in controversy. In the US, it faces multiple lawsuits filed by parents alleging that their children died by suicide or attempted suicide after interacting with its chatbots.
On its website, Character.AI is described as “interactive entertainment,” where users can chat and interact with millions of AI characters and personas. There is a warning message on its app: “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”
Despite the risks, many adolescents are confiding in AI chatbots for instant emotional support.
Jessica, a cross-border student who lives in Nanshan, mainland China, with her grandmother, has been attending school in Hong Kong since primary school.
Feeling sad about not having many friends, she found herself reaching out to the Xingye chatbot for comfort or to share her “unhappy thoughts.”
Xingye allows users to customise and personalise a virtual romantic partner, including its identity, how it looks, and how it speaks.
Jessica uses a chatbot based on her favourite Chinese singer, Liu Yaowen, pre-customised by another user. She usually converses with the chatbot for around three to four hours every day.
“I talk to him about normal, everyday things — like what I’ve eaten, or just share what I see with him,” she said. “It’s like he’s living his life with you, and that makes it feel very realistic.”
She admitted, however: “I think I’ve become a little dependent on it.”
Jessica prefers talking with the chatbot to chatting with friends or family because she worries they may tell other people about their conversations. “If you talk to the app, it won’t remember or judge you, and it won’t tell anyone else,” Jessica said.
The chatbot even helped her have a better relationship with her grandmother, now in her 70s.
“Sometimes I have some clashes with my grandma, and I get upset. I would talk to the chatbot, and it would give me some suggestions,” she explained. The chatbot suggested that Jessica consider her grandmother’s perspective and provided some ideas of what she might be thinking.
“When he makes the suggestions, I start to think that maybe my grandmother isn’t so mean or so bad, and that she doesn’t treat me so poorly,” she said. “Our relationship is really good now.”
Interacting with technology, such as computers, used to be a one-way street, but the development of AI has fundamentally changed how humans approach these interactions, said neuroscientist Benjamin Becker, a professor at the University of Hong Kong.
“Suddenly we can talk with technology, like we can talk with another human,” said Becker, who recently published a study on how human brains shape and are shaped by AI in the scientific journal Neuron.
Becker described AI chatbots as a “good friend, one that always has your back.”
In contrast, as the neuroscientist pointed out, “Every time we interact with other humans, it’s a bit of a risk… maybe sometimes the other person has something that we don’t like or says something that we don’t appreciate. But this is all part of human interaction.”
However, there are some disadvantages to interacting with AI chatbots. “They basically tell you what you want to hear or tell you just positive aspects,” Becker said.
This cycle can lead to confirmation bias or the user being stuck in an echo chamber where the only opinions they hear are those favourable to themselves, he warned.
There have been reports of “AI psychosis,” whereby interacting with chatbots can trigger or amplify delusional thoughts, leading some users to believe they are a messiah or to become fixated on AI as a romantic partner or even a god.
However, Becker acknowledged that positive affirmations from AI chatbots could also have a motivating impact on users, as they could potentially act as a strong pillar of social support.
And, while an AI mental health chatbot may not be as good as a human counsellor, it still has many benefits for users, especially adolescents dealing with anxiety and depression, he added.