Do you follow?: How technology can exacerbate ‘information disorder’

In an era of algorithmic feeds and generative-AI content, technology isn’t just a messenger; it can amplify information disorder
Technology’s role in the spread of low-credibility content and algorithmic filter bubbles shows that ‘following’ online may fuel, not fix, information disorder. Photo by Pixabay

This story by Safa originally appeared on Global Voices on November 10, 2025.

Social media has been a key tool of information and connection for people from traditionally marginalized communities. Young people can find communities there, such as LGBTQ+ friendly spaces, that may be out of reach in real life. In the words of one teen, “Throughout my entire life, I have been bullied relentlessly. However, when I’m online, I find that it is easier to make friends… […] Without it, I wouldn’t be here today.” But experts say that social media has been “both the best thing […] and it’s also the worst” to happen to the trans community, with hate speech and verbal abuse resulting in tragic real-life consequences. “Research to date suggests that social media experiences may be a double-edged sword for LGBTQ+ youth that can protect against or increase mental health and substance use risk.”

In January 2025, Mark Zuckerberg announced that Meta (which owns Facebook and Instagram) would end its third-party fact-checking program in favor of a “community notes” model like the one on X (formerly Twitter). Meta’s decision also included ending policies that protect LGBTQ+ users. Misinformation is an ongoing issue across social media platforms, reinforced and boosted by the design of the apps, where the posts that draw the most clicks and likes earn the greatest rewards, whether in attention or money. Research found that “the 15% most habitual Facebook users were responsible for 37% of the false headlines shared in the study, suggesting that a relatively small number of people can have an outsized impact on the information ecosystem.”

Meta’s decision to end its third-party fact-checking program has raised alarm bells among journalists, human rights organizations, and researchers. The UN’s High Commissioner for Human Rights, Volker Türk, said in response: “Allowing hate speech and harmful content online has real world consequences.” Meta has been implicated in or accused of supercharging the genocide of the Rohingya in Myanmar, as well as fueling ethnic violence in Kenya, Ethiopia, and Nigeria, at least in part due to the rampant misinformation on its platform.

“We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook … are affecting societies around the world,” said one leaked internal Facebook report from 2019. “We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.” The International Fact-Checking Network responded to the end of the nine-year fact-checking program in an open letter shortly after Zuckerberg’s 2025 announcement, stating that “the decision to end Meta’s third-party fact-checking program is a step backward for those who want to see an internet that prioritizes accurate and trustworthy information.”

Unverifiable posts, disordered feeds

The algorithms behind social media platforms control which information is prioritized, repeated, and recommended to people in their feeds and search results. Yet despite numerous reports, studies, and shifting user behaviors, the companies themselves have done little to adapt their user interface designs to modern patterns of interaction or to facilitate meaningful fact-checking by users.

Even when media outlets publish corrections to false information and unsubstantiated claims they have perpetuated, it isn’t enough to reverse the damage. As First Draft News describes: “It is very, very difficult to dislodge [misinformation] from your brain.” When false information is published online or in the news and begins circulating, the “damage is done,” so to speak, even if the content is removed within minutes or hours. Corrections and clarifying statements rarely get as much attention as the original piece of false information, and even when they are seen, they may not be internalized.

Algorithms are so prevalent that, at first glance, they may seem trivial, but they are deeply consequential. Well-known cases, like the father who found out his daughter was pregnant through what was essentially an algorithm, or the father whose Facebook Year in Review “celebrated” the death of his daughter, illustrate why the creators, developers, and designers of algorithmically curated content should account for worst-case scenarios. Edge cases, although rare, are significant and warrant inspection and mitigation.

Pushing audiences further down the rabbit hole, a multitude of reports and studies have found that recommendation algorithms across social media can radicalize audiences through the content they prioritize and serve. “Moral outrage, specifically, is probably the most powerful form of content online.” A 2021 study found that TikTok’s algorithm led viewers from transphobic videos to violent far-right content, including racist, misogynistic, and ableist messaging. “Our research suggests that transphobia can be a gateway prejudice, leading to further far-right radicalization.” YouTube was also once dubbed the “radicalization engine,” and it still seems to be struggling with its recommendation algorithms, as in the more recent report of YouTube Kids sending young viewers down eating disorder rabbit holes. Ahead of the 2025 German elections, researchers found that social media feeds across platforms, but especially on TikTok, skewed right-wing.

An erosion of credibility

People are increasingly seeking information in ways that go beyond traditional news media outlets. A 2019 report found that teens were getting most of their news from social media. A 2022 article explained how many teens use TikTok more than Google to find information. That same year, a study found that adults under 30 trust information from social media almost as much as they trust national news outlets. A 2023 multi-country report found that fewer than half (40 percent) of respondents “trust news most of the time.” Researchers warned that the trajectory of information disorder could result in governments steadily taking more control of information, adding that “access to highly concentrated tech stacks will become an even more critical component of soft power for major powers to cement their influence.”

Indonesia’s 2024 elections saw AI-generated digital avatars take center stage, especially in capturing the attention of young voters. Former candidate and now President Prabowo Subianto used a cute digital avatar created by generative AI across social media platforms, including TikTok, to completely rebrand his public image and win the presidency, distracting from accusations of major human rights abuses against him. Generative AI, including chatbots like ChatGPT, is also a key player in information disorder because the text and images it produces are so realistic and convincing.

Even seemingly harmless content on spam pages like “Shrimp Jesus” can have real-world consequences, such as eroding trust, luring people into scams, and exposing their data to brokers who feed that information back into systems, fueling digital influence. Furthermore, the outputs of generative AI may be highly controlled. “Automated systems have enabled governments to conduct more precise and subtle forms of online censorship,” according to a 2023 Freedom House report. “Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern.”

As has been echoed time and again throughout this series, technology is neither good nor bad; it depends on the purpose for which it is used. “Technology inherits the politics of its authors, but almost all technology can be harnessed in ways that transcend these frameworks.” Such use cases and comparisons can be useful when discussing specific tools and methods, but only at a superficial level; the digital avatars mentioned in this piece are a case in point.

One key example comes from Venezuela, where the media landscape is rife with AI-generated pro-government messages and people working in journalism face threats of imprisonment. In response, journalists have used digital avatars to help protect their identities and maintain privacy. This is, indeed, a story of resilience, but it sits within a larger and more nefarious context of power and punishment. While any individual tool can reveal both benefits and drawbacks in its use cases, zooming out reveals power systems and structures that put people at risk, and shows that the trade-offs of technology are simply not symmetrical.

Two truths can exist at the same time, and it is significant that technology is used both to harness strength and to harm and oppress people.

(SY)
