Facebook says it has taken down four pages belonging to conspiracy theorist Alex Jones for violating its hate speech and bullying policies.
The social media giant said in a statement Monday that it also blocked Jones’ account for 30 days because he repeatedly posted content that broke its rules.
The company said it “unpublished” the four pages after receiving reports that they contained content “glorifying violence” and used “dehumanizing language” to describe Muslims, immigrants and transgender people.
Facebook is the latest tech company to take action against Jones, who has been facing a growing backlash on social media.
Twitter users are blocking the accounts of companies such as Pepsi, Nike and Uber to pressure the social media firm to permanently ban Jones for what they say are his abusive tweets and offensive speech.
Meanwhile, Twitter reportedly is facing a shutdown in Pakistan because of a government request to block what it deems objectionable content.
The moves come as U.S. internet companies take a harder look at policies that have promoted free expression around the world. The companies have maintained a mostly hands-off approach to curtailing speech, with exceptions for incitement to violence and pornography. But that largely permissive approach is getting a new look.
Twitter and Alex Jones
Twitter recently imposed a seven-day ban on Jones, a conservative American radio host, for violating its policy on abusive speech after he appeared to call for violence against the media, something he denies.
On his show this week, Jones noted that Twitter had removed his videos.
“They took me down,” he said. “Because they will not let me have a voice.”
Earlier this month, Apple, Spotify, Facebook, YouTube and other companies removed or restricted content from Jones and his InfoWars media company on their platforms. But InfoWars’ live-streaming app can still be found in Google and Apple’s app stores. The on-air personality has put forth conspiracy theories calling some U.S. mass shootings hoaxes.
No more hands off
Internet firms are moving away from their long-held position that they should not monitor expression on their sites too closely, said Irina Raicu, director of the Internet Ethics Program at Santa Clara University.
“The companies are stuck in the middle and no longer trying to avoid responsibility in the way that I think they were even a few years ago, when they were saying, ‘We are just neutral platforms,’” Raicu said. “They are increasingly taking a more open role in determining what content moderation looks like.”
The U.S. is not the only place where internet companies are having to make hard decisions about speech. The firms are also grappling with extreme speech in other languages.
Comments on Facebook have been linked to violence in places like Myanmar and India. A recent article by the Reuters news agency reported that negative messages about Myanmar’s Rohingya minority group were spread throughout the site.
Some are calling on social media companies to do more to find and take down hate messages before they lead to violence.
“If Facebook is bent on removing abusive words and nudity, they should be focused on removing these words as well,” said Abhinay Korukonda, a student from Mumbai, India, who is studying at the University of California, Berkeley. “This comes under special kinds of abusive terms. They should take an action. They should definitely remove these.”
Ming Hsu studies decision-making at UC Berkeley’s Haas School of Business. He is researching how to come up with objective standards for determining whether certain speech could lead to real-world dangers against people both in the U.S. and across the globe.
“We don’t have actionable standards for policymakers or for companies or even lay people to say, ‘This is crossing the boundaries, this is way past the boundaries and this is sort of OK,’” Hsu said.
Those judgment calls become even harder when the speech involves other languages and cultures, he added.
“We don’t really have any intuition for who’s right, who is wrong and who is being discriminated against,” Hsu said. “And that gets back to relying on common sense and how fragile that is.”
Tech companies are known for constantly tweaking their products and software. Now it seems they are taking the same approach with speech as they draw the line between free expression and reducing harm. (VOA)