Facebook said Tuesday it had been unable to determine who was behind dozens of fake accounts it took down shortly before the 2018 U.S. midterm elections.
“Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior,” Nathaniel Gleicher, head of cybersecurity policy, wrote on the company’s blog.
At least one of the Instagram accounts had well over a million followers, according to Facebook.
A website that said it represented the Russian state-sponsored Internet Research Agency claimed responsibility for the accounts last week, but Facebook said it did not have enough information to link the accounts to the agency, which has been called a troll farm.
“As multiple independent experts have pointed out, trolls have an incentive to claim that their activities are more widespread and influential than may be the case,” Gleicher wrote.
Sample images provided by Facebook showed posts on a wide range of issues. Some advocated on behalf of social issues such as women’s rights and LGBT pride, while others appeared to be conservative users voicing support for President Donald Trump.
The viewpoints on display potentially fall in line with a Russian tactic identified in other cases of falsified accounts. A recent analysis of millions of tweets by the Atlantic Council found that Russian trolls often pose as members on either side of contentious issues in order to maximize division in the United States. (VOA)
In the wake of the Christchurch attack, New Zealand said on Wednesday that it would work with France in an effort to stop social media from being used to promote terrorism and violent extremism.
Prime Minister Jacinda Ardern said in a statement that she will co-chair a meeting with French President Emmanuel Macron in Paris on May 15 that will seek to have world leaders and CEOs of tech companies agree to a pledge, called the Christchurch Call, to eliminate terrorist and violent extremist content online.
A lone gunman killed 50 people at two mosques in Christchurch on March 15, while livestreaming the massacre on Facebook.
Brenton Tarrant, 28, a suspected white supremacist, has been charged with 50 counts of murder for the mass shooting.
“It’s critical that technology platforms like Facebook are not perverted as a tool for terrorism, and instead become part of a global solution to countering extremism,” Ardern said in the statement.
“This meeting presents an opportunity for an act of unity between governments and the tech companies,” she added.
The meeting will be held alongside the Tech for Humanity meeting of G7 digital ministers, of which France is the chair, and France’s separate Tech for Good summit, both on May 15, the statement said.
Ardern said at a press conference later on Wednesday that she has spoken with executives from a number of tech firms, including Facebook, Twitter, Microsoft, Google and a few other companies.
“The response I’ve received has been positive. No tech company, just like no government, would like to see violent extremism and terrorism online,” Ardern said at the media briefing, adding that she had also spoken with Facebook’s Mark Zuckerberg directly on the topic.
A Facebook spokesman said the company looks forward to collaborating with government, industry and safety experts on a clear framework of rules.
“We’re evaluating how we can best support this effort and who among top Facebook executives will attend,” the spokesman said in a statement sent by email. Facebook, the world’s largest social network with 2.7 billion users, has faced criticism since the Christchurch attack that it failed to tackle extremism.
One of the main groups representing Muslims in France has said it was suing Facebook and YouTube, a unit of Alphabet’s Google, accusing them of inciting violence by allowing the streaming of the Christchurch massacre on their platforms.
Facebook Chief Operating Officer Sheryl Sandberg said last month that the company was looking to place restrictions on who can go live on its platform based on certain criteria. (VOA)