
Facebook uses Artificial Intelligence (AI) to fight Terrorism

A man is silhouetted against a video screen with a Facebook logo as he poses with a Samsung S4 smartphone in this photo illustration taken in the central Bosnian town of Zenica, Aug. 14, 2013. The company said it is using artificial intelligence to remove terrorism-related posts. (VOA)
  • Facebook said AI has helped identify and remove fake accounts made by repeat offenders
  • The company has been under increasing pressure from governments around the world to do a better job of removing posts made by terrorists

June 18, 2017: Facebook has revealed it is using artificial intelligence in its ongoing fight to prevent terrorist propaganda from being disseminated on its platform.

“We want to find terrorist content immediately, before people in our community have seen it,” read the message posted Thursday. “Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook.”

The company has been under increasing pressure from governments around the world to do a better job of removing posts made by terrorists.

Some of the roles AI plays involve “image matching” to see if an uploaded image matches something previously removed because of its terrorist content.
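Facebook’s post does not describe the matching algorithm itself. A common approach to this kind of image matching is perceptual hashing, where each image is reduced to a compact fingerprint and new uploads are compared against fingerprints of previously removed content. The sketch below, using Pillow and an average-hash scheme, is an illustration of the general technique, not the company’s actual system.

```python
# Minimal sketch of perceptual "image matching" via average hashing.
# Illustrative only; not Facebook's production pipeline.
from PIL import Image  # pip install Pillow

def average_hash(path, hash_size=8):
    """Reduce an image to a 64-bit fingerprint: shrink, grayscale,
    then mark each pixel as above/below the mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(h1, h2):
    """Count of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

def matches_removed_content(upload_path, removed_hashes, threshold=5):
    """Flag an upload whose fingerprint is close (small Hamming distance)
    to the fingerprint of any previously removed image."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, known) <= threshold for known in removed_hashes)
```

A small Hamming-distance threshold lets the check tolerate minor edits such as resizing or recompression while still treating the upload as a re-post of removed material.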

“Language understanding,” the company says, will allow it to “understand text that might be advocating for terrorism.”
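The post gives no detail on the models involved. As a rough illustration, “language understanding” of this kind can be framed as supervised text classification; the scikit-learn sketch below uses hypothetical placeholder examples and a deliberately simple TF-IDF plus logistic-regression model, not Facebook’s actual approach.

```python
# Rough sketch of "language understanding" as supervised text classification.
# The example texts, labels, and model choice are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = policy-violating, 0 = benign).
texts = [
    "join our cause and take up arms",        # violating (placeholder)
    "news report on yesterday's attack",      # benign (placeholder)
    "support the fighters, spread the word",  # violating (placeholder)
    "analysis of extremist propaganda",       # benign (placeholder)
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new text; a threshold would decide whether it is removed
# automatically or routed to human reviewers.
score = model.predict_proba(["new post text to evaluate"])[0][1]
print(score)
```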

AI, Facebook says, is also useful for identifying and removing “terrorist clusters.”

“We know from studies of terrorists that they tend to radicalize and operate in clusters,” according to the blog post. “This offline trend is reflected online as well. So when we identify pages, groups, posts or profiles as supporting terrorism, we also use algorithms to ‘fan out’ to try to identify related material that may also support terrorism.”
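The post describes this “fan out” only at a high level. One way to picture it is a breadth-first expansion over a graph of related accounts, pages and posts, starting from items already identified as violating; the graph and seed data in the sketch below are hypothetical.

```python
# Sketch of "fanning out" from known violating profiles/pages over a graph
# of relationships (shares, group membership, co-admins, etc.).
# The graph structure and seed set are hypothetical illustrations.
from collections import deque

def fan_out(graph, seeds, max_hops=2):
    """Breadth-first expansion: starting from items already identified as
    supporting terrorism, collect related items within max_hops links so
    they can be reviewed together as a cluster."""
    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return visited

# Example: adjacency list of accounts/pages linked by interactions.
graph = {
    "profile_A": ["group_1", "page_X"],
    "group_1":   ["profile_B", "profile_C"],
    "page_X":    ["profile_C"],
}
cluster = fan_out(graph, seeds={"profile_A"})
```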

Facebook said AI has helped identify and remove fake accounts made by “repeat offenders.” It says it has already reduced the time fake accounts are active.

However, the company does not rely completely on AI.

“AI can’t catch everything,” it said. “Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context.

“A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story. Some of the most effective criticisms of brutal groups like ISIS utilize the group’s own propaganda against it. To understand more nuanced cases, we need human expertise.” (VOA)


New Zealand, France Plan in Effort to Stop Promotion of Terrorism, Violent Extremism on Social Media

A lone gunman killed 50 people at two mosques in Christchurch on March 15, while livestreaming the massacre on Facebook

FILE - The Facebook logo is seen on a shop window in Malaga, Spain, June 4, 2018. (VOA)

In the wake of the Christchurch attack, New Zealand said on Wednesday that it would work with France in an effort to stop social media from being used to promote terrorism and violent extremism.

Prime Minister Jacinda Ardern said in a statement that she will co-chair a meeting with French President Emmanuel Macron in Paris on May 15 that will seek to have world leaders and CEOs of tech companies agree to a pledge, called the Christchurch Call, to eliminate terrorist and violent extremist content online.

A lone gunman killed 50 people at two mosques in Christchurch on March 15, while livestreaming the massacre on Facebook.

Brenton Tarrant, 28, a suspected white supremacist, has been charged with 50 counts of murder for the mass shooting.

Students light candles as they gather for a vigil to commemorate victims of Friday’s shooting, outside the Al Noor mosque in Christchurch, New Zealand, March 18, 2019. (VOA)

“It’s critical that technology platforms like Facebook are not perverted as a tool for terrorism, and instead become part of a global solution to countering extremism,” Ardern said in the statement.

“This meeting presents an opportunity for an act of unity between governments and the tech companies,” she added.

The meeting will be held alongside the Tech for Humanity meeting of G7 digital ministers, of which France is the chair, and France’s separate Tech for Good summit, both on May 15, the statement said.

Ardern said at a press conference later on Wednesday that she has spoken with executives from a number of tech firms, including Facebook, Twitter, Microsoft, Google and a few other companies.

“The response I’ve received has been positive. No tech company, just like no government, would like to see violent extremism and terrorism online,” Ardern said at the media briefing, adding that she had also spoken with Facebook’s Mark Zuckerberg directly on the topic.


A Facebook spokesman said the company looks forward to collaborating with government, industry and safety experts on a clear framework of rules.

“We’re evaluating how we can best support this effort and who among top Facebook executives will attend,” the spokesman said in a statement sent by email. Facebook, the world’s largest social network with 2.7 billion users, has faced criticism since the Christchurch attack that it failed to tackle extremism.


One of the main groups representing Muslims in France has said it was suing Facebook and YouTube, a unit of Alphabet’s Google, accusing them of inciting violence by allowing the streaming of the Christchurch massacre on their platforms.

Facebook Chief Operating Officer Sheryl Sandberg said last month that the company was looking to place restrictions on who can go live on its platform based on certain criteria. (VOA)