Monday March 25, 2019

Facebook Set up a War Room to Fight Election Interference

With the new ad architecture in place, people would be able to see who paid for a particular political ad

Facebook releases Messenger redesign on Android, iOS. Pixabay

In line with its efforts to prevent misuse of its platform during elections, Facebook has set up a War Room to reduce the spread of potentially harmful content.

Facebook faced flak for not doing enough to prevent the spread of misinformation by Russia-linked accounts during the 2016 US presidential election. Since then, the social networking giant has rolled out several initiatives to fight fake news and bring more transparency and accountability to its advertising.

The launch of the first War Room at its headquarters in Menlo Park, California, is part of the social network’s new initiatives to fight election interference on its platform.

Although Facebook opened the doors of the War Room ahead of the general elections in Brazil and mid-term elections in the US, it revealed the details only this week.

The goal behind setting up the War Room was to bring the right subject-matter experts from across the company into one place, so they could address potential problems identified by its technology in real time and respond quickly.

Facebook, social media. Pixabay

“The War Room has over two dozen experts from across the company – including from our threat intelligence, data science, software engineering, research, community operations and legal teams,” Samidh Chakrabarti, Facebook’s Director of Product Management, Civic Engagement, said in a statement on Thursday.

“These employees represent and are supported by the more than 20,000 people working on safety and security across Facebook,” Chakrabarti added.

Facebook said its dashboards offer real-time monitoring of key election issues, such as efforts to prevent people from voting, increases in spam, potential foreign interference and reports of content that violates its policies.

The War Room team also monitors news coverage and election-related activity across other social networks and traditional media in order to identify what type of content may go viral.

These preparations helped a lot during the first round of Brazil’s presidential elections, Facebook claimed.

The social networking giant said its technology detected a false post claiming that Brazil’s Election Day had been moved from October 7 to October 8 due to national protests.

Though untrue, the message began to go viral. The team quickly detected the problem, determined that the post violated Facebook’s policies, and removed it in under an hour.

“And within two hours, we’d removed other versions of the same fake news post,” Chakrabarti said.

Facebook App on a smartphone device. (VOA)

The team in the War Room, Facebook said, also helped quickly remove hate speech posts that were designed to whip up violence against people from northeast Brazil after the first round of election results were called.

“The work we are doing in the War Room builds on almost two years of hard work and significant investments, in both people and technology, to improve security on Facebook, including during elections,” Chakrabarti said.

Earlier this month, Facebook said it was planning to set up a task force comprising “hundreds of people” ahead of the 2019 general elections in India.


“With the 2019 elections coming, we are pulling together a group of specialists to work together with political parties,” Richard Allan, Facebook’s Vice President for Global Policy Solutions, told the media in New Delhi.

Facebook has also set a goal of bringing a transparency feature for political ads — now available in the US and Brazil — to India by March next year, Allan said.

With the new ad architecture in place, people would be able to see who paid for a particular political ad. (IANS)


AI Couldn’t Catch NZ Attack Video Streaming: Facebook

Facebook said it was exploring how AI could help it react faster to this kind of content on a live streamed video

This photograph taken on May 16, 2018, shows a figurine standing in front of the logo of social network Facebook on a cracked screen of a smartphone in Paris. VOA

Facing flak for its failure to block the live broadcast of the New Zealand terrorist attack last week, Facebook on Thursday said that its Artificial Intelligence (AI) tools were not “perfect” and had failed to detect the horrific video.

Vowing to improve its technology, the social networking giant, however, ruled out adding a time delay to Facebook Live, similar to the broadcast delay sometimes used by TV stations.

“There are millions of Live broadcasts daily, which means a delay would not help address the problem due to the sheer number of videos,” Guy Rosen, Facebook’s Vice President of Integrity, said in a statement.

“More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground,” Rosen added.

With a GoPro camera strapped to his head, the gunman broadcast graphic footage of the New Zealand shooting via Facebook Live for 17 minutes; the video was later shared millions of times on other social media platforms, including Twitter and YouTube.

Fifty people were killed and dozens injured in the shootings at the Al Noor Mosque and the Linwood Avenue Masjid in Christchurch on March 15, after 28-year-old Australian Brenton Tarrant opened fire indiscriminately.

This photo shows a Facebook app icon on a smartphone in New York. VOA

The circulation of the video on social media platforms attracted widespread criticism from different quarters.

In a letter to the CEOs of Facebook, Twitter, YouTube and Microsoft, House Homeland Security Committee Chairman Bennie Thompson asked the technology companies to brief the US Congress on March 27 regarding their response to the dissemination of the video on their platforms.

Thompson also warned the technology companies that unless they did better at removing violent content, Congress could consider policies to bar such content on social media.


Facebook on Thursday said it was exploring how AI could help it react faster to this kind of content on a live streamed video.

“AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it’s not perfect.

“However, this particular video did not trigger our automatic detection systems,” Rosen said, referring to the New Zealand attack video. (IANS)