Wednesday March 20, 2019

Facebook Rolls Out New Tool that Lets Journalists Examine Political Ads

It also shows demographics of people reached, including age, gender and location

New Facebook tool lets journalists scrutinise political ads. Pixabay

With midterm elections in the US and general elections in several other countries knocking at the door, Facebook has rolled out a new tool that makes it easier for researchers and journalists to scrutinise Facebook ads related to politics or issues of national importance.

“We’re making advertising more transparent to help prevent abuse on Facebook, especially during elections,” Rob Leathern, Facebook’s Director of Product Management, said in a statement on Wednesday.

Facebook said its new tool, the Ad Archive API, would initially be available to a group of publishers, academics and researchers in the US before being opened up more broadly.


“Input from this group will also form the basis of an Archive report that will be available starting in September,” Leathern said.

The API offers each ad’s creative, start and end dates, and performance data, including total spend and impressions.


It also shows demographics of people reached, including age, gender and location.
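For researchers who gain access, the sketch below shows roughly what a query for these fields might look like in Python. It assumes a Graph-API-style HTTPS endpoint named ads_archive and illustrative field names (such as ad_delivery_start_time and demographic_distribution); the actual endpoint, parameters and field names are assumptions here and should be taken from Facebook’s own API documentation.

```python
# Illustrative sketch only: the endpoint, parameter and field names below are
# assumptions, not confirmed details of Facebook's Ad Archive API. Access
# requires a token granted through Facebook's approval process.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; obtained after approval
BASE_URL = "https://graph.facebook.com/v3.2/ads_archive"  # assumed endpoint

params = {
    "search_terms": "election",        # free-text search over ad creative
    "ad_reached_countries": "US",      # matches the US-only initial rollout
    "fields": ",".join([
        "ad_creative_body",            # ad creative text
        "ad_delivery_start_time",      # start date
        "ad_delivery_stop_time",       # end date
        "spend",                       # total spend
        "impressions",                 # impressions
        "demographic_distribution",    # age/gender of people reached
        "region_distribution",         # location of people reached
    ]),
    "access_token": ACCESS_TOKEN,
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()
for ad in response.json().get("data", []):
    print(ad.get("ad_delivery_start_time"), ad.get("spend"), ad.get("impressions"))
```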

“We’re greatly encouraged by trends and insights that watchdog groups, publishers and academics have unearthed since the archive launched in May. We believe this deeper analysis will increase accountability for both Facebook and advertisers,” Leathern said. (IANS)


4,000 Viewed NZ Mosques Shootings Live, Claims Facebook

Facebook said it removed the original video and hashed it to detect other shares visually similar to that video and automatically remove them from Facebook and Instagram

Facebook, Messenger and Instagram apps are displayed on an iPhone, March 13, 2019, in New York. Facebook said it is aware of outages on its platforms including Facebook, Messenger and Instagram. VOA

Facing flak over its failure to spot and remove the livestream of the New Zealand mosque shootings, Facebook on Tuesday said about 4,000 people viewed the video before it was taken down.

“The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast,” Chris Sonderby, VP and Deputy General Counsel, said in a blog post. “Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook,” Sonderby added.

With a GoPro camera strapped to his head, the gunman broadcast graphic footage of the shooting via Facebook Live for nearly 17 minutes. The video was later shared millions of times on other social media platforms.

Fifty people were killed in the shootings at the Al Noor Mosque and the Linwood Avenue Masjid in Christchurch on March 15 after 28-year-old Australian national Brenton Tarrant opened indiscriminate fire.

According to Facebook, the first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended. “Before we were alerted to the video, a user on ‘8chan’ posted a link to a copy of the video on a file-sharing site,” said Sonderby.

This photograph taken on May 16, 2018, shows a figurine standing in front of the logo of social network Facebook on a cracked screen of a smartphone in Paris. VOA

“We removed the personal accounts of the named suspect from Facebook and Instagram, and are identifying and removing any imposter accounts that surface,” he said.

Facebook said it removed the original video and hashed it to detect other shares visually similar to that video and automatically remove them from Facebook and Instagram.
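As a rough illustration of the general idea behind such matching (not Facebook’s actual system, which is not public), a perceptual “average hash” reduces a video frame to a small bit fingerprint that survives re-encoding, so near-duplicate uploads can be flagged by comparing fingerprints. The sketch below uses the Pillow imaging library and a hypothetical bit-distance threshold to demonstrate the principle on single frames.

```python
# Illustrative sketch of perceptual ("average") hashing, the general family of
# techniques used to match visually similar media. This is NOT Facebook's
# implementation; it only demonstrates the principle on individual frames.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the frame, grayscale it, and set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, value in enumerate(pixels):
        if value > mean:
            bits |= 1 << i
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Count differing bits; small distances suggest visually similar frames."""
    return bin(h1 ^ h2).count("1")

# Frames whose hashes differ by only a few bits are likely the same content,
# even after re-encoding, resizing, or mild cropping.
original = average_hash("original_frame.jpg")      # placeholder file names
candidate = average_hash("reuploaded_frame.jpg")
if hamming_distance(original, candidate) <= 10:    # threshold is an assumption
    print("Likely a visually similar copy")
```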


“Some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems, including the use of audio technology,” Sonderby said.

“In the first 24 hours, we removed about 1.5 million videos of the attack. More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services,” he said. (IANS)