Facebook says that anyone who takes out a British political ad on the social media platform will now be forced to reveal their identity, in a bid to increase transparency and curb misinformation.
The company said Tuesday that it will also require disclaimers for any British political advertisements. All the data on the ad buyers will be archived for seven years in a publicly accessible database.
Facebook is already applying a similar system in the United States, which is holding midterm elections this year.
British lawmakers have called for greater oversight of social media companies and election campaigns to protect democracy in the digital age.
A House of Commons report this year said democracy is facing a crisis because data analysis and social media allow campaigns to target voters with messages of hate, using personal data gathered without their consent.
“While the vast majority of ads on Facebook are run by legitimate organizations, we know that there are bad actors that try to misuse our platform,” Facebook said in a statement. “By having people verify who they are, we believe it will help prevent abuse.”
Facebook said it’s up against “smart and well-funded adversaries who change their tactics as we spot abuse,” but it believes that increased transparency is good for democracy and the electoral process. (VOA)
Stung by the spread of fake news and privacy violations, Facebook on Monday announced several new tools to protect the 2020 US elections from manipulation by nation-state bad actors and to avoid a repeat of the 2016 presidential election, which was hit by Russian interference.
The social networking giant launched “Facebook Protect” to secure the accounts of elected officials, candidates, their staff and others who may be particularly vulnerable to targeting by hackers and foreign adversaries.
“Beginning today, Page admins can enroll their organization’s Facebook and Instagram accounts in ‘Facebook Protect’ and invite members of their organization to participate in the programme as well,” said three top Facebook executives in a lengthy blog post.
Participants will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices.
“If we discover an attack against one account, we can review and protect other accounts affiliated with that same organization that are enrolled in our programme,” said Guy Rosen, VP of Integrity at Facebook.
The company said it has seen people failing to disclose the organization behind their Page as a way to make people think that a Page is run independently.
To address this, Facebook is adding more information about who is behind a Page, including a new “Organizations That Manage This Page” tab that will feature the Page’s “Confirmed Page Owner”, including the organization’s legal name and verified city, phone number or website.
Initially, this information will only appear on Pages with large US audiences that have gone through Facebook’s business verification.
A new US presidential candidate spend tracker will break down ad spending at the national, state and regional levels.
“We’ll also make it clear if an ad ran on Facebook, Instagram, Messenger, or the Audience Network,” said Facebook.
Next month, Facebook will begin labelling media outlets that are wholly or partially under the editorial control of their government as state-controlled media.
The label will appear both on these outlets’ Pages and in the Facebook Ad Library.
“We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state,” said Katie Harbath, Public Policy Director, Global Elections.
Facebook said it will update the list of state-controlled media on a rolling basis beginning in November.
In early 2020, Facebook plans to expand this labelling to specific posts and apply the labels on Instagram as well.
The company said that over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will start to be more prominently labeled so that people can better decide for themselves what to read, trust and share.
“The labels will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the assessment from the fact-checker,” said Nathaniel Gleicher, Head of Cybersecurity Policy, and Rob Leathern, Director of Product Management.
Facebook also announced an initial investment of $2 million to support projects that empower people to determine what to read and share – both on Facebook and elsewhere. (IANS)