Tuesday October 23, 2018

Facebook May Now Ban Bad Businesses From Advertising

New Facebook tool to ban ads if users find them bad

Facebook unveils AI-powered video chat speakers amid privacy concerns. Pixabay

Facebook has launched a new tool that lets users flag ads that contain inaccurate information or misrepresent products.

The tool is designed to let people review businesses that they’ve made a purchase from, Facebook said in a blog post on Wednesday.

“We spoke with people who have purchased things from Facebook advertisers, and the two biggest frustrations we heard were that people don’t like ads that quote inaccurate shipping times or misrepresent products,” the company said.

To find the tool, go to the “Ads Activity” tab, where you can view ads you’ve recently clicked, and hit the “Leave Feedback” button.

Facebook mobile app, Pixabay

“This will prompt you to complete a brief questionnaire to tell us about your experience. We’ll use this tool to get feedback from the community to help better understand potentially low-quality goods or services,” Facebook said.

Facebook will then warn businesses that receive high volumes of negative feedback and give them a chance to improve before taking further action.


“If feedback does not improve over time, we will reduce the number of ads that particular business can run. This can continue to the point of banning the advertiser,” the social media giant added. (IANS)

Copyright 2018 NewsGram


Facebook Set up a War Room to Fight Election Interference

With the new ad architecture in place, people would be able to see who paid for a particular political ad

Facebook now has a War Room to fight election interference. Pixabay

In line with its efforts to prevent misuse of its platform during elections, Facebook has set up a War Room to reduce the spread of potentially harmful content.

Facebook faced flak for not doing enough to prevent the spread of misinformation by Russia-linked accounts during the 2016 US presidential election. The social networking giant has since rolled out several initiatives to fight fake news and bring more transparency and accountability to its advertising.

The launch of the first War Room at its headquarters in Menlo Park, California, is part of the social network’s new initiatives to fight election interference on its platform.

Although Facebook opened the doors of the War Room ahead of the general elections in Brazil and mid-term elections in the US, it revealed the details only this week.

The goal behind setting up the War Room was to get the right subject-matter experts from across the company in one place so they can address potential problems identified by its technology in real time and respond quickly.

Facebook, social media. Pixabay

“The War Room has over two dozen experts from across the company – including from our threat intelligence, data science, software engineering, research, community operations and legal teams,” Samidh Chakrabarti, Facebook’s Director of Product Management, Civic Engagement, said in a statement on Thursday.

“These employees represent and are supported by the more than 20,000 people working on safety and security across Facebook,” Chakrabarti added.

Facebook said its dashboards offer real-time monitoring of key election issues, such as efforts to prevent people from voting, increases in spam, potential foreign interference, and reports of content that violates its policies.

The War Room team also monitors news coverage and election-related activity across other social networks and traditional media in order to identify what type of content may go viral.

These preparations helped a lot during the first round of Brazil’s presidential elections, Facebook claimed.

The social networking giant said its technology detected a false post claiming that Brazil’s Election Day had been moved from October 7 to October 8 due to national protests.

Though untrue, the message began to go viral. The team quickly detected the problem, determined that the post violated Facebook’s policies, and removed it in under an hour.

“And within two hours, we’d removed other versions of the same fake news post,” Chakrabarti said.

Facebook App on a smartphone device. (VOA)

The team in the War Room, Facebook said, also helped quickly remove hate speech posts that were designed to whip up violence against people from northeast Brazil after the first round of election results were called.

“The work we are doing in the War Room builds on almost two years of hard work and significant investments, in both people and technology, to improve security on Facebook, including during elections,” Chakrabarti said.

Earlier this month Facebook said that it was planning to set up a task force comprising “hundreds of people” ahead of the 2019 general elections in India.


“With the 2019 elections coming, we are pulling together a group of specialists to work together with political parties,” Richard Allan, Facebook’s Vice President for Global Policy Solutions, told the media in New Delhi.

Facebook has also set a goal of bringing a transparency feature for political ads — now available in the US and Brazil — to India by March next year, Allan said.

With the new ad architecture in place, people would be able to see who paid for a particular political ad. (IANS)
