San Francisco, October 17, 2017: Facebook has acquired ‘tbh’, an anonymous polling app for teenagers which has over 5 million downloads and 2.5 million daily active users in the US.
The app lets teenagers anonymously answer kind-hearted, multiple-choice questions about friends, who then receive the poll results as compliments, TechCrunch reported on Tuesday.
“When we set out to build tbh, we wanted to create a community that made us feel happier and more confident about ourselves. We felt that people craved genuine and positive interactions in their online experiences,” ‘tbh’ said in a statement.
“Over the last few weeks, over 5 million people have downloaded tbh and sent over a billion messages. More importantly, we’ve been inspired by the countless stories where tbh helped people recover from depression and form better relationships with friends,” it read.
Financial terms of the deal weren’t disclosed but, according to TechCrunch, it is likely under $100 million and will not require regulatory approval.
“As part of the deal, tbh’s four co-creators — Bier, Erik Hazzard, Kyle Zaragoza and Nicolas Ducdodon — will join Facebook’s Menlo Park headquarters while continuing to grow their app,” the report added.
“When we met with Facebook, we realised that we shared many of the same core values about connecting people through positive interactions. Most of all, we were compelled by the ways they could help us realise tbh’s vision and bring it to more people,” ‘tbh’ said.
In a statement to TechCrunch, Facebook said: “tbh and Facebook share a common goal — of building community and enabling people to share in ways that bring us closer together”. (IANS)
To stop false news from spreading on its platform, Facebook has said it put in place a three-pronged strategy that consists of removing accounts and content that violate its policies, reducing the distribution of inauthentic content, and informing people by giving them more context on the posts they see.
Another part of its strategy in some countries is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts on Facebook, Tessa Lyons, a Facebook product manager on News Feed focused on false news, said in a statement on Thursday.
The social media giant is facing criticism for its role in enabling political manipulation in several countries around the world. It has also come under the scanner for allegedly fuelling ethnic conflicts owing to its failure to stop the deluge of hate-filled posts against the disenfranchised Rohingya Muslim minority in Myanmar.
“False news is bad for people and bad for Facebook. We’re making significant investments to stop it from spreading and to promote high-quality journalism and news literacy,” Lyons said.
Facebook CEO Mark Zuckerberg on Tuesday told the European Parliament leaders that the social networking giant is trying to plug loopholes across its services, including curbing fake news and political interference on its platform, ahead of upcoming elections globally, including in India.
Lyons said Facebook’s three-pronged strategy roots out the bad actors that frequently spread fake stories.
“It dramatically decreases the reach of those stories. And it helps people stay informed without stifling public discourse,” Lyons added.
Although false news does not violate Facebook’s Community Standards, it often violates the social network’s policies in other categories, such as spam, hate speech or fake accounts, which it removes.
“For example, if we find a Facebook Page pretending to be run by Americans that’s actually operating out of Macedonia, that violates our requirement that people use their real identities and not impersonate others. So we’ll take down that whole Page, immediately eliminating any posts they made that might have been false,” Lyons explained.
Apart from this, Facebook is also using machine learning to help its teams detect fraud and enforce its policies against spam.
“We now block millions of fake accounts every day when they try to register,” Lyons added.
A lot of the misinformation that spreads on Facebook is financially motivated, much like email spam in the 90s, the social network said.
If spammers can get enough people to click on fake stories and visit their sites, they will make money off the ads they show.
“We’re figuring out spammers’ common tactics and reducing the distribution of those kinds of stories in News Feed. We’ve started penalizing clickbait, links shared more frequently by spammers, and links to low-quality web pages, also known as ‘ad farms’,” Lyons said.
“We also take action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution,” Lyons said.
Facebook said it does not want to make money off of misinformation or help those who create it profit, and so such publishers are not allowed to run ads or use its monetisation features like Instant Articles. (IANS)