Social networking giant Facebook is developing a camera-equipped set-top box for TVs that would support features such as video calling, a media report said.
Internally codenamed “Ripley”, the device would use Artificial Intelligence (AI) to automatically detect and follow people as they move through the frame during a video call, news website Cheddar reported on Tuesday.
Apart from facilitating video chat, the device could also help Facebook compete with the likes of Apple and Amazon in the TV segment.
In October, the social networking major launched its smart speakers — “Portal” — which incorporate AI technology to follow users’ movements during a video chat and remove unwanted background noise on a call.
Priced at $199, the smart speakers sport a 10-inch display and built-in Amazon Alexa support, come pre-loaded with Facebook’s own “Watch” video service, and would begin shipping in November.
With projects like “Portal” and “Ripley”, Facebook is trying to build a consumer-hardware business outside of its virtual reality brand ‘Oculus’, which it acquired in March 2014 for nearly $2 billion.
Facebook declined to comment on the subject, the report added. (IANS)
Stung by the spread of fake news and privacy violations, Facebook on Monday announced several new tools to protect the 2020 US elections from manipulation by nation-state bad actors and avoid a repeat of the 2016 presidential election, which was hit by Russian interference.
The social networking giant launched “Facebook Protect” to secure the accounts of elected officials, candidates, their staff and others who may be particularly vulnerable to targeting by hackers and foreign adversaries.
“Beginning today, Page admins can enroll their organization’s Facebook and Instagram accounts in ‘Facebook Protect’ and invite members of their organization to participate in the programme as well,” said three top Facebook executives in a lengthy blog post.
Participants will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices.
“If we discover an attack against one account, we can review and protect other accounts affiliated with that same organization that are enrolled in our programme,” said Guy Rosen, VP of Integrity at Facebook.
The company said it has seen people fail to disclose the organization behind their Page in order to make the Page appear independently run.
To address this, Facebook is adding more information about who is behind a Page, including a new “Organizations That Manage This Page” tab that will feature the Page’s “Confirmed Page Owner”, including the organization’s legal name and verified city, phone number or website.
Initially, this information will only appear on Pages with large US audiences that have gone through Facebook’s business verification.
A new US presidential candidate spend tracker will share ad details at the national, state and regional levels.
“We’ll also make it clear if an ad ran on Facebook, Instagram, Messenger, or the Audience Network,” said Facebook.
Next month, Facebook will begin labelling media outlets that are wholly or partially under the editorial control of their government as state-controlled media.
This label will appear both on their Page and in the Facebook Ad Library.
“We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state,” said Katie Harbath, Public Policy Director, Global Elections.
Facebook said it will update the list of state-controlled media on a rolling basis beginning in November.
In early 2020, Facebook plans to expand its labelling to specific posts and apply these labels on Instagram as well.
The company said that over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will start to be more prominently labelled so that people can better decide for themselves what to read, trust and share.
“The labels below will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the assessment from the fact-checker,” said Nathaniel Gleicher, Head of Cybersecurity Policy, and Rob Leathern, Director of Product Management.
Facebook also announced an initial investment of $2 million to support projects that empower people to determine what to read and share – both on Facebook and elsewhere. (IANS)