
Can’t leave 160 million WhatsApp users ‘trapped in a corridor of charity’: Supreme Court


New Delhi, May 16, 2017: Online messaging service WhatsApp on Tuesday sought to assure the Supreme Court that it has never shared the contents of the messages between its users with third parties even as the top court said that it could not leave 160 million users “trapped in a corridor of charity”.

The five-judge constitution bench headed by Justice Dipak Misra indicated that it would examine the 2016 privacy policy of the online messaging app on the reopening of the court after its summer vacations.


Besides Justice Misra, the bench includes Justice A.K. Sikri, Justice Amitava Roy, Justice A.M. Khanwilkar and Justice Mohan M. Shantanagoudar.

The court said it would examine the new privacy policy, which WhatsApp introduced in 2016 after its acquisition by the social networking site Facebook, to decide whether it was contrary to public policy and whether it should be subjected to constitutional controls.

However, this would happen only if the court concluded that judicial interference was required, the bench said in the course of hearing a plea by petitioners Karmanya Singh Sareen and Shreya Sethi. The petitioners have challenged the Delhi High Court’s September 23, 2016 order, which allowed WhatsApp to roll out its new privacy policy but stopped it from sharing users’ data collected up to September 25, 2016, with Facebook or any other related company.


Tuesday was the second day of the hearing; further hearing will take place after the top court reopens following its summer vacations. On Monday, the court had asked WhatsApp why, after its acquisition by Facebook, it changed its policy of not sharing users’ data to permit sharing of the attributes of its users.

Resuming his arguments on the maintainability of the petitions challenging the Delhi High court verdict, senior counsel K.K. Venugopal, appearing for Facebook, said: “We can file an affidavit stating that not a single piece of information has been shared with anybody. Even I cannot access the information if I want to. There is no element of human intervention in the process. Machines take care of this.”


He said that any fundamental right – be it of communication or choice of communication – could only be invoked against the state, not against a private entity like WhatsApp, which was not discharging public functions. He argued that petitioners challenging its 2016 privacy policy would first have to approach the regulatory authority, the Telecom Regulatory Authority of India (TRAI).

He said that regulations framed under the Information Technology Act in 2009 and 2011 covered WhatsApp – a position contested by the petitioners, who contend that these regulations have been outpaced by technological advancements.

Reiterating that it was in no position to go into the contents of the messages exchanged between its users, as they were in encrypted form, senior counsel Siddharth Luthra, appearing for WhatsApp, told the bench that it was not generating metadata and that all it shared was the contact details, profile photo and status of the App’s users.

Lawyer Madhvi Divan, appearing for the petitioners, said that WhatsApp was using a public resource like spectrum and was performing public functions.

Comparing it with telephone services, Divan said that while users pay for telephone services, WhatsApp is free; describing its operation as “economic espionage in the name of a free service”, she urged the bench to look at its business model. (IANS)


4,000 Viewed NZ Mosques Shootings Live, Claims Facebook

Facebook said it removed the original video and hashed it to detect other shares visually similar to that video and automatically remove them from Facebook and Instagram

Facebook, Messenger and Instagram apps are displayed on an iPhone, March 13, 2019, in New York. Facebook said it is aware of outages on its platforms including Facebook, Messenger and Instagram. VOA

Facing flak over its inability to spot and remove the livestream of the New Zealand mosque shootings, Facebook on Tuesday said 4,000 people viewed the video before it was taken down.

“The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast,” Chris Sonderby, VP and Deputy General Counsel, said in a blog post. “Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook,” Sonderby added.

With a GoPro camera strapped to his head, the gunman broadcast graphic footage of the shooting via Facebook Live for nearly 17 minutes. The video was later shared millions of times on other social media platforms.

Fifty people were killed in the shootings at the Al Noor Mosque and the Linwood Avenue Masjid in Christchurch on March 15, after 28-year-old Australian national Brenton Tarrant opened indiscriminate fire.

According to Facebook, the first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended. “Before we were alerted to the video, a user on ‘8chan’ posted a link to a copy of the video on a file-sharing site,” said Sonderby.

This photograph taken on May 16, 2018, shows a figurine standing in front of the logo of social network Facebook on a cracked screen of a smartphone in Paris. VOA

“We removed the personal accounts of the named suspect from Facebook and Instagram, and are identifying and removing any imposter accounts that surface,” he said.

Facebook said it removed the original video and hashed it to detect other shares visually similar to that video and automatically remove them from Facebook and Instagram.
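Facebook has not published the details of its matching system, but hash-based detection of “visually similar” copies is commonly done with perceptual hashes, which change only slightly when a video frame is re-encoded or resized. The sketch below is purely illustrative (the function names, the tiny 2×2 “frames” and the distance threshold are all invented for the example): it computes a simple average hash per frame and treats two frames as a match when their hashes differ in only a few bits.

```python
# Illustrative sketch of perceptual-hash matching; not Facebook's actual system.
# An "average hash" turns a frame into a bit string: 1 where a pixel is at or
# above the frame's mean brightness, 0 where it is below. Near-duplicate frames
# (e.g. re-encoded uploads) yield hashes within a small Hamming distance.

def average_hash(frame):
    """frame: 2D list of grayscale pixel values (0-255)."""
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    return tuple(1 if p >= avg else 0 for p in pixels)

def hamming(h1, h2):
    """Number of bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, threshold=3):
    """Treat frames as visually similar if their hashes are close."""
    return hamming(h1, h2) <= threshold

# Toy 2x2 "frames": a re-upload with slight encoding noise still matches,
# while an unrelated frame does not.
original  = [[10, 200], [30, 220]]
reupload  = [[12, 198], [29, 221]]
unrelated = [[200, 10], [220, 30]]

print(is_match(average_hash(original), average_hash(reupload)))   # True
print(is_match(average_hash(original), average_hash(unrelated)))  # False
```

Real systems hash many frames per video (and, as the next paragraph notes, audio as well) so that clips, crops and screen recordings can still be flagged at upload time.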


“Some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems, including the use of audio technology,” Sonderby said.

“In the first 24 hours, we removed about 1.5 million videos of the attack. More than 1.2 million of those videos were blocked at upload, and were therefore prevented from being seen on our services,” he said. (IANS)