Twitter has told law enforcement app maker Clearview AI to stop downloading images from its platform to build its facial recognition database, saying the practice violates its policies.
The controversial app matches faces against a database of more than three billion images scraped from social media sites including Facebook, YouTube and Twitter.
According to The New York Times, Twitter has sent a letter to the startup, saying it must stop collecting photos and other data from its platform “for any reason” and delete any photo that it previously collected.
The cease-and-desist letter accused Clearview AI of violating Twitter’s policies.
The Clearview AI app is reportedly used by more than 600 law enforcement agencies, including the Federal Bureau of Investigation and the Department of Homeland Security.
According to law enforcement officials, the app has helped them identify suspects in many criminal cases.
The New York-based Clearview AI is not available to the public, and its website offers little information; even the startup's ownership remains unclear.
“Clearview searches the open web. Clearview does not and cannot search any private or protected info, including in your private social media accounts,” says the information on its website.
The website further claims: “Clearview’s technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers. It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud”.
However, privacy advocates have raised concerns about the app’s secrecy.
“As the news of this app spread, women everywhere sighed. Once again, women’s safety both online and in real life has come second place to the desire of tech startups to create – and monetize – ever more invasive technology,” Jo O’Reilly, a privacy advocate with Britain-based non-profit ProPrivacy, told Digital Trends. (IANS)