Clearview AI Scraped 30 Billion Images From Facebook to Build Its Facial Recognition Database, Then Gave the Photos to Cops. Police Have Used the Database Nearly a Million Times

US police have used the database nearly a million times, the company’s CEO told the BBC.

One digital rights advocate told Insider the company is "a total affront to people's rights, full stop."

From [HERE] A controversial facial recognition database, used by police departments across the nation, was built in part with 30 billion photos that the company scraped from Facebook and other social media platforms without users' permission, the company's CEO recently admitted, creating what critics called a "perpetual police line-up," even for people who haven't done anything wrong.

The company, Clearview AI, boasts of its potential for identifying rioters at the January 6 attack on the Capitol, saving children being abused or exploited, and helping exonerate people wrongfully accused of crimes. But critics point to wrongful arrests fueled by faulty identifications made by facial recognition, including cases in Detroit and New Orleans.

Clearview took photos without users' knowledge, its CEO Hoan Ton-That acknowledged in an interview last month with the BBC. Doing so allowed for the rapid expansion of the company's massive database, which is marketed on its website to law enforcement as a tool "to bring justice to victims."

Ton-That told the BBC that Clearview AI's facial recognition database has been accessed by US police nearly a million times since the company's founding in 2017, though the relationships between law enforcement and Clearview AI remain murky, and that number could not be confirmed by Insider.