AI firm nabs 30 billion online photos and sends them to law enforcement to prosecute Americans
As much of a hit as personal privacy has already taken in the Internet age, whatever shred of it was left has now been completely ripped away thanks to artificial intelligence.
The CEO of Clearview AI, a widely criticized tech firm infamous for invading people's privacy, has admitted that the company has scraped approximately 30 billion photos from social media sites. Clearview AI has curated and compiled these photos,
providing them to surveillance state authorities to use as they see fit, all in secrecy and without any oversight.
This aligns perfectly, by the way, with the warnings from the Founders about the unchecked power of authorities.
Clearview AI has collaborated with law enforcement agencies, providing information to help identify and prosecute individuals involved in the January 6 insurrection who are currently being pursued by the FBI,
PJ Media reported earlier in the week.
Clearview AI promotes its ability to identify individuals involved in the January 6 attack on the Capitol, assist in preventing child abuse or exploitation, and exonerate those who have been wrongly accused of crimes. However, critics have raised concerns about privacy violations and wrongful arrests caused by inaccurate identifications made through facial recognition technology. Examples of such incidents include cases in Detroit and New Orleans,
Business Insider noted further.
Clearview AI's CEO, Hoan Ton-That, admitted
in an interview with the BBC last month that the company obtained photos without users' knowledge. The approach enabled the company to swiftly expand its extensive database, which is marketed on its website as a tool for law enforcement to "bring justice to victims."
Ton-That revealed that US law enforcement has accessed Clearview AI's facial recognition database nearly one million times since the company was founded in 2017. However, the exact nature of the relationships between law enforcement agencies and Clearview AI is unclear, and the number cited could not be independently verified by
Insider.
In a statement emailed to
Insider, Ton-That said: "Clearview AI's database of publicly available images is lawfully collected, just like any other search engine like Google."
He added: "Clearview AI's database is used for after-the-crime investigations by law enforcement, and is not available to the general public. Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person."
The invasive nature of facial recognition technology has generated significant criticism from both privacy advocates and digital platforms. In 2020, several major social media companies, including Facebook, issued cease-and-desist letters to Clearview AI for violating their users' privacy,
Insider noted further.
"Clearview AI's actions invade people's privacy which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services," a spokesperson for Meta, which owns Facebook, told the outlet.
Since that time, the spokesperson further noted, Meta has "made significant investments in technology" and devotes "substantial team resources to combating unauthorized scraping on Facebook products."
When Facebook engineers detect scraping, the company may take action "such as sending cease and desist letters, disabling accounts, filing lawsuits, or requesting assistance from hosting providers" to protect users' data, the spokesperson added.
Despite such anti-scraping policies, once Clearview AI has obtained a photo, the company creates a biometric face print of the individual and matches it within its database, permanently linking that person to their social media profiles and other identifying information. Unfortunately, people captured in these photos have limited options for removing themselves from the database, notes
Insider.
"Clearview is a total affront to peoples' rights, full stop, and police should not be able to use this tool," said Caitlin Seeley George, director of campaigns and operations for Fight for the Future, a digital rights nonprofit group. She added that "without laws stopping them,
police often use Clearview without their department's knowledge or consent, so Clearview boasting about how many searches is the only form of 'transparency' we get into just how widespread use of facial recognition is," according to
Insider.
Sources include:
BusinessInsider.com