A new study conducted by social network analysis company Graphika found that apps and websites that use artificial intelligence (AI) to digitally undress women in photos are surging in popularity.
According to the report by
Graphika, an alarming 24 million people have been visiting undressing websites and using the so-called "nudify" apps. The study found that these apps operate on a "freemium model," offering basic services for free while requiring payment for enhanced features. Prices for additional "credits" or "tokens" range from $1.99 to $299. (Related:
Ads for AI GIRLFRIENDS flooding social media platforms.)
Links advertising undressing apps on popular social networks such as X (formerly Twitter), Google's YouTube and Reddit have increased by more than 2,400 percent. Additionally, 52 Telegram groups have been identified as channels for accessing non-consensual intimate imagery (NCII) services.
Researchers involved in the study warn that the increasing accessibility of these services may lead to the creation and dissemination of non-consensual nude images, targeted harassment campaigns, sextortion and
child sexual abuse material.
Advertisements for these apps openly offer "undressing" services and showcase "undressed" photos as proof. Some ads masquerade as "AI art services" or "web3 photo galleries," concealing their true nature. YouTube tutorial videos on how to use "nudify" apps continue to circulate, with some titles explicitly encouraging the creation of NSFW (Not Safe for Work) AI images.
The unauthorized sharing of explicit content involving public figures has long been a problem on the internet.
"We are seeing more and more of this being done by ordinary people with ordinary targets. You see it among high school children and people who are in college," said Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation.
Experts worry that advances in AI technology will make fake content easier to create and more convincing. Many victims may never learn that these fake images exist. And for those who do, it can be difficult to get law enforcement to act because there is no federal law banning the creation of deepfake pornography.
Undressing apps commonly victimize teenage girls
As the use of AI-powered "nudify" apps continues to surge,
victims are coming forward to share their harrowing experiences. Most of them are minors.
In a recent incident at a New Jersey high school, female students found themselves at the center of a deeply distressing situation when AI-generated nude images circulated throughout the school. A mother and her 14-year-old daughter, deeply affected by the incident, are now advocating for stronger protections against NCII content.
Similar stories are emerging from across the U.S., with a high school in Seattle, Washington, facing a comparable situation earlier this year. Reports suggest that a teenage boy allegedly used AI deepfake apps to create explicit images of female students.
Meanwhile, in September, more than 20 girls became victims of deepfake photos circulated via the Clothoff app, which allows users to undress women in photos without their consent.
Psychotherapist Lisa Sanfilippo emphasized the profound violation of privacy and the intense trauma experienced by victims. She explained that seeing images of yourself, or images altered to look like you, depicting acts you would find abhorrent, frightening or intended only for your private life can be deeply upsetting and even traumatic.
"There is no ability to give consent there. It's abuse when someone takes something from another person that has not been freely given to them."
Head over to
FutureTech.news for more news about AI.
Watch this clip of the Ai-Da humanoid robot, which incorporates AI,
testifying before the British House of Lords.
This video is from the
Puretrauma357 channel on Brighteon.com.
More related stories:
Elon Musk announces creation of new AI company after spending YEARS criticizing rapid AI development.
Amazon’s new generative AI assistant "Q" has "severe hallucinations" and leaks confidential data.
Google unveils new AI medicine robots: Are HUMAN doctors about to become obsolete?
Stunning: Microsoft’s new AI chatbot says it wants to create deadly virus, steal nuclear launch codes.
What are the risks posed by artificial general intelligence?
Sources include:
Time.com
DailyMail.co.uk