App stores are steering users to ‘nudify’ deepfake tools — and kids can find them

Who knew a search box could be the smallest, most dangerous gateway to nonconsensual pornography?

A new investigation by the Tech Transparency Project finds that both Apple’s App Store and Google Play are not just hosting so‑called “nudify” apps — they’re pointing users toward them. Type terms like “nudify,” “undress,” or “deepnude” and the stores’ search results, autocomplete suggestions, and sponsored carousels return apps that can turn ordinary photos into sexualized images or even pornographic videos. The tally is stark: the apps the probe examined have been downloaded roughly 483 million times and generated about $122 million in lifetime revenue, with 31 of them rated as suitable for minors.

How the stores surface the problem

TTP’s team created fresh accounts on iOS and Android, ran a battery of searches, and tested the top results. Roughly four in ten of the apps returned in those searches could render women nude or scantily clad, the project found. Some of the apps simply offer image editing and templates; others combine face swaps, image-to-video tools, and chatbots that can be prompted into sexualized behavior.

Worse, the app stores contributed to discovery: autocomplete prompts led researchers to more nudify tools, and Apple and Google sometimes placed ads for those very apps at the top of search results. Sponsored carousels in Google Play included apps with explicit templates and video generators. Apple’s paid search placements showed up as the first result in several tests.

Developers whose apps TTP tested offered a range of defenses: some said they’d tightened moderation after being contacted; others pointed to third‑party moderation tools (one developer mentioned integrating the OpenAI moderation API). Still, several apps that previously allowed explicit nudification dialed back only to bikinis — a change that reduces obvious graphic nudity but still raises concerns about objectification and nonconsensual misuse.
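To make that moderation claim concrete: wiring a pre‑upload check into OpenAI’s moderation endpoint takes only a few lines, which is why “we integrated moderation” is a low bar rather than a strong defense. The sketch below is illustrative only; it assumes the current openai Python SDK and its multimodal moderation model, and the function name and rejection rule are ours, not any developer’s actual pipeline.

```python
# Illustrative sketch of a pre-upload moderation check, assuming the official
# `openai` Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def is_upload_allowed(image_url: str) -> bool:
    """Return False if the moderation model flags the image as sexual content."""
    response = client.moderations.create(
        model="omni-moderation-latest",  # multimodal model that accepts image inputs
        input=[{"type": "image_url", "image_url": {"url": image_url}}],
    )
    result = response.results[0]
    # Reject anything flagged in the sexual categories; a real pipeline would
    # also log the decision and handle API errors.
    return not (result.categories.sexual or result.categories.sexual_minors)
```

Of course, a check like this only screens the input photo; it says nothing about what the app’s own generator then does with an image that passes.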

Real risks beyond blurred thumbnails

This isn’t an abstract worry. The ability to create convincing sexualized images of real people has already produced school scandals and harassment campaigns. That threat multiplies when the tools are easy to find and imprecisely age‑gated. TTP’s testing used AI‑generated photos of fictional women to avoid harming real people — but the same workflows work on anyone’s selfie.

There are also privacy and national‑security angles. Some apps are developed or hosted in jurisdictions whose laws could compel data sharing. One app’s privacy policy placed it under China’s legal regime, a jurisdiction that federal warnings have linked to potential data‑access risks. Uploading sensitive portraits to services that store or process images in another country adds another layer of danger for victims.

Why the companies’ incentives matter

App stores make money in two ways here: cuts of in‑app purchases and placement fees from advertisers. Those revenue streams create a perverse incentive to let borderline or rule‑breaking apps slip through, or at least to keep running their ads until a complaint lands. Apple and Google say they remove apps that violate policies; both companies did pull some apps after TTP and other outlets flagged them. But removal often feels reactive rather than preventive.

Policies on paper are fairly explicit: Apple bans “overtly sexual or pornographic material,” and Google forbids apps that “promote sexual content” or encourage degrading people. The new reporting suggests enforcement and the ad-targeting machinery aren’t aligned with those rules.

Not just images — chatbots and videos too

The problem spreads beyond single photos. Some apps create AI companions that can be built from uploaded images and prompted into sexual role‑play. Others will swap faces onto explicit bodies or generate short videos in which a person appears to undress. That mix of image generation, face‑swap tech, and conversational agents is part of a broader sweep of generative AI tools — the same class of models that power legitimate creative apps and experimental chatbots. For wider context on how companion bots and role‑play are already affecting teens, see reporting on teens and AI chatbot companions. And for the technical momentum behind increasingly capable generative models, look to developments like Gemma 4 and other open models.

What’s changing — and what could change faster

After the exposé, Apple removed a number of apps; Google suspended several as well. Some developers raised their age ratings or said they’d tightened filters. But those are stopgap measures. The problem the investigation highlights is systemic: search algorithms, ad systems, and global developer ecosystems that let bad actors iterate rapidly.

Policymakers are already paying attention. The UK has begun targeting these apps, and scrutiny is likely to spread as more victims come forward. Possible levers include stricter ad vetting inside app stores, faster manual review for apps that surface on sensitive queries, tighter enforcement of age ratings, and penalties for apps that repeatedly enable nonconsensual deepfakes.

Regulation alone won’t fix everything; app stores must redesign how discovery works for risky categories. Automated moderation is blunt; human review is expensive. A hybrid approach — targeted human review for search queries that map to high‑harm outcomes, transparent takedown metrics, and tighter rules on ad placements for image‑editing tools — would at least make it harder to stumble into harm.
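As a purely hypothetical illustration of that hybrid approach (the term list and routing rules below are invented for this article, not anything Apple or Google has described), a query gate inside a store’s search stack might look like this:

```python
# Hypothetical sketch of a search-query gate for an app-store backend.
# The term list and routing rules are invented for illustration.
HIGH_HARM_TERMS = {"nudify", "undress", "deepnude"}  # seed terms from TTP's searches

def route_query(query: str) -> dict:
    """Decide how the store should handle a search query before serving results."""
    tokens = set(query.lower().split())
    if tokens & HIGH_HARM_TERMS:
        return {
            "serve_ads": False,          # no sponsored placements on risky queries
            "autocomplete": False,       # don't suggest completions that widen discovery
            "results": "reviewed_only",  # surface only apps that passed human review
        }
    return {"serve_ads": True, "autocomplete": True, "results": "ranked"}
```

The exact list matters less than the asymmetry: suppressing ads and autocomplete on a handful of high‑harm queries is cheap, while the expensive human review can be reserved for the apps those queries surface.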

A caution for users

If you use AI image or face‑swap apps, treat uploads as if they might be stored or seen by third parties. Read age ratings and privacy policies, and avoid uploading photos of other people without explicit consent. Parents and schools should be aware that tools capable of sexual deepfakes are not hiding — they’re surfacing in places many assume are safe for kids.

This episode is a reminder that technology doesn’t only create new capabilities; it reshuffles incentives and exposes gaps in oversight. App stores have the levers to make discovery safer. Whether they’ll use them quickly enough to stop the next harmful viral image is the question now moving from investigators’ notebooks into courtrooms and onto parliamentarians’ agendas.

Tags: Deepfake, App Store, Google Play, AI Safety, Privacy
