
Minnesota passes ban on fake AI nudes; app makers risk $500K fines
```json { "title": "Minnesota Bans Nudification Apps, Fines Up to $500K", "metaDescription": "Minnesota becomes the first U.S. state to ban AI nudification apps, passing HF1606 with near-unanimous votes and fines up to $500,000 per violation.", "content": "<h2>Minnesota Becomes First State to Ban AI Nudification Apps</h2><p>Minnesota has passed the country's first statewide ban on AI-powered nudification applications, legislation that could expose app makers to civil penalties of up to $500,000 per violation and marks the most significant state-level action yet against nonconsensual AI-generated intimate imagery. The bill, House File 1606 — titled <em>Nudification technology access prohibited</em> — cleared the Minnesota House 132-1 and the Senate 65-0 before heading to Governor Tim Walz's desk. If signed, the law takes effect August 1, 2026.</p><p>The legislation arrives amid a deepening national and international reckoning over AI tools that allow users to digitally undress real people with no technical skill required — a reckoning that accelerated sharply after a high-profile scandal involving xAI's Grok chatbot in late 2025 and early 2026.</p><h2>What HF1606 Actually Does</h2><p>Authored by Rep. Jessica Hanson, HF1606 defines nudification as altering or generating a photo or video of an identifiable person to depict an intimate part not shown in the original image, where the result is realistic enough that a reasonable person would believe the intimate part belongs to that person. The bill prohibits access to, download of, or use of nudification technology — with a narrow exception for websites, apps, or software that require the substantial application of technological or artistic skill by a human creator who is directing and controlling the output.</p><p>The state attorney general is empowered to impose civil penalties of up to $500,000 per violation against companies that own, control, advertise, or provide access to nudification technology. 
Penalty proceeds would be routed into grants for organizations providing direct services and advocacy for victims of sexual assault, general crime, and domestic violence.</p><p>Victims are also empowered to take their cases directly to district court. Under the bill, they could sue for compensatory damages — including mental anguish — in an amount up to three times actual damages, along with punitive damages, injunctive relief, and attorney fees.</p><p>Rep. Hanson was direct about why the legislation was necessary. <strong>"We need to ban nudification features because they allow users to create non-consensual, unauthorized deep fakes of sexually explicit content, including child sexual abuse material,"</strong> she said in a statement recorded in the Minnesota House of Representatives Session Daily.</p><h2>Near-Unanimous Passage — and the One Dissenting Vote</h2><p>The bill's passage was striking in its bipartisan breadth. The Minnesota Senate approved HF1606 65-0. In the House, the sole dissenting vote was cast by Rep. Drew Roach (R-Farmington), who raised concerns about the bill's focus on platform liability rather than on individual perpetrators. <strong>"What we're going to do here is we're going to attack a software, a manufacturer and instead, shifting our focus on that instead of the perpetrators of these crimes,"</strong> Roach said, according to The Deep Dive and the Minnesota House Session Daily.</p><p>Senator Maye Quade, speaking after the Senate vote, framed the legislation as a national milestone. <strong>"Today, we led the nation protecting women, children and everyone in public life from the harm caused by AI nudification technology,"</strong> she said.</p><p>According to The Deep Dive, AI law experts have raised constitutional concerns about HF1606 on free speech and federal platform immunity grounds. 
Supporters of the bill counter that it regulates conduct rather than protected expression.</p><h2>The Grok Scandal That Changed the Conversation</h2><p>Minnesota's legislative push did not happen in a vacuum. In late December 2025, X enabled its Grok chatbot to freely generate images, triggering a viral trend of users requesting the tool to produce sexualized imagery of real people. The scale of what followed was staggering.</p><p>During an eleven-day period in late December 2025 and early January 2026, researchers at the Center for Countering Digital Hate estimated Grok generated approximately three million sexualized images and around 23,000 images depicting apparent children. Reporting from The New York Times and the Center for Countering Digital Hate further estimated that Grok created and posted over 1.8 million sexualized images of women over nine days. A separate analysis conducted over a 24-hour window from January 5 to 6 calculated that users had Grok create 6,700 sexually suggestive or nudified images per hour — 84 times more than the top five deepfake websites combined.</p><p>An analysis of 800 pieces of recovered content by the Paris-based nonprofit AI Forensics found that almost 10% were described as "instances of photorealistic people, very young, doing sexual activities."</p><p>The Internet Watch Foundation reported that researchers observed dark web users sharing what the organization characterized as "criminal imagery" that users claimed was created by Grok Imagine, including topless images of minor girls. On March 16, 2026, a class action lawsuit was filed in the Northern District of California against xAI on behalf of three minor victims from Tennessee whose real photographs were allegedly used to produce AI-generated child sexual abuse material through Grok.</p><p>Regulatory scrutiny followed on multiple fronts. 
In January 2026, Ireland's Data Protection Commission opened an investigation into the Grok deepfake scandal under section 110 of the Data Protection Act 2018 to determine whether X violated the General Data Protection Regulation. The EU also opened an investigation into xAI's Grok chatbot under the Digital Services Act, with potential fines of up to 6% of annual global revenue if xAI is found to have breached the DSA.</p><p>EU tech chief Henna Virkkunen was unequivocal. <strong>"Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation,"</strong> she said.</p><p>The U.S. Department of Justice weighed in as well. <strong>"The Department of Justice takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM,"</strong> a department spokesperson told NBC News.</p><h2>Why State Action Matters When Federal Efforts Have Stalled</h2><p>Minnesota's legislation carries particular weight because federal attempts to create a civil right of action for survivors of nonconsensual deepfakes have stalled in Congress. With no federal framework in place, advocates have increasingly looked to states to fill the gap.</p><p>RAINN, the national nonprofit that operates the National Sexual Assault Hotline, was one of the main forces behind Minnesota's bill, with Sandi Johnson serving as its senior legislative policy counsel. The Centers for Disease Control and Prevention measured the occurrence of tech-facilitated sexual abuse for the first time in its 2023-2024 National Intimate Partner and Sexual Violence Survey, a sign of how recently this category of harm has come into focus for public health institutions.</p><p>The problem's reach into school communities is also documented. 
The independent media organization Indicator has tracked 23 cases of deepfake abuse targeting school communities in the United States since 2023.</p><p>One of the bill's most prominent advocates, Molly Kelley, became involved in the issue after being victimized herself — part of a case in which around 80 women in Minnesota were impacted by the same perpetrator. Her advocacy has been direct and personal. <strong>"I've dedicated the past two years of my life to finding a solution to mitigate the harm when it's actually caused, which is at creation,"</strong> Kelley told The 19th.</p><h2>The Platform Access Problem</h2><p>Even as state and federal scrutiny increases, nudification apps have proven remarkably difficult to contain through existing platform policies. Google and Apple both ban nudification apps from their respective app stores, but research by the Tech Transparency Project showed these apps remain easily accessible despite those policies. Investigations from multiple news organizations found that Meta continues to allow nudification apps to advertise on Facebook and Instagram.</p><p>HF1606 takes aim directly at this access problem by targeting the companies that own, control, advertise, or provide access to nudification technology — not only the developers who build the apps themselves. The $500,000 per-violation penalty structure is designed to make continued facilitation of these tools a significant financial risk for platforms.</p><h2>What Comes Next</h2><p>The bill now awaits the signature of Governor Tim Walz. If signed, the law takes effect August 1, 2026. Advocates appear optimistic about the governor's response, though no formal statement from the governor's office has been confirmed in the available reporting.</p><p>Minnesota's action is being watched closely by other states weighing similar measures, and by advocates who have spent years pushing for federal legislation. 
Whether HF1606 survives any anticipated legal challenges on First Amendment or platform immunity grounds remains an open question — one that AI law experts are already raising, according to The Deep Dive. Supporters argue the bill targets conduct, not speech, a distinction that courts will ultimately have to evaluate.</p><p>For now, Minnesota is the only state in the country to have passed a ban on nudification apps — a distinction that reflects both the urgency felt by advocates and the speed with which AI image generation tools have outpaced the legal frameworks designed to govern them.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p>", "excerpt": "Minnesota has passed the United States' first statewide ban on AI nudification apps, with HF1606 clearing the House 132-1 and the Senate 65-0. The legislation empowers the state attorney general to impose fines of up to $500,000 per violation and takes effect August 1, 2026, if signed by Governor Tim Walz. The bill arrives amid growing scrutiny of AI image tools following a major scandal involving xAI's Grok chatbot.", "keywords": ["nudification apps ban", "Minnesota AI law", "HF1606", "nonconsensual deepfakes", "AI-generated CSAM"], "slug": "minnesota-bans-nudification-apps-500k-fines" } ```