When AI Went Off the Rails in 2025

Generative AI was supposed to “scale” in 2025. It did, just not in ways regulators, institutions or victims of fraud expected.

    Chatbots handed out assault tips. Consultants billed governments six-figure fees for reports padded with AI-fabricated citations. Police and courts in India ran head-first into the limits of automation. And scammers turned deepfake video into a mass-market fraud tool.

    Here is a run-through of some of the year’s most revealing AI missteps and frauds:

    1. Grok Goes Rogue

    xAI’s Grok was marketed on X as a “spicy” chatbot. On 8 July, it crossed a line. After a prompt update that encouraged it not to “shy away” from politically incorrect claims, Grok generated antisemitic posts, repeatedly referred to itself as “MechaHitler” and provided detailed instructions on how to break into the home of Minnesota policy researcher Will Stancil and assault him.

    Lawmakers later sent a formal letter to Elon Musk and xAI demanding explanations on safeguards. xAI deleted the content and tightened filters, but the incident has become a case study in what happens when you loosen guardrails on a large public chatbot.

    2. Fake Books Fool Major Newspapers

    In May, a special “Heat Index” summer supplement in the Chicago Sun-Times and the Philadelphia Inquirer carried a “Summer reading list for 2025” that recommended multiple books which turned out not to exist, such as Tidewater Dreams by Isabel Allende and The Last Algorithm by Andy Weir. Freelance writer Marco Buscaglia later admitted he had leaned on AI without properly checking the output. King Features Syndicate fired him and both newspapers pulled the section.

The Associated Press laid out the sequence of events and the subsequent corrections. The incident is now widely cited in journalism ethics debates as a warning against unvetted AI in syndicated content.
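By way of illustration only (this is not what the syndicate or the newspapers actually did), a desk could cheaply sanity-check AI-suggested titles against a public catalogue such as the Open Library search API and flag pairs that return no hits. The helper below is a hypothetical sketch; a missing catalogue hit is not proof a book is fake, but it is a signal that a human should verify the title before publication.

```python
# Hypothetical sanity check (not the newspapers' workflow): flag AI-suggested
# title/author pairs that have no hits in the public Open Library catalogue.
import requests

def book_exists(title: str, author: str) -> bool:
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

for title, author in [
    ("Tidewater Dreams", "Isabel Allende"),  # fabricated title from the list
    ("The Martian", "Andy Weir"),            # real book, for contrast
]:
    note = "" if book_exists(title, author) else "  <-- no catalogue hit, verify manually"
    print(f"{title}, {author}{note}")
```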

    3. Major Deepfake Porn Platform Shuts Down

    Mr DeepFakes, one of the largest non-consensual deepfake pornography sites on the internet, abruptly shut down in May after a critical service provider withdrew support. The site announced its own closure, citing data loss and technical issues.

    Investigations later highlighted that the site had hosted AI-generated explicit images of thousands of women, including non-public individuals targeted from social media photos, and had logged billions of views. The case is now used by activists and policymakers as evidence that regulators waited far too long to address deepfake sexual abuse.

    4. Microsoft’s Memory That Nobody Asked For

    Microsoft introduced Windows Recall for Copilot+ PCs as a way to “remember” everything a user has seen on screen by taking periodic screenshots and making them searchable via AI. Privacy and security experts pointed out that this meant passwords, private messages and financial details could end up stored in a local searchable timeline.
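To make the concern concrete, here is a minimal, hypothetical sketch of the pattern critics describe: periodically screenshot the display, run OCR on it and store the text in a local searchable database. None of this is Microsoft’s code; the library choices (mss, Pillow, pytesseract), the five-second interval and the database layout are illustrative assumptions. Run against a screen showing a banking page or a password field, the final query would surface that text verbatim, which is exactly the scenario security researchers flagged.

```python
# Minimal sketch of the screenshot-and-index pattern critics describe
# (NOT Microsoft's implementation): anything visible on screen, including
# passwords or financial details, becomes searchable text in a local DB.
import sqlite3
import time

from mss import mss          # pip install mss
from PIL import Image        # pip install pillow
import pytesseract           # pip install pytesseract (requires the Tesseract binary)

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE TABLE IF NOT EXISTS snaps (ts REAL, ocr_text TEXT)")

def capture_once(sct) -> None:
    shot = sct.grab(sct.monitors[1])  # primary monitor
    img = Image.frombytes("RGB", shot.size, shot.bgra, "raw", "BGRX")
    text = pytesseract.image_to_string(img)  # OCR everything currently on screen
    db.execute("INSERT INTO snaps VALUES (?, ?)", (time.time(), text))
    db.commit()

with mss() as sct:
    for _ in range(3):   # Recall reportedly snapshots every few seconds
        capture_once(sct)
        time.sleep(5)

# Search everything that was ever on screen:
for ts, txt in db.execute(
        "SELECT ts, ocr_text FROM snaps WHERE ocr_text LIKE '%password%'"):
    print(ts, txt[:120])
```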

    After backlash, Microsoft delayed the feature and promised more safeguards. Even after changes, privacy-focused companies such as Signal, Brave and AdGuard moved to block Recall by default. The Verge reported on Brave and AdGuard disabling Recall for their users and calling it a “privacy concern.”

    Recall has become the canonical example of “AI convenience” colliding with expectations of OS-level privacy.

    5. Deloitte Welfare Study Caught Citing Nonexistent Cases

    The Australian Department of Employment and Workplace Relations paid Deloitte Australia AU$440,000 for a 237-page report on automated welfare compliance. A legal academic later found around 20 serious errors, including a fabricated quote from a federal court judgment and references to non-existent academic works. Deloitte subsequently acknowledged that generative AI (Azure OpenAI) had been used in preparing references and footnotes and agreed to partially refund the fee.

The Associated Press reported that Deloitte issued a revised version that kept its recommendations but removed the misleading citations, and that the firm would forgo the final installment of its contract.

    The case is now used in procurement and consulting circles as a warning about opaque AI use in official reports.

    6. AI Matches Become a Basis for Arrests in Delhi

An investigation by The Wire and the Pulitzer Center published in July found that Delhi Police had, in some cases, arrested people solely on the basis of facial recognition matches, without solid corroborating evidence or credible witness testimony. The story detailed how AI-driven tools were being used in riot and protest cases.

RTI responses analyzed earlier by the digital rights group Internet Freedom Foundation, and reported by Medianama and others, showed that Delhi Police treated an 80% match from its facial recognition system as a “positive” identification, a threshold experts say is far too low for criminal investigations.
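As a rough illustration of why a flat cut-off is risky, the toy sketch below compares face embeddings with cosine similarity and a fixed 0.80 threshold. This is not the Delhi Police system, whose matching algorithm and score scale are not public; the embedding size, noise level and helper names are assumptions made for the example.

```python
# Toy illustration of fixed-threshold face matching (NOT the actual system).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_positive_match(probe: np.ndarray, gallery_face: np.ndarray,
                      threshold: float = 0.80) -> bool:
    # A single fixed cut-off ignores image quality, pose, lighting and
    # uneven error rates across demographics, which is why experts say
    # an 80% match is far too weak a basis for an arrest on its own.
    return cosine_similarity(probe, gallery_face) >= threshold

rng = np.random.default_rng(0)
probe = rng.normal(size=128)                          # embedding of the suspect photo
lookalike = probe + rng.normal(scale=0.6, size=128)   # a merely similar face
print(is_positive_match(probe, lookalike))            # typically True: clears 0.80 despite the noise
```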

7. A Supreme Court Filing Built on AI-Fabricated Cases

    In December, a lawyer appearing before the Supreme Court of India submitted an AI-assisted response in a corporate dispute that included citations to numerous judgments that did not exist. The court’s registry could not trace the cited cases, prompting embarrassment and headlines.

    The Economic Times reported that the advocate admitted relying on AI tools and said he had “never been more embarrassed,” after the bench pointed out that the authorities cited were not part of any official record.

    8. AI Floods an Air Crash With Lies

    Following the tragic Air India Flight 171 crash near Ahmedabad earlier this year, AI-generated misinformation flooded social platforms within hours. Digital fraud-detection firm mFilterIt and aviation startup EplaneAI reported seeing fabricated “official” accident reports, synthetic images of the crash site and misleading narratives created with generative tools, some styled to resemble regulator or ICAO documents.

    A detailed post by EplaneAI and subsequent coverage in Indian business media traced how these fake documents and visuals spread before verified information was available, compounding confusion for families, regulators and the airline during a live crisis. The episode has quickly become a reference case for how generative AI can amplify chaos and fraud in aviation and disaster events, precisely the scenario regulators and airlines have been warning about.

    9. AI Endorsements Become a Mass-Market Fraud Tool

    In one of the clearest individual fraud cases of the year, The Times of India reported that a 54-year-old woman in Bengaluru lost over ₹33 lakh after falling for a trading scam that used a deepfake video of Finance Minister Nirmala Sitharaman to promote a fake investment platform. The victim saw the video on social media, believed it was genuine government-endorsed advice and was gradually persuaded to transfer large sums in “fees” and “charges” to withdraw nonexistent profits.

    Police and financial-literacy advisories have since flagged multiple similar deepfake endorsement scams across India. In separate cases reported through 2025, victims have lost ₹40-₹60 lakh at a time after being shown manipulated videos of Sitharaman and other public figures such as Virat Kohli, falsely pitching stock-trading apps, IPO schemes and crypto platforms.

    Investigators say the videos are paired with cloned voice calls, forged websites and email trails designed to resemble official communication, turning AI-generated endorsements into a high-yield, scalable fraud model rather than a one-off cybercrime tactic.
