Elite law firm Sullivan & Cromwell admits to AI ‘hallucinations’ - Financial Times

```json { "title": "Sullivan & Cromwell Apologizes for AI Hallucinations in Court Filing", "metaDescription": "Elite law firm Sullivan & Cromwell admitted AI hallucinations produced ~40 errors in a bankruptcy filing, despite having formal AI governance policies in place.", "content": "<h2>Sullivan & Cromwell Admits AI Hallucinations in High-Profile Bankruptcy Case</h2><p>In one of the highest-profile admissions of its kind, Wall Street powerhouse Sullivan & Cromwell issued a formal apology to a federal bankruptcy judge on April 18, 2026, acknowledging that AI-generated hallucinations had contaminated court filings in a major Chapter 15 bankruptcy proceeding. The letter, signed by Andrew Dietderich, co-head of the firm's restructuring practice, was addressed to Chief Judge Martin Glenn of the US Bankruptcy Court for the Southern District of New York. The incident has sent shockwaves through the legal industry and raised urgent questions about whether even elite law firms with formal AI governance frameworks are equipped to manage the risks of generative AI in high-stakes legal work.</p><p>Sullivan & Cromwell, which ranks 30th on the AmLaw Global 200 by revenue and recorded total revenues of $2.05 billion in 2024, is one of the most prestigious law firms in the world. Its partners bill between $1,995 and $2,890 per hour, according to rates the firm filed in a separate bankruptcy case in 2026. For a firm with a 147-year history and more than 1,000 attorneys, a public apology to a sitting federal judge is extraordinary by any measure.</p><h2>What Went Wrong: ~40 Errors, Fabricated Citations, and a Policy That Wasn't Followed</h2><p>The erroneous filings were submitted on April 8–9, 2026, in the Chapter 15 bankruptcy proceedings of Prince Global Holdings (case: <em>Prince Global Holdings Limited and Paul Pretlove</em>, Bankr. S.D.N.Y., 1:26-bk-10769). 
Sullivan & Cromwell was representing the Joint Provisional Liquidators in the case — Paul Pretlove, David Standish, and James Drury — who oversee 30 British Virgin Islands-incorporated entities associated with the Prince Group.</p><p>The filing contained approximately 40 incorrect citations and other errors caused by AI hallucinations, according to reporting by Cointelegraph. The documents appeared to cite wrong or fabricated cases and figures, and included inaccurate article titles. Opposing counsel at Boies Schiller Flexner — representing creditors in the case — flagged to the court that words Sullivan & Cromwell quoted in its motion \"do not appear in chapter 15 of the US Bankruptcy Code,\" and pointed to \"multiple cited decisions\" that were \"misquoted or misidentified.\"</p><p>The errors were not caught internally. In his April 18 apology letter, Dietderich acknowledged that the firm's secondary citation-review process failed to identify the problems. \"Regrettably, this review process did not identify the inaccurate citations generated by AI, nor did it identify other errors that appear to have resulted in whole or in part from manual error,\" he wrote.</p><p>In the same letter, Dietderich was direct about the firm's AI governance failures: \"The Firm maintains comprehensive policies and training requirements governing the use of AI tools in legal work. These safeguards are designed to prevent exactly this situation. The Firm's policies on the use of AI were not followed in connection with the preparation of the Motion.\"</p><p>He went further, noting that the firm's policies include mandatory training modules that explicitly address the risk of hallucinations. 
\"It instructs lawyers to 'trust nothing and verify everything' and makes clear that failure to independently verify AI-generated output constitutes a violation of Firm policy,\" Dietderich wrote.</p><p>The apology letter was accompanied by a detailed Schedule A cataloging dozens of corrections across multiple documents, including revised case citations and corrected quotations. Sullivan & Cromwell also undertook immediate remedial measures, including a full review of how the errors occurred and a re-review of all filings in the Prince matter. The firm confirmed that no other AI-related issues were found. It also stated it is \"evaluating whether further enhancements to its internal training and review processes are warranted.\"</p><h2>The Prince Group Case: A High-Stakes Backdrop</h2><p>The underlying bankruptcy matter adds another layer of significance to the incident. The Chapter 15 filing by Prince Global Holdings follows the U.S. Department of Justice's indictment of Chen Zhi, founder and chairman of Prince Group, for wire fraud conspiracy and money laundering conspiracy related to the operation of forced-labor scam compounds across Cambodia. A U.S. DOJ civil forfeiture complaint linked to the case targeted approximately 127,271 Bitcoin — worth roughly $15 billion at the time — described as the largest forfeiture action in DOJ history.</p><p>Sullivan & Cromwell's role as counsel for the Joint Provisional Liquidators in a case of this magnitude makes the AI hallucination admission all the more striking. The firm is responsible for representing liquidators overseeing 30 BVI-incorporated Prince Group entities in a proceeding that sits at the intersection of international insolvency law, crypto asset recovery, and U.S. 
criminal enforcement.</p><h2>The OpenAI Irony</h2><p>Legal commentators were quick to note a pointed irony in the incident: Sullivan & Cromwell also advises OpenAI on the \"safe and ethical deployment\" of artificial intelligence, according to Above the Law. The firm that counsels one of the world's leading AI companies on responsible AI use failed to follow its own internal AI governance protocols in a federal court filing.</p><p>Dietderich did not shy away from the gravity of the situation. \"The Firm and I are keenly aware of our responsibility to ensure the accuracy of all submissions including under Local Bankruptcy Rule 9011-1(d), and I take responsibility for the failure to do so,\" he wrote in his April 18 letter.</p><p>He also offered a plain-language definition of what went wrong: \"'Hallucinations' are instances in which artificial intelligence tools fabricate case citations, misquote authorities, or generate non-existent legal sources.\" And in a statement that will resonate far beyond this single case: \"We deeply regret that this has occurred.\"</p><h2>A Growing Crisis Across the Legal Industry</h2><p>Sullivan & Cromwell's admission does not exist in a vacuum. Legal technologist Damien Charlotin maintains a publicly available running database of AI hallucination incidents in court filings. That database has recorded 1,334 incidents globally, with more than 900 in the United States alone — a figure that underscores just how widespread the problem has become since generative AI tools became broadly accessible to legal professionals.</p><p>In a number of cases, U.S. judges have sanctioned attorneys for submitting AI-generated research without independently verifying its accuracy. 
The Sullivan & Cromwell case is notable, however, because it involves one of the most prestigious and well-resourced law firms in the world — a firm that, by its own account, had already developed comprehensive AI governance infrastructure, including mandatory training requirements, before this incident occurred.</p><p>That distinction matters. Earlier AI hallucination cases in court filings often involved smaller firms or solo practitioners who may not have had formal AI policies in place. The Sullivan & Cromwell incident demonstrates that policy development alone is insufficient. Policies that are not enforced, or that break down under deadline pressure or in high-volume litigation environments, provide no practical protection against the risks of unverified AI output.</p><p>The case is also a reminder of the asymmetric risk profile of AI hallucinations in legal work. A fabricated case citation or misquoted statutory text can undermine an entire legal argument, expose a firm to sanctions, damage client interests, and — as in this case — require a public apology to a federal judge. The reputational and professional costs are significant, regardless of the firm's size or prestige.</p><p>Sullivan & Cromwell's FTX connection adds further context: the firm previously represented crypto exchange FTX in its own high-profile bankruptcy case, giving it deep experience in complex insolvency proceedings where precision and accuracy in court filings are non-negotiable.</p><h2>What Happens Next</h2><p>As of April 22, 2026, it is not yet clear whether Chief Judge Martin Glenn will impose sanctions on Sullivan & Cromwell or take other disciplinary action. The firm's proactive disclosure, the filing of a detailed Schedule A with corrections, and the confirmation that no other AI-related errors were found in the Prince matter may be factors that weigh in its favor. 
However, the decision rests with the court.</p><p>More broadly, the incident is likely to prompt renewed scrutiny of AI governance practices across the legal profession. Whether it accelerates regulatory action — from bar associations, federal courts through local rule amendments, or other oversight bodies — remains to be seen. What is clear is that the Sullivan & Cromwell case will become a reference point in ongoing discussions about how law firms should structure, enforce, and audit their AI use policies.</p><p>For legal professionals, the case delivers an unambiguous message: having an AI governance policy is not the same as having an AI-safe practice. The gap between written policy and actual compliance — especially under time pressure or in complex, multi-document filings — is where hallucinations find their way into the record.</p><p>For more tech news, visit our <a href=\"/news\">news section</a>.</p><h2>Why This Matters Beyond the Courtroom</h2><p>The Sullivan & Cromwell AI hallucination case is a high-visibility example of a challenge that extends well beyond the legal industry: the gap between AI policy and AI practice. Whether you're a knowledge worker relying on AI-generated summaries, a professional using AI to draft documents, or an organization deploying AI tools at scale, the core lesson is the same. Verification discipline — the habit of independently checking AI-generated output before it is acted upon — is not optional. It is the foundational skill of the AI-augmented professional. Staying informed about how AI tools are reshaping professional practice, and where they continue to fall short, is essential for anyone whose work depends on accuracy. 
Join the <a href=\"/#waitlist\">Moccet waitlist</a> to stay ahead of the curve.</p>", "excerpt": "Sullivan & Cromwell, one of Wall Street's most prestigious law firms, issued a formal apology to a federal bankruptcy judge on April 18, 2026, after AI hallucinations produced approximately 40 errors in court filings related to the Prince Global Holdings Chapter 15 bankruptcy case. The firm's own AI governance policies — which require lawyers to 'trust nothing and verify everything' — were not followed, and the errors were caught not by the firm itself, but by opposing counsel at Boies Schiller Flexner. The incident is being described as one of the highest-profile admissions of AI hallucinations in a major law firm's court filing to date.", "keywords": ["AI hallucinations", "Sullivan & Cromwell", "legal AI risks", "generative AI in law", "bankruptcy court filing errors"], "slug": "sullivan-cromwell-ai-hallucinations-bankruptcy-court-filing" } ```
