A federal lawsuit raises a fundamental question for the legal profession: when does an AI tool move from providing legal information to effectively performing the work of a lawyer? 

Executive summary 

Nippon Life Insurance Company of America has sued OpenAI in federal court in Chicago (Northern District of Illinois), alleging that ChatGPT crossed the line from general legal information into unlicensed legal assistance that functioned as attorney work product, helping a former disability claimant attempt to reopen a settled case and generate numerous court filings. OpenAI has publicly responded that the complaint “lacks any merit whatsoever.” 

This case matters less as an “AI practicing law” slogan and more because it spotlights a structural shift: large language models (LLMs) are collapsing longstanding procedural gatekeeping in the legal system by making procedural know-how cheap, fast, and accessible to non-lawyers. This is simultaneously promising (for access to justice) and dangerous (through hallucinated citations, privacy leaks, and burdensome “docket flooding”). 

For legal-tech buyers and compliance teams, the key takeaway is not “ban AI,” but “separate legal advice from evidence intelligence.” Tools that operate in regulated workflows should be designed to preserve human accountability, maintain audit trails, and ensure outputs are verifiable (source-linked) and defensible, particularly in litigation and claims-management contexts. 

 

Key Takeaways 

  • The Nippon Life v. OpenAI lawsuit may become the first major U.S. case to test whether generative AI tools constitute the unauthorized practice of law (UPL). 
  • AI tools are lowering the barrier for self-represented litigants (pro se litigants) to navigate complex legal procedures. 
  • However, hallucinated citations, fabricated case law, and incomplete or truncated document analysis create real litigation and compliance risks. 
  • The safest model for legal AI is evidence intelligence and workflow support, not autonomous legal advice. 

 

This article is for informational purposes only and does not constitute legal advice. 

 

What happened in Nippon Life v. OpenAI 

According to Reuters reporting, Nippon Life Insurance Company of America alleges that a former disability claimant, after settling her long-term disability dispute with prejudice, uploaded communications into ChatGPT, which allegedly validated her concerns and suggested further legal action. She then fired her attorney and used ChatGPT to help draft pro se filings seeking to reopen the settled matter and continue litigating. 

 

Nippon Life has alleged that these filings served “no legitimate legal or procedural purpose” and imposed real defense costs, and has sought a declaration that OpenAI violated Illinois’s unauthorized practice of law statute, approximately $300,000 in compensatory damages, and $10 million in punitive damages. 

 OpenAI’s Response 

Reuters reporting and subsequent legal press coverage note OpenAI’s position that the complaint “lacks any merit whatsoever.” Interestingly, but not surprisingly, the former claimant herself is not named as a defendant, an early signal that the dispute is strategically focused on platform accountability and potential institutional liability rather than user misconduct. 

Timeline of key events

Unauthorized Practice of Law: Where AI Blurs the Line 

Unauthorized practice of law (UPL) is predominantly regulated at the state level in the U.S., and there is no single national definition of the “practice of law.” Jurisdictions vary in how they draw the line between permissible legal information (such as legal education, forms, procedural guidance) and impermissible legal advice (including tailored recommendations, representation, or drafting that implies the application of professional legal judgment). 

Illinois-specific anchor: the Attorney Act 

Illinois's Attorney Act states: “No person shall be permitted to practice as an attorney or counselor at law within this State” without a license, and it also restricts receiving compensation for legal services and holding oneself out as providing legal services when unlicensed. Nippon Life's theory (as described in the reporting) tests how that framework applies when the “assistant” is not a person at all, but an AI system deployed by a technology company and made available to the public to generate legal-like outputs or guidance. 

“Non-human actors” aren’t new to UPL enforcement. 

While an LLM is not a human, the market-facing actor remains a technology vendor, and UPL disputes have long included technology-enabled or software-based delivery models. 

 

LegalZoom is a canonical example: it litigated with the North Carolina State Bar over whether its interactive document-generation service constituted the unauthorized practice of law, culminating in extensive proceedings and a consent-based resolution framework in North Carolina. The key point is not the outcome; it is that regulators and courts have already treated technology platforms delivering legal assistance or automated document services as a potential unauthorized practice of law. 

 

DoNotPay is another modern reference point. The FTC finalized an order requiring DoNotPay to stop deceptive “AI lawyer/robot lawyer” claims and noted (among other things) allegations that it failed to test whether its “AI lawyer” performed like a human lawyer and did not retain attorneys to test the quality and accuracy of law-related features. Even outside the traditional unauthorized-practice-of-law doctrine, consumer protection law and deceptive marketing enforcement are a real pathway when legal-like claims are marketed to the public. 

 

Regulatory reform is already reshaping the gate. 

Two state-led experiments underscore that “who can deliver legal help” is not static: 

  • Arizona's Alternative Business Structures (ABS) framework is expressly intended to expand access to legal services by allowing entity structures that include nonlawyers with economic interests or decision-making authority, subject to Arizona Supreme Court rules. 
  • Utah's legal regulatory sandbox, a program launched by the Utah Supreme Court and housed within the Utah State Bar, was designed to test whether changing practice regulation can increase access without increasing consumer harm. 

A scholarly review of Arizona and Utah's “entity regulation” reforms, five years after their implementation, highlights them as meaningful departures from traditional UPL and ownership constraints in the legal profession's regulatory structure. 

 

This matters for Nippon Life v. OpenAI because it suggests a likely future: courts and regulators won't just ask whether AI crossed a line; they may also ask what a safer, licensed, and supervised regulatory lane for AI-enabled legal assistance should look like. 

 

The Bigger Story: AI Is Collapsing Procedural Gatekeeping in Law 

To understand why this dispute matters, start with the justice gap: the Legal Services Corporation (LSC) reports that low-income Americans received no or inadequate legal help for 92% of their substantial civil legal problems, and that these problems are widespread. Courts and bar organizations have recognized that large numbers of people show up without lawyers, often because they cannot afford or access counsel. 

This “justice gap” is a major driver behind the rapid growth of AI legal tools. Courts, policymakers, and legal technology developers are increasingly exploring tools that help individuals understand legal procedures, prepare documents, and navigate court systems without traditional legal representation. 

“Gatekeeping” in law is not just about elite credentialing, but also about navigating a complex procedural system: deadlines, forms, service rules, evidentiary architecture, and motion practice. Self-represented litigants frequently struggle with process rules and court language, practical barriers that can determine case outcomes regardless of underlying merits. 

 

Why Generative AI Is Changing Self-Representation in the Legal System 

This transformation is rooted in how AI reshapes the sphere of self-representation. Several aspects of AI’s capabilities make these changes possible: 

First is search and comprehension: conversational interfaces can translate legal instructions into plain language and provide contextual explanations (although these outputs still require careful verification).  

Second is drafting: Large language models can produce plausible first drafts of legal motions, demand letters, discovery responses, and procedural filings.  

Third is iteration speed: a motivated pro se litigant can generate, refine, and file at volumes that were previously impractical. 

Importantly, the access-to-justice ecosystem has been building toward this for years via non-generative tools. A2J Author, an established document assembly platform used by courts and legal aid organizations, has helped millions of people generate millions of legal documents, illustrating that structured self-help can scale when built with guardrails and constrained outputs. 

The key takeaway from ‘gatekeeping collapse’ is not that lawyers become irrelevant, but that their value shifts. Lawyers will increasingly focus on strategy, professional judgment, ethical obligations, negotiation, and evidentiary rigor as courts adapt to the growing volume of AI-powered procedural work. 

 

The Hard Counterpoint: Access vs. Abuse 

The critical takeaway from Nippon Life’s allegations is that AI can lower procedural barriers for all litigants, enabling both greater access and potential abuse. Expect this tension to direct future policy paths, with some jurisdictions tightening restrictions and others formalizing supervised or regulated legal-help pathways. 

 

A near-term signal is legislative activity: New York Senate Bill S7263 would prohibit proprietors of AI chatbots from permitting “substantive responses” that would constitute unauthorized practice (or unauthorized use of a professional title) if made by a natural person. Whether or not it passes, it reflects the direction of travel: regulators are moving from ethics guidance and professional rules toward explicit statutory liability frameworks. 

 

The Technical Risks That Turn Legal AI from Helpful to Harmful 

The same properties that make LLMs powerful (fluent generation, implicit pattern matching) also create failure modes that are unacceptable within legal contexts. 

 

Hallucinations and fabricated citations 

Courts have already sanctioned lawyers who filed AI-generated citations to nonexistent cases. Mata v. Avianca is the best-known example: the sanctions order explains that fabricated citations are not “existing law” and that Federal Rule of Civil Procedure 11 requires attorneys to verify factual and legal assertions before filing. 

 

Empirical research shows that even specialized legal AI systems marketed as citation-grounded tools are not hallucination-free. A preregistered evaluation by Magesh et al. found that Lexis+ AI and Thomson Reuters AI-assisted research tools hallucinated at rates of roughly 17%-33% in their study, lower than general-purpose chatbots but far from zero. The implication for buyers: “legal-specialized” does not mean “legally reliable by default.” 
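
As a hedged illustration of the “verify before filing” obligation, a minimal pre-filing check might flag any citation in an AI draft that a human has not yet confirmed against a primary source. The regex, helper name, and the sample citation below are illustrative assumptions, not a substitute for reading the cases.

```python
import re

# Illustrative pre-filing check (hypothetical helper, simplified citation pattern):
# flag any reporter-style citation in an AI draft that a human has not yet verified.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\dd|F\. Supp\. \dd)\s+\d{1,4}\b")

def flag_unverified_citations(draft_text: str, verified: set[str]) -> list[str]:
    """Return citations that appear in the draft but are not in the human-verified set."""
    return [c for c in CITATION_PATTERN.findall(draft_text) if c not in verified]

# Fictional citation used purely for demonstration.
draft = "See Smith v. Jones, 123 F. Supp. 3d 456 (N.D. Ill. 2015)."
print(flag_unverified_citations(draft, verified=set()))  # ['123 F. Supp. 3d 456'] -> needs human review
```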

 

Long-context failure and truncation risk 

Litigation and claims work commonly involves extensive records. Even when models can accept long contexts, they may fail to use information from them reliably. The “Lost in the Middle” research shows that performance can degrade considerably depending on where relevant information appears in long inputs: models tend to perform best when the key material sits at the very beginning or end of the context and worst when it is buried in the middle. 

In practical terms, that means “upload all the correspondence + ask for a conclusion” can yield confident answers that silently ignore critical facts, a pathway from “helpful tool” to potentially misleading legal output. This is especially dangerous in litigation workflows where critical facts may be buried inside thousands of pages of medical records, deposition transcripts, or claim files. 
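
One mitigation, sketched below under illustrative assumptions (naive keyword scoring stands in for real retrieval, and claim_file.txt is a hypothetical input), is to chunk long records and pass only the passages relevant to the question, rather than stuffing an entire file into a single prompt.

```python
# Illustrative sketch (not any vendor's API): instead of pasting an entire claim file
# into one prompt, split it into overlapping chunks and keep only passages relevant
# to the question, so key facts are less likely to be lost mid-context.
def chunk(text: str, size: int = 2000, overlap: int = 200) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def retrieve(chunks: list[str], query_terms: list[str], top_k: int = 5) -> list[str]:
    # Naive keyword scoring stands in for embedding-based retrieval in this sketch.
    return sorted(
        chunks,
        key=lambda c: sum(c.lower().count(t.lower()) for t in query_terms),
        reverse=True,
    )[:top_k]

record = open("claim_file.txt").read()  # hypothetical multi-thousand-page record
excerpts = retrieve(chunk(record), ["date of loss", "policy exclusion"])
prompt = "Answer using only these excerpts:\n\n" + "\n---\n".join(excerpts)
```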

 

Data privacy and confidentiality 

Uploading legal correspondence, claim files, or medical records to consumer AI tools can expose privileged or regulated information. OpenAI’s policy materials state that, for consumer services, OpenAI may use your content for model training, with opt-out options; by contrast, business/enterprise offerings emphasize that models are not trained on business data by default. 

 

For legal work, the compliance question is rarely “is there an opt-out?” The question is: Is the default contractual framework safe for privileged, confidential, or regulated data? And can you prove governance (including access logs, usage controls, and auditability) later? 
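
A minimal governance sketch, assuming a hypothetical submit_to_ai gateway and illustrative sensitivity markers, shows the kind of logging and blocking logic that makes that proof possible later.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical governance gateway: before a document reaches any AI service,
# record who sent it, what it was, and whether it cleared a sensitivity check.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_usage_audit")

SENSITIVE_MARKERS = ("attorney-client", "privileged", "medical record")  # illustrative only

def submit_to_ai(user_id: str, doc_name: str, text: str, endpoint_approved: bool) -> bool:
    """Return True if the submission may proceed; always write an audit entry."""
    flagged = any(marker in text.lower() for marker in SENSITIVE_MARKERS)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "document": doc_name,
        "sensitive_markers_found": flagged,
        "endpoint_approved_for_regulated_data": endpoint_approved,
    }))
    # Block privileged or regulated material from endpoints without the right contract terms.
    return not (flagged and not endpoint_approved)
```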

 

A Risk Management Lens Is Converging on GenAI 

NIST's AI Risk Management Framework (AI RMF 1.0) and its Generative AI profile give organizations a structured approach to governing, mapping, measuring, and managing AI risk across the lifecycle, useful context for compliance officers evaluating AI deployments in regulated or legally sensitive environments. 
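
As a hedged illustration, the framework's four core functions (Govern, Map, Measure, Manage) can anchor a simple risk register for the failure modes discussed above; the specific controls listed are this article's assumptions, not NIST requirements.

```python
# Illustrative risk register keyed to the AI RMF 1.0 core functions; the controls
# named here are this article's assumptions, not text from the framework itself.
ai_rmf_register = {
    "Govern":  ["written AI use policy", "named accountable owner per workflow"],
    "Map":     ["inventory of legal workflows touching generative AI", "data sensitivity classification"],
    "Measure": ["citation accuracy sampling", "long-document fact-recall spot checks"],
    "Manage":  ["human review gates before filing", "audit-log retention and incident response"],
}

for function, controls in ai_rmf_register.items():
    print(f"{function}: {'; '.join(controls)}")
```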

 

Designing Defensible AI for Regulated Legal Workflows 

The legal question in Nippon Life v. OpenAI (“information vs. advice”) ultimately becomes a product question: what does your system allow, what does it discourage, and what does it prove? 

 

Recommended guardrails 

A defensible design pattern is to treat AI as an evidence intelligence and drafting assistant, never as an autonomous advisor (a sketch of this routing pattern follows the list): 

  • Use role-aware experiences: professional workspaces should operate under different permissions, safeguards, and governance rules than public consumer interfaces. 
  • Make outputs verifiable: require source citations, document links, and traceability to underlying evidence rather than generating unsupported conclusions. 
  • Introduce friction for high-risk actions: requests for case-specific advice or filings should trigger disclaimers or “licensed professional review required” workflows. 
  • Maintain audit trails: regulated legal teams require logs, version history, and governance controls for defensibility. 
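
A minimal sketch of that routing pattern appears below; the role names, trigger phrases, and decision labels are illustrative assumptions, not a production policy or any vendor's API.

```python
from dataclasses import dataclass

# Minimal sketch of the guardrail pattern above: role-aware routing, friction for
# high-risk requests, and an audit trail of every decision.
HIGH_RISK_TRIGGERS = ("should i file", "draft a motion", "reopen my case", "what are my chances")
AUDIT_TRAIL: list[dict] = []  # in practice: an append-only, access-controlled log store

@dataclass
class Request:
    user_role: str   # e.g. "consumer" or "licensed_professional"
    prompt: str

def route(req: Request) -> str:
    asks_for_advice = any(t in req.prompt.lower() for t in HIGH_RISK_TRIGGERS)
    if asks_for_advice and req.user_role != "licensed_professional":
        decision = "escalate_to_human_review"   # friction, not silent refusal
    elif req.user_role == "licensed_professional":
        decision = "professional_workspace"     # source-linked outputs, stricter governance
    else:
        decision = "general_information_only"   # plain-language explanation, no tailored advice
    AUDIT_TRAIL.append({"role": req.user_role, "prompt": req.prompt, "decision": decision})
    return decision

print(route(Request("consumer", "Should I file a motion to reopen my settled case?")))
# -> escalate_to_human_review
```

The escalation branch is deliberately a workflow outcome rather than a refusal: the request is preserved, logged, and handed to a person who carries professional accountability.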

ABA Formal Opinion 512 reinforces why this matters: lawyers must consider duties of competence, confidentiality, communication, supervision, and meritorious claims when using generative AI tools. Those professional obligations do not disappear simply because “the assistant” is software. 

 

Flowchart: a guardrail “legal AI” product loop 

Platform selection guide 

The comparison below is deliberately buyer-centric: it focuses on risk characteristics (hallucination, auditability, privacy posture, intended use) rather than marketing claims. 

 

  • Consumer general-purpose chatbots (e.g., ChatGPT personal plans; other consumer LLMs) 
    Accuracy posture: variable; not legal-grade. 
    Hallucination / fabrication risk: meaningful; fabricated citations have already produced court sanctions in real cases. 
    Auditability & provenance: often limited for litigation defensibility unless wrapped with governance tooling. 
    Intended use in legal settings: education, brainstorming, and plain-language explanation, not filings or legal advice. 

  • Enterprise / business GenAI platforms (e.g., ChatGPT Enterprise/Business; API + governance) 
    Accuracy posture: variable; depends on grounding and controls. 
    Hallucination / fabrication risk: still non-zero; long-context failures can omit critical facts. 
    Auditability & provenance: stronger admin controls; enterprise privacy and compliance APIs can support audits. 
    Intended use in legal settings: internal productivity with strict governance; still requires human verification. 

  • “AI legal research” copilots (e.g., Lexis+ AI; Westlaw AI-Assisted Research; Ask Practical Law AI) 
    Accuracy posture: better grounded, but not error-free. 
    Hallucination / fabrication risk: empirically observed hallucination rates remain material. 
    Auditability & provenance: typically better citation scaffolding than generic chatbots. 
    Intended use in legal settings: research acceleration only; lawyers must verify and cite primary sources. 

  • Guided interviews / document assembly (e.g., A2J Author) 
    Accuracy posture: high reliability within defined templates. 
    Hallucination / fabrication risk: low generative hallucination risk (not free-form generation). 
    Auditability & provenance: strong process constraints and reproducibility. 
    Intended use in legal settings: court- or aid-driven self-help and form completion; structured, narrow scope. 

  • Litigation-grade, domain-specific evidence intelligence (example: VerixAi for medico-legal evidence workflows) 
    Accuracy posture: designed for traceability in evidence handling. 
    Hallucination / fabrication risk: reduced via source-linking and constrained workflows (still requires human review). 
    Auditability & provenance: evidence-linked outputs and audit-trailed collaboration, plus HIPAA/SOC 2 compliance claims. 
    Intended use in legal settings: professional litigation workflows (evidence review, chronology, defensible work product), not autonomous legal advice. 

 

Note: If you need a platform-by-platform scorecard for a specific shortlist, treat that as an evaluation project: define the use cases, measure error rates on your documents, and require auditability demonstrations. Research suggests vendor “hallucination-free” claims can be exaggerated. 
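
A hedged sketch of that kind of pilot, assuming hypothetical document IDs and a simple exact-match scoring rule, might look like this:

```python
# Buyer-side pilot sketch: run each shortlisted tool on your own documents with known
# answers and compare error rates. Document IDs and the exact-match rule are assumptions.
def error_rate(tool_outputs: list[dict], gold_answers: dict[str, str]) -> float:
    wrong = sum(
        1 for o in tool_outputs
        if o["answer"].strip().lower() != gold_answers[o["doc_id"]].strip().lower()
    )
    return wrong / len(tool_outputs)

outputs = [
    {"doc_id": "claim-001", "answer": "Policy excludes pre-existing conditions"},
    {"doc_id": "claim-002", "answer": "Date of loss: 2021-03-14"},
    {"doc_id": "claim-003", "answer": "No appeal filed"},
]
gold = {
    "claim-001": "Policy excludes pre-existing conditions",
    "claim-002": "Date of loss: 2021-03-14",
    "claim-003": "Appeal filed 2022-01-05",
}
print(f"Error rate on the pilot set: {error_rate(outputs, gold):.0%}")  # 33%
```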

 

Designing AI for Regulated Legal Work  

The lesson from Nippon Life v. OpenAI is not that artificial intelligence should be excluded from legal work. The real lesson is that AI must be designed for use within regulated professional environments that require accountability, traceability, and oversight. 

 

The legal profession does not need another generic “answer engine.” What it needs is defensible evidence intelligence: systems that help professionals analyze facts, navigate complex records, and produce work that is verifiable, source-supported, and professionally accountable. 

 

Platforms such as VerixAi reflect this emerging model. Rather than generating legal advice, VerixAi is designed to help legal and medical-legal professionals analyze complex evidence, extract structured facts, and generate source-verifiable work product within secure, auditable, and compliant workflows. 

 

In litigation, medical record review, and expert analysis, the value of AI is not replacing professional judgment. The value lies in reducing review time while simultaneously increasing transparency, traceability, and defensibility of the analytical process. 

 

As courts and regulators begin to define the boundaries of legal AI, systems built around auditability, verification, and professional oversight are likely to become the most trusted architecture for AI in regulated professions. 

 

 

Frequently Asked Questions About AI and the Unauthorized Practice of Law 

 

Can AI legally provide legal advice? 

In most U.S. jurisdictions, providing legal advice without a license may constitute the unauthorized practice of law (UPL). AI tools can provide general legal information, but offering case-specific legal recommendations, strategic guidance, or representation-like assistance may raise regulatory issues. 

What is the Nippon Life v. OpenAI lawsuit about? 

The lawsuit alleges that ChatGPT helped a disability claimant generate pro se filings to reopen a previously settled case. The plaintiff claims this constitutes unauthorized legal assistance delivered through an AI system operated by a technology provider. 

Can AI help people represent themselves in court? 

AI tools can help individuals understand legal procedures, draft documents, and organize evidence. However, courts and regulators continue to evaluate how far these tools can go before they are considered legal advice rather than general legal information or procedural assistance. 

What are the risks of using AI in legal work?  

Major risks include: 

  • hallucinated case law 
  • fabricated or inaccurate citations 
  • incomplete document analysis or missed facts 
  • privacy, confidentiality, and attorney-client privilege exposure 

 

Because of these risks, AI should typically be used as a drafting, research, and evidence analysis tool rather than an autonomous legal advisor. 

How should law firms use AI safely? 

Best practices include:  

  • requiring qualified human review of all AI-generated outputs 
  • using platforms with source-linked citations and verifiable evidence references 
  • maintaining audit logs, usage tracking, and data governance controls 
  • avoiding consumer AI tools for confidential, privileged, or regulated case materials 
