Rule 11 and the duty of candor were enough. When lawyers in Mata v. Avianca submitted briefs full of fake citations invented by ChatGPT, the court didn't need a special "AI rule." It used the rules that were already there. State bars and the ABA are doing the same: they're not building a parallel ethics universe for AI. They're saying the existing rules apply. Your use of generative AI has to fit inside them. For law firms using AI in research and discovery, the live question is what you have to do so that use doesn't blow up privilege, trigger sanctions, or put client confidences at risk.
One Framework, Many Voices
In July 2024 the ABA's Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, the first comprehensive national ethics guidance on lawyers' use of generative AI. It doesn't add new obligations. It maps existing Model Rules (competence, confidentiality, communication, fees, candor, supervision) onto AI use. You still have to be competent, protect client information, communicate with the client, bill fairly, be candid with the tribunal, and supervise people and tools. The opinion's contribution is to spell out what that means when the "tool" can hallucinate cases, retain prompts, or feed data to third parties.
State bars have been moving in the same direction, often ahead of or in parallel with the ABA. Florida was early: its Board of Governors adopted Ethics Advisory Opinion 24-1 in January 2024, explicitly allowing generative AI but tying that permission to confidentiality safeguards, competence, billing integrity, and supervision. California's State Bar had already published practical guidance in late 2023 and has continued to treat it as a living document. North Carolina adopted Formal Ethics Opinion 2024-1 in November 2024; the New York City Bar issued Formal Opinion 2024-5 in August 2024. The details vary by jurisdiction, but the theme is consistent: you can use AI, but you remain responsible for the output, the data you put in, and the way you supervise its use.
"State bar AI ethics rules are tightening" doesn't mean a wave of new black-letter rules. Bars are making explicit what was always implicit. Examiners and disciplinary counsel will have clear citations when they ask whether you verified AI-generated research or put confidential client data into a consumer chatbot.
Research: Verification Isn't Optional
Mata v. Avianca is the canonical warning. Attorneys used ChatGPT to draft court submissions; the model fabricated case names, quotes, and citations. The lawyers didn't verify. When the court and opposing counsel pressed, the lawyers were slow to own the error. The court imposed sanctions, including a $5,000 penalty and a requirement to notify the judges whose names had been falsely attached to non-existent opinions. The court was clear: using AI isn't improper by itself, but you have a gatekeeping duty. For research and memos, that means every citation and every case proposition has to be checked before it reaches a filing or a client.
The ABA and state opinions spell this out under competence and candor. You don't have to be an AI expert, but you do have to understand that generative AI is probabilistic. It can sound right and be wrong. The workflow has to assume the model can hallucinate. For research, that implies: run AI-assisted research as a draft step, then verify every citation and key holding in a trusted source (Westlaw, Lexis, or the court's own docket). If you can't verify it, you don't cite it. Some firms are layering AI with traditional research platforms that anchor results to real authority; others are using AI for brainstorming or issue-spotting and reserving citation and holding-check to humans and verified databases. Either way, the ethics standard is the same: the lawyer signs the work, so the lawyer owns its accuracy.
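If you want to make that gatekeeping step mechanical, one option is a checklist generator that treats every citation-shaped string in a draft as unverified until a person signs off. The sketch below is a minimal illustration in Python, not a citator: the regex covers only a few federal reporter formats, and the field names are hypothetical.

```python
import re

# Hypothetical sketch: pull citation-shaped strings out of a draft and emit a
# verification checklist. The pattern covers only a few federal reporters;
# a real workflow would use a proper citator, not a regex.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                                   # volume
    r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)"   # reporter
    r"\s+\d{1,5}\b"                                                   # first page
)

def verification_checklist(draft_text: str) -> list[dict]:
    """Every citation-like string starts unverified; a human flips the flag."""
    return [
        {"citation": m.group(0), "verified": False, "checked_by": None}
        for m in CITATION_PATTERN.finditer(draft_text)
    ]

draft = "See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)."
for item in verification_checklist(draft):
    print(item)  # {'citation': '678 F. Supp. 3d 443', 'verified': False, ...}
```

The point isn't the regex; it's the default. Nothing leaves the draft stage marked verified until a human has checked it against a trusted source.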
Discovery and Confidentiality: Where the Rules Bite Hardest
Research is one risk vector. Discovery and client data are another, and here the recent case law is stark. In United States v. Heppner, in the Southern District of New York, the defendant had used a consumer version of Claude to analyze his legal exposure and prepare materials after receiving a grand jury subpoena. He later shared those materials with his counsel and asserted attorney-client privilege and work product. Judge Rakoff held that the AI-generated materials were not protected. The reasoning cut across the usual privilege tests: the communications were with the AI platform, not with a lawyer; the platform's terms allowed retention and disclosure of user data, so there was no reasonable expectation of confidentiality; and the defendant had used the tool on his own, not at counsel's direction. Sharing the output with counsel afterward didn't retroactively cloak it.
For law firms, the implication is direct. If a lawyer or a client puts confidential or privileged information into a consumer or general-purpose generative AI product, they may be waiving privilege and breaching confidentiality. Many bar opinions now tell lawyers to check data retention, use policies, and whether the provider uses inputs for training or shares them with third parties. For discovery work (reviewing documents, summarizing depositions, drafting discovery requests), putting client data or work product into such a system without robust contractual and technical safeguards is a serious risk. The safe move is to use enterprise or counsel-directed tools with clear confidentiality and no-training commitments, and to treat consumer chatbots as off-limits for any matter-specific or client-identifying information.
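What does "check the terms" actually involve? At minimum: does the provider train on your inputs, retain them, or share them with third parties, and is it contractually bound to confidentiality? Here's a minimal sketch, assuming a firm records each vendor's terms in a structured form; the field names and example vendors are hypothetical, and the real review is a legal judgment, not a boolean.

```python
from dataclasses import dataclass

# Hypothetical sketch: record each vendor's key contractual terms so the
# "can client data go in?" question is explicit and auditable. Field names
# and example vendors are illustrative, not real products or real terms.
@dataclass
class VendorTerms:
    name: str
    trains_on_inputs: bool            # are prompts used to train models?
    retains_prompts: bool             # retained beyond the session? needs legal review
    shares_with_third_parties: bool
    confidentiality_commitment: bool  # contractual duty of confidentiality

def cleared_for_client_data(t: VendorTerms) -> bool:
    """Hard gates: no training on inputs, no third-party sharing, and a
    confidentiality commitment. Retention terms still get case-by-case review."""
    return (not t.trains_on_inputs
            and not t.shares_with_third_parties
            and t.confidentiality_commitment)

consumer = VendorTerms("consumer-chatbot", True, True, True, False)
enterprise = VendorTerms("enterprise-legal-llm", False, True, False, True)
print(cleared_for_client_data(consumer))    # False -> off-limits for matter data
print(cleared_for_client_data(enterprise))  # True, subject to counsel direction
```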
Supervision and Policy: Making It Stick
Bar guidance consistently ties AI use to supervision (Model Rules 5.1 and 5.3). The lawyer is responsible for the work of associates, paralegals, and contract lawyers, and for the tools they use. "The associate used ChatGPT and I didn't know" is not a defense. Firms need a policy that says what's allowed (e.g., which tools, for what tasks), what's required (verification of research, no client data in consumer AI), and who is accountable. Training isn't optional either: competence requires understanding the tools' limitations, and many opinions recommend that lawyers learn the basics of how the tools work and where they fail.
From a risk perspective, the policy should also address discovery and confidentiality explicitly: no input of confidential or privileged information into systems that don't guarantee confidentiality and don't commit not to use data for training. That may mean whitelisting specific enterprise or legal-tech products and prohibiting consumer-grade use for anything beyond de-identified or hypothetical inputs.
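One way to make a whitelist operational is to express it as data that intake and review workflows can check. A hypothetical sketch; the tool names, task labels, and data classifications are placeholders for whatever your actual policy defines:

```python
# Hypothetical policy-as-data sketch: whitelisted tools mapped to permitted
# tasks and the most sensitive data class each may receive. All names and
# categories are placeholders for a firm's actual policy.
AI_TOOL_POLICY = {
    "enterprise-legal-llm": {
        "tasks": {"research-draft", "summarization", "discovery-review"},
        "max_data_class": "client-confidential",  # no-training, confidentiality terms
    },
    "consumer-chatbot": {
        "tasks": {"brainstorming"},
        "max_data_class": "de-identified",        # hypotheticals only, no client data
    },
}
DATA_CLASSES = ["public", "de-identified", "client-confidential"]  # low -> high

def is_permitted(tool: str, task: str, data_class: str) -> bool:
    policy = AI_TOOL_POLICY.get(tool)
    if policy is None or task not in policy["tasks"]:
        return False
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(policy["max_data_class"])

assert is_permitted("consumer-chatbot", "brainstorming", "de-identified")
assert not is_permitted("consumer-chatbot", "brainstorming", "client-confidential")
assert not is_permitted("consumer-chatbot", "discovery-review", "de-identified")
```

The design point is that "which tool, for what task, with what data" becomes a question the workflow answers the same way every time, instead of each lawyer's guess.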
What to Do Now
The tightening isn't a single new rule. It's the convergence of bar opinions, court decisions, and examiner expectations. For law firms using AI for research and discovery, the checklist is straightforward: align with your jurisdiction's bar guidance (and the ABA's Opinion 512 as a baseline), treat every AI-generated citation or legal proposition as unverified until checked, and assume that putting client or matter data into consumer-grade or unvetted AI systems risks waiving privilege and breaching confidentiality. Restrict discovery and client work to tools and workflows that protect both. Put in place a firm-wide policy and training so supervision is real, not theoretical.
The rules were always there. The bars and courts are now making clear they apply to how you use AI.
We help law firms and professional services design AI policy, risk assessment, and compliance controls. Get in touch.