Healthcare AI answers to several regulators, and they're not coordinating. Deploy or build AI that touches diagnosis, treatment, documentation, or clinical decision support and you're in the crosshairs of FDA device policy, HIPAA's Security and Privacy Rules, and (depending on where your patients and providers sit) state laws that mandate when and how you tell people AI is in the room. Each regime cares about something different. Getting one right doesn't get you the others. The challenge: build one program that satisfies all three without running three separate compliance tracks.
FDA: Safety and Lifecycle, Not Just Clearance
The FDA has been clarifying how it thinks about AI in medical devices for years. The big shift recently is that the agency is treating AI-enabled device software as something that changes—and that those changes need to be planned, not ad hoc. In December 2024 the agency finalized guidance on Predetermined Change Control Plans (PCCPs) for AI-enabled device software. The idea: you describe in advance the kinds of modifications you expect to make (e.g., retraining on new data, algorithm updates), how you'll validate them, and how you'll assess impact. If the plan is accepted as part of your marketing submission, you can implement those changes within the plan's bounds without a new 510(k) or PMA each time. That matters because the old model—submit every time the model updates—was a poor fit for iterative ML. PCCPs don't remove oversight; they make it continuous and pre-agreed.
Then in January 2025 the FDA put out draft guidance on lifecycle management for AI-enabled device software. This is the first document that explicitly addresses the full lifecycle: design, development, validation, deployment, and postmarket monitoring. It emphasizes that AI performance can drift as input data and populations shift, so postmarket performance monitoring isn't optional. It also pushes on transparency and bias—sponsors need to show they've considered and addressed bias-related risks. Comment period closed in April 2025; expect a final version that shapes what sponsors have to document and demonstrate. For anyone building or integrating AI that could be a device (diagnostic support, treatment recommendation, clinical decision support), the message is clear: the FDA is moving from "clear it once" to "show us how you'll manage it over time."
None of that tells you whether you can send PHI to a cloud model or what you have to say to the patient. HIPAA and the states fill that gap.
HIPAA: No "HIPAA-Certified AI," Only How You Use It
HIPAA compliance is a property of how you operate: how you handle PHI, who has access, what you've contracted with vendors, and how you respond when something goes wrong. There is no product sticker. Any system that creates, receives, maintains, or transmits PHI is in scope. If your AI scribe, decision-support tool, or analytics pipeline touches identifiable health information, you need BAAs with vendors, Security Rule–aligned controls (access management, encryption, audit logging, risk analysis), and a clear account of where data goes and who can see it. The proposed Security Rule updates (e.g., making "addressable" implementation specifications required and tightening encryption expectations) would raise the bar further if finalized.
Two nuances trip people up. Training on PHI: if you or your vendor trains models on patient data, that is a use of PHI and needs a permissible basis. De-identification means either a formal expert determination that re-identification risk is very small or Safe Harbor removal of all 18 specified identifier categories; "we removed the names" satisfies neither. Federated learning, synthetic data, or properly de-identified datasets with documentation are the defensible paths. Consent and disclosure: HIPAA governs use and disclosure of PHI; it doesn't by itself require you to tell the patient "an AI helped write this note" or "an AI assisted in this decision." State law has stepped in there.
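For teams working with structured records, a minimal sketch of what Safe Harbor-style scrubbing looks like is below. The field names are hypothetical and the standard covers 18 identifier categories, with free-text notes being where most leakage hides; the expert-determination route is a separate statistical analysis. Treat this as an illustration of the shape of the work, not a compliance tool.

    # Illustrative Safe Harbor-style scrubber for a structured record.
    # Field names are hypothetical; the standard covers 18 identifier
    # categories, and free-text fields need far more than this.
    from typing import Any

    DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "ip_address"}

    def safe_harbor_scrub(record: dict[str, Any]) -> dict[str, Any]:
        out: dict[str, Any] = {}
        for key, value in record.items():
            if key in DIRECT_IDENTIFIERS:
                continue                          # drop direct identifiers entirely
            if key == "zip":
                out[key] = str(value)[:3] + "00"  # at most first 3 digits, and only for populous areas
            elif key in {"dob", "admit_date", "discharge_date"}:
                out[key] = str(value)[:4]         # dates reduced to year
            elif key == "age" and int(value) >= 90:
                out[key] = "90+"                  # ages 90 and over must be aggregated
            else:
                out[key] = value
        return out

    print(safe_harbor_scrub(
        {"name": "Jane Doe", "zip": "78701", "dob": "1952-03-14", "age": 73, "dx": "I10"}
    ))
    # {'zip': '78700', 'dob': '1952', 'age': 73, 'dx': 'I10'}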
Enforcement is real. OCR's risk-analysis initiative has driven multiple settlements. And in November 2025, Sharp HealthCare was hit with a proposed class action over its use of an ambient AI scribe: the suit alleges that conversations were recorded without consent, and that the resulting medical records contained false boilerplate stating patients had been advised and had consented. Whether the claims prevail or not, the case is a signal. Deploying AI in the exam room without clear consent and without accurate documentation of that consent is a litigation and regulatory target. HIPAA and state privacy/wiretap laws can both be in play.
State Disclosure Laws: Who Must Know, When, and How
States are filling the gap that federal law leaves on transparency to the patient. The resulting requirements are not harmonized.
Texas has gone the furthest so far. Effective January 1, 2026, healthcare providers must make clear, conspicuous, plain-language disclosures when using AI in diagnostic or treatment-related services, at the time care is first provided (with limited emergency exceptions). Disclosure can be via hyperlink but can't use dark patterns. The obligation is explicit and front-loaded: the patient must be told that AI is involved in the diagnostic or treatment pathway.
California has layered two laws. AB 3030 (effective January 2025) requires disclaimers when generative AI is used to produce written, audio, or video clinical communications, including instructions for contacting a human provider. If a licensed healthcare provider reviews the content before it goes out, the disclaimer isn't required. AB 489 (signed October 2025) prohibits AI systems from implying licensed medical oversight where none exists. You can't position an AI as "supervised" when no one with a license is actually in the loop.
Other states are moving in similar directions: disclosure when AI is used in care, prohibitions on AI standing in for licensed judgment in certain contexts (e.g., mental health in Illinois and Nevada), and requirements that humans retain responsibility for medical necessity and that AI cannot be the sole basis for denying care (e.g., prior auth in Arizona, California). The patchwork is growing. If you operate in multiple states, you need a map of who gets what disclosure, when, and in what form.
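One way to keep that map usable is to encode it as data your intake and charting workflows can query. The sketch below is illustrative only: the entries paraphrase the Texas and California rules discussed above at a very high level and are assumptions about how you might model them, not legal reference data.

    # Hypothetical per-state disclosure map; entries are simplified
    # paraphrases, not authoritative statements of any statute.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DisclosureRule:
        trigger: str          # what kind of AI use triggers the duty
        timing: str           # when the patient must be told
        form: str             # how the notice must be delivered
        exception: str = ""   # carve-outs, if any

    DISCLOSURE_MAP: dict[str, DisclosureRule] = {
        "TX": DisclosureRule(
            trigger="AI used in diagnostic or treatment-related services",
            timing="by the time care is first provided",
            form="clear, conspicuous, plain language; hyperlink allowed, no dark patterns",
            exception="limited emergency exceptions",
        ),
        "CA": DisclosureRule(
            trigger="generative AI produces patient-facing clinical communications",
            timing="with the communication itself",
            form="disclaimer plus instructions for reaching a human provider",
            exception="not required if a licensed provider reviews the content first",
        ),
    }

    def disclosure_rule(state: str) -> Optional[DisclosureRule]:
        return DISCLOSURE_MAP.get(state)

The point isn't the code; it's that the obligations become queryable facts your workflows can enforce rather than tribal knowledge.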
Where the Three Regimes Overlap (and Where They Don't)
FDA cares whether the device is safe and effective and whether you're managing changes and monitoring performance. HIPAA cares whether PHI is protected and whether you've got the right agreements and safeguards. State disclosure laws care whether the patient is told that AI is in the loop. They can reinforce each other: good documentation of how you use AI (for FDA and internal governance) can support your story for HIPAA and for disclosure. But they can also pull in different directions. A disclosure that satisfies Texas might need to be more prominent than a small-print link; a vendor that's great for FDA-style validation might be vague on BAA terms and subprocessor data flows. You can't assume that "we're FDA-cleared" or "we're HIPAA compliant" covers the state obligation to disclose. And you can't assume that a one-time consent at intake satisfies every state's timing and prominence rules.
The Sharp case is a reminder that consent and documentation matter. Even if a BAA is in place and the vendor is a business associate, recording conversations and generating chart text without clear, accurate consent and without truthful documentation of that consent creates risk under wiretapping laws, state medical privacy laws, and potentially HIPAA if the use or disclosure wasn't properly authorized. The dual pressure isn't just "three checklists." The same deployment has to be defensible on safety (FDA), privacy and security (HIPAA), and transparency and consent (state law). The evidence and controls that support one don't automatically satisfy the others.
What to Do: One Program, Three Outputs
Treat the three regimes as different views of the same deployment, not three separate projects. Build one core program: inventory of AI use cases, risk classification (device vs. non-device, PHI touchpoints, high-risk clinical decisions), vendor and BAA management, and documentation of how you monitor performance and handle changes. From that base you produce what each regime needs. For FDA: lifecycle documentation, PCCP if you're in that world, and postmarket monitoring. For HIPAA: BAAs, risk analysis, Security Rule controls, and incident response. For states: a disclosure strategy that knows which states require what, when disclosure must occur, and what the notice must say—then implement it in intake, consent, and charting so you're not retrofitting later.
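As a sketch of what "one program, three outputs" can look like in practice, the record below is hypothetical: the fields and the three views are assumptions about how you might structure an inventory entry so that FDA, HIPAA, and state-disclosure reporting all read from the same source.

    # Hypothetical inventory record for one AI deployment,
    # with three regime-specific views of the same facts.
    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        name: str
        is_potential_device: bool           # diagnostic, treatment, or CDS functionality?
        touches_phi: bool                   # does PHI flow to or through it?
        patient_facing_states: list[str] = field(default_factory=list)
        vendor: str = ""
        baa_signed: bool = False
        monitoring_plan: str = ""

        def fda_view(self) -> dict:
            return {
                "device_analysis_needed": self.is_potential_device,
                "postmarket_monitoring": self.monitoring_plan or "MISSING",
            }

        def hipaa_view(self) -> dict:
            return {
                "phi_in_scope": self.touches_phi,
                "baa_gap": self.touches_phi and bool(self.vendor) and not self.baa_signed,
            }

        def state_view(self) -> dict:
            return {"disclosure_states_to_check": self.patient_facing_states}

    scribe = AIUseCase("ambient scribe", is_potential_device=False, touches_phi=True,
                       patient_facing_states=["TX", "CA"], vendor="ExampleVendor")
    print(scribe.hipaa_view())   # {'phi_in_scope': True, 'baa_gap': True}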
If you're in healthcare AI, the dual pressure isn't going away. FDA will keep pushing on lifecycle and transparency; HIPAA will keep focusing on PHI and vendor accountability; states will keep adding disclosure and consent rules. The organizations that do well will be the ones that stop treating these as separate silos and build one program that feeds all three.
We help healthcare teams align FDA, HIPAA, and state disclosure in one program. Contact us for independent AI risk assessments and compliance program design.