The RiskReview Blog
Practical perspectives on AI risk, security testing, compliance, and governance.
Risk Classification · AI Governance · Use Cases · Screening
Red Light, Yellow Light, Green Light: How to Classify AI Use Cases by Risk
Not every AI application needs the same level of oversight. How to build a risk-tiered classification system: prohibited uses, cautious uses requiring human review, and standard uses. Plus the screening process that routes each new use case to the right level of scrutiny.
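A screening pass like the one this post describes could be sketched as a simple rules check that routes each proposed use case to a tier. The tier names, prohibited purposes, and review triggers below are illustrative assumptions, not the post's actual framework:

```python
# Illustrative sketch of a risk-tiered screening pass for new AI use cases.
# The prohibited purposes and review triggers are hypothetical examples.

PROHIBITED_USES = {"social_scoring", "covert_biometric_id"}
HUMAN_REVIEW_TRIGGERS = {"employment", "lending", "healthcare"}

def classify_use_case(purpose: str, domain: str) -> str:
    """Route a proposed AI use case to red / yellow / green."""
    if purpose in PROHIBITED_USES:
        return "red"     # prohibited: reject outright
    if domain in HUMAN_REVIEW_TRIGGERS:
        return "yellow"  # cautious: requires human review before launch
    return "green"       # standard: normal change management applies

print(classify_use_case("chat_support", "customer_service"))  # green
print(classify_use_case("resume_ranking", "employment"))      # yellow
```

The point of a pass like this is that the routing logic lives in one auditable place, so adding a new prohibited use or review trigger is a one-line change rather than a policy rewrite.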
Healthcare AI · FDA · HIPAA · State Disclosure · Compliance
Healthcare AI Under Dual Pressure: FDA Engagement, HIPAA Intersections, and State Disclosure Laws
Healthcare AI sits at the intersection of three regimes: FDA device oversight, HIPAA's privacy and security rules, and a growing patchwork of state disclosure laws. They ask different questions—safety, PHI, transparency—and they don't line up. What that means in practice.
Insurance · AI Governance · Risk Management · Verisk · ISO · Renewal
Verisk's New AI Exclusion Forms: What Changed on January 1, 2026 and How to Negotiate Your Renewal
ISO's new CGL and products liability exclusions for generative AI went live January 2026. What the forms actually do, why carriers are adopting them (and the 'absolute' variants), and how to push back at renewal—definitions, lead-in language, carve-backs, and why your D&O and E&O may be next.
Security · RAG · Vector Database · Embeddings · LLM Security · OWASP · Poisoning
Five Poisoned Documents Can Manipulate Your RAG System 90% of the Time
RAG poisoning through retrieval manipulation—how a single optimized document dominates results, why vector embeddings aren't the safe proxy teams assumed, and OWASP LLM08:2025 Vector and Embedding Weaknesses.
Security · AI Supply Chain · AI-BOM · NDAA · Model Security · Data Poisoning
AI Supply Chain Security: From Training Data Provenance to Model Weight Integrity
The DoD's NDAA now requires AI-BOMs (AI Bills of Materials). Poisoned repositories (Basilisk Venom), backdoored fine-tuning datasets, compromised model weights, and how to apply software supply chain thinking to ML pipelines.
Security · Prompt Injection · OWASP · LLM Security · Defense in Depth
Prompt Injection Is Still OWASP's #1 LLM Risk in 2026 — And It's an Architectural Problem, Not a Filter Problem
Why prompt injection persists despite two years of defenses: the fundamental ambiguity between instructions and data in LLMs, why guardrails reduce but never eliminate risk, and what defense-in-depth actually looks like.
AI Governance · NIST CSF · ISO 27001 · GRC · Integration
Integrating AI Governance Into Your Existing Security and Compliance Programs (Not Building a Parallel One)
AI risks belong in the enterprise risk register. AI controls belong in the security program. AI documentation belongs in the audit trail you already maintain. How to layer AI governance onto NIST CSF, ISO 27001, and your existing GRC tooling instead of creating a standalone program nobody maintains.
Continuous Monitoring · Model Drift · AI Operations · Telemetry
Continuous Monitoring for AI Systems: What to Watch After Deployment
Model drift, output quality degradation, permission creep, data pipeline changes, RAG index updates, and behavioral anomalies. How to build an ongoing monitoring program that catches problems before they become incidents: telemetry to collect, thresholds to set, and who gets alerted.
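A monitoring program of the kind this post outlines often reduces to comparing live telemetry against fixed thresholds and routing breaches to an owner. The metric names, threshold values, and owning teams below are illustrative assumptions, not recommendations from the post:

```python
# Minimal sketch of threshold-based alerting for a deployed AI system.
# Metric names, limits, and owners are hypothetical placeholders.

THRESHOLDS = {
    "embedding_drift": 0.15,  # drift score vs. training-time baseline
    "refusal_rate": 0.05,     # share of outputs flagged as degraded
    "p95_latency_ms": 2000,   # serving latency budget
}

OWNERS = {
    "embedding_drift": "ml-team",
    "refusal_rate": "product-team",
    "p95_latency_ms": "platform-team",
}

def check_telemetry(metrics: dict) -> list:
    """Return (metric, owner) pairs for every threshold breach."""
    return [
        (name, OWNERS[name])
        for name, limit in THRESHOLDS.items()
        if metrics.get(name, 0.0) > limit
    ]

alerts = check_telemetry({"embedding_drift": 0.22, "refusal_rate": 0.01})
print(alerts)  # [('embedding_drift', 'ml-team')]
```

Keeping thresholds and ownership in declarative tables like this makes the alerting policy itself reviewable, which matters when the question after an incident is "who was supposed to be watching this metric?"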
SEC · AI-Washing · Disclosure · Enforcement · Securities · AI Governance
AI-Washing Is the New Greenwashing: How SEC Enforcement Actions Are Targeting Inflated AI Claims
The SEC is treating inflated AI claims like greenwashing—same disclosure logic, same enforcement bite. From investment advisers to public companies, here's what the agency is actually charging and what it means for how you describe AI in filings and marketing.
Fair Lending · AI Governance · Fintech · Disparate Impact · Compliance · Massachusetts · ECOA
AI in Lending Decisions: The $2.5M Massachusetts Settlement and What It Means for Fintech Compliance
Massachusetts settled with Earnest Operations over AI underwriting that harmed Black, Hispanic, and non-citizen applicants. The settlement is a state-level fair-lending shot across the bow—and a practical map of what to test, what to document, and what to remove from your models.
D&O · E&O · Management Liability · AI Exclusions · Insurance · AI Governance · Risk
D&O, E&O, and AI: The Exclusions Creeping Into Management Liability Policies
Insurers are inserting broad AI exclusions into D&O, E&O, and management liability—not just cyber. Here's what the language actually does, why it's showing up now, and what to do at renewal.
DOJ · AI Litigation Task Force · State AI Laws · Preemption · Colorado AI Act · Federalism
The DOJ AI Litigation Task Force: Which State Laws Are Most Likely to Be Challenged
The Task Force is charged with challenging state AI laws that conflict with federal policy. Colorado’s algorithmic-discrimination regime is in the crosshairs; California’s employment rules and Texas’s TRAIGA are next in line. Here’s the pipeline, the legal theories, and what actually has to happen before any law falls.