Built by someone who's
been on both sides
Matthew Keeley founded RiskReview.AI after spending years building AI systems and then breaking them. He saw the gap between what organizations needed and what they were getting.
Matthew Keeley
Founder & Principal Security Engineer
Offensive security & AI systems
AI risk assessment for regulated industries
Evidence-first, engineering-led
The Beginning
Matthew started his career in offensive security, working on red teams and penetration testing for Fortune 500 companies. He spent years finding vulnerabilities in traditional software systems: web applications, APIs, infrastructure. The playbook was well-established: test authentication, check for injection flaws, validate access controls, probe for misconfigurations.
Then he moved into building AI systems. He worked on ML platforms, LLM-powered applications, and AI infrastructure for regulated industries. He saw firsthand how AI systems introduced entirely new classes of risk: prompt injection, model extraction, training data poisoning, adversarial examples. These were risks that traditional security assessments completely missed.
The Problem
When organizations needed to prove their AI systems were secure and compliant, they turned to consultants who understood compliance frameworks but not AI systems. Or they turned to AI companies who understood models but not security. The result was the same: assessments that checked boxes but didn't actually test anything.
Matthew watched procurement teams at financial institutions struggle to evaluate AI vendors. He saw healthcare organizations deploy AI systems without understanding the attack surface. He watched legal teams ask for "AI compliance certifications" and receive documents that were essentially marketing materials.
The gap was clear: there was no one doing adversarial security testing specifically for AI systems, with the rigor that regulated industries required, and the evidence that procurement and compliance teams needed.
Why RiskReview.AI
RiskReview.AI exists because Matthew believed there should be a third option: security engineers who understand both AI systems and how to break them, who can provide the evidence-backed assessments that regulated industries actually need.
The approach is straightforward: treat AI risk assessment like penetration testing. Test the actual system, not the documentation. Find real vulnerabilities, not theoretical ones. Provide evidence, not opinions. Give organizations the technical certainty they need to make decisions about AI adoption, vendor selection, and regulatory compliance.
Every assessment follows the same methodology: adversarial testing, evidence-backed findings, transparent scoring, and actionable remediation guidance. No compliance theater. No vendor lock-in. Just the technical rigor that organizations deploying AI in regulated environments actually need.
The Approach
RiskReview.AI doesn't sell tools, platforms, or implementation services. The credibility of every assessment depends on objectivity. If we're going to tell a client their AI system has vulnerabilities, we need to be able to prove it, and we need to have no financial incentive to find problems that don't exist.
This is security engineering, not consulting. Every finding is backed by evidence. Every score is backed by a transparent rubric. Every recommendation is actionable. The goal isn't to create dependency. It's to give organizations the information they need to make their AI systems more secure, more compliant, and more trustworthy.
Security engineering
meets AI systems
Offensive Security
Years of red team experience testing enterprise systems, finding vulnerabilities, and helping organizations understand their real security posture.
AI Systems Engineering
Built ML platforms and LLM-powered applications for regulated industries, understanding both how these systems work and where they fail.
Regulated Industries
Experience working with financial services, healthcare, and legal organizations that need evidence-backed security assessments.
Independent Assessment
No vendor relationships, no tool sales, no implementation services. Objectivity is the foundation of credible security assessment.
Ready to assess
your AI systems?
Get an independent, evidence-backed assessment of your AI systems' security, compliance, and risk posture.
Request a Review