Fair Lending, AI Governance, Fintech, Disparate Impact, Compliance, Massachusetts, ECOA

AI in Lending Decisions: The $2.5M Massachusetts Settlement and What It Means for Fintech Compliance


$2.5 million. That's what Massachusetts extracted from Earnest Operations LLC in July 2025 over AI underwriting that allegedly discriminated against Black, Hispanic, and non-citizen applicants. Earnest denied it and didn't admit wrongdoing (standard for these deals). The assurance of discontinuance still spells out what the state claimed and what the company agreed to change. For fintechs and lenders running algorithms in credit decisions, the takeaway isn't "one bad actor." It's where enforcement is heading: state AGs in fair-lending mode, focused on how you build and test your models, not just intent.

What Massachusetts Said Went Wrong

The AG's office alleged several distinct failures. Read together, they're a checklist of what not to do in AI lending.

Cohort default rate as a model variable. Earnest used the federal Cohort Default Rate (CDR), the average loan default rate by school, in its underwriting model. CDR is real and observable. It's also a well-known proxy for race and ethnicity. Black and Hispanic students are disproportionately concentrated at institutions with higher CDRs: HBCUs, HSIs, many regional and for-profit schools. When a model penalizes applicants from high-CDR schools, it systematically penalizes those demographics. The AG alleged that's what happened: Black and Hispanic applicants got worse terms or denials than similarly creditworthy White applicants. CDR feels "objective" because it's published by the Department of Education. That's exactly why it's dangerous. Objective-looking inputs that correlate strongly with protected classes still produce disparate impact. Regulators have flagged this before. The CFPB has warned that using CDR in eligibility and pricing may have a disparate impact on minority students; the FDIC and the old CFPB went after Sallie Mae and Navient in 2014 for using CDR in scoring. Massachusetts is the latest signal: school-level default rates in a black box are a fast path to a consent order.
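To see the mechanism in miniature, here is a short synthetic sketch in Python. Nothing in it comes from Earnest's model or data; the group shares, CDR distributions, and scoring rule are invented purely to show how a school-level input that correlates with group membership moves approval rates even though the model never sees race or ethnicity.

```python
import numpy as np

# Synthetic illustration only: invented numbers, not Earnest's model or data.
rng = np.random.default_rng(0)
n = 20_000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])

# Hypothetical: group B is disproportionately concentrated at higher-CDR schools.
cdr = np.where(group == "B",
               rng.normal(0.12, 0.03, n),
               rng.normal(0.07, 0.03, n)).clip(0, 1)
# Identical credit-score distribution for both groups.
credit_score = rng.normal(690, 40, n)

# Toy underwriting score that penalizes applicants from high-CDR schools,
# then approves the best 60% of applicants by score.
risk = -0.01 * credit_score + 8.0 * cdr
approved = risk < np.quantile(risk, 0.6)

for g in ("A", "B"):
    print(f"group {g}: approval rate {approved[group == g].mean():.1%}")
# Same credit-score distribution, different approval rates: the school-level
# variable carried the group disparity into the decision on its own.
```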

Immigration status as a knockout. Until 2023, Earnest automatically denied applicants who didn't have a green card, without assessing creditworthiness. That's disparate treatment on the basis of immigration status. ECOA and Reg B don't list citizenship or immigration status as a prohibited basis for all products, but the AG alleged the practice was unlawful under state consumer protection and fair-lending standards. Either way, hard knockouts based on immigration or citizenship with no individualized look are high risk. Many lenders have moved to underwrite non-citizens on the merits; the settlement is a reminder that "no SSN, no loan" is under scrutiny.

Training on human discretion. The AG alleged that Earnest trained its models on historical human decisions that were "arbitrary" and "discretionary." The model learned from underwriter behavior that wasn't itself standardized or justified. Biased or inconsistent past decisions got baked in. Garbage in, garbage out. Lenders that use ML trained on legacy decisions need to ask what those decisions were based on and whether they're a sound basis for automation. If the training set is "whatever underwriters did," without clear policy or fair-lending review, you're carrying forward whatever bias or noise was in the room.

No meaningful disparate impact testing. The state claimed Earnest didn't adequately test its models for disparate impact, and didn't mitigate fair-lending risk when testing did flag it. The problem wasn't only the choice of variables (CDR) or the knockout rule; it was the absence of a disciplined testing and mitigation process. That aligns with what the CFPB has been saying. In its January 2025 Supervisory Highlights on advanced technologies, the Bureau stated plainly that there is no "advanced technology" exception to ECOA and Reg B. Lenders have to evaluate models for both disparate treatment and disparate impact, document business necessity where impact exists, and consider less discriminatory alternatives. Massachusetts didn't invent that idea; it enforced it at the state level with real money and ongoing reporting.

Adverse action notices. The AG also alleged that Earnest sent inaccurate adverse action notices. Applicants couldn't understand why they were denied or what was driving the decision. When the model is complex and the reasons aren't validated or explainable, you run into both fair-lending risk and FCRA/Reg B risk. The CFPB has called out failures to adequately test and validate the methodology used to generate reasons in adverse action notices when those reasons come from complex models. Massachusetts folded that into the same matter. "Our model said no" isn't enough. You need a defensible way to explain the principal reasons to the applicant.

State Enforcement When Federal Steps Back

The settlement is notable for who brought it. Massachusetts used its own consumer protection and fair-lending authority (including Chapter 93A) to reach a result that federal agencies might not have pursued in the same way at the same time. State AGs have been more active on disparate impact and algorithmic discrimination in lending and housing; the Earnest matter fits that pattern. For fintechs and banks, "we're compliant with the CFPB" isn't a complete answer. State AGs can use state law to target the same conduct, and they may be more willing to push on disparate impact and model design than the current federal posture suggests. A multi-state strategy (knowing which states are aggressive on fair lending and AI) is part of compliance now.

What the Settlement Requires Going Forward

Earnest agreed to pay $2.5 million and to adopt a set of business practice changes. The AOD doesn't spell out every procedural detail. Broad strokes: establish and follow policies that mitigate unfair lending risk, ensure compliance with state and federal law, and report periodically to the AG. There's also a requirement to implement clearer governance around underwriting exceptions so the company can't lean on "discretion" without structure. That means documented fair-lending testing, variable selection and monitoring, and a process to catch and fix things like CDR-type proxies and blanket knockouts before they become the subject of the next settlement.

What Fintechs and Lenders Should Do

Audit model variables for proxies. Any input that correlates strongly with race, ethnicity, sex, or other protected characteristics can produce disparate impact, even if it's "predictive." School-level metrics (CDR, school type, geography), zip code, and many "alternative" data points can act as proxies. Map your variables, run correlation and impact analyses, and document business necessity for anything that creates material disparity. If you can't justify it or mitigate it, consider dropping it. CDR is the poster child: it may seem like a reasonable risk signal, but the view among regulators and the plaintiffs' bar is well established. Remove it or neutralize its effect.
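A minimal sketch of that kind of variable scan, assuming an applicant-level table with a binary protected-class indicator (self-reported or proxy-estimated); the function, column names, and thresholds below are placeholders for illustration, not a standard methodology.

```python
import pandas as pd

def proxy_scan(df: pd.DataFrame, features: list[str], group_col: str) -> pd.DataFrame:
    """Flag candidate model inputs that track protected-class membership.
    `group_col` is assumed to be a 0/1 indicator (1 = protected group)."""
    rows = []
    for col in features:
        rows.append({
            "feature": col,
            # Correlation between the feature and group membership.
            "corr_with_group": df[col].corr(df[group_col]),
            # Standardized mean difference between the two groups.
            "std_mean_gap": df.groupby(group_col)[col].mean().diff().iloc[-1] / df[col].std(),
        })
    return (pd.DataFrame(rows)
              .sort_values("corr_with_group", key=abs, ascending=False))

# Usage sketch (column names are hypothetical):
# report = proxy_scan(applicants, ["school_cdr", "zip_median_income", "dti", "fico"],
#                     group_col="protected_group")
# Anything near the top of the report needs a documented business-necessity
# rationale, mitigation, or removal.
```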

Run disparate impact testing and document it. Test at origination and across the lifecycle (approval, denial, pricing, terms). Use standard metrics (e.g., denial rate ratios, pricing disparities by protected group) and do it before launch and on a recurring basis. When you find impact, document whether the variable or model is justified by business necessity and whether a less discriminatory alternative exists. The CFPB's Supervisory Highlights noted that examiners had identified less discriminatory alternative models using open-source debiasing tools that kept similar predictive performance. "We need this variable for accuracy" is a claim you have to support. Regulators are checking whether you could get close to the same performance with less harm.
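A minimal sketch of origination-level testing along those lines, assuming decision-level data with a group label, an approval flag, and pricing for approved loans; the column names are placeholders, and the 0.80 adverse impact ratio mentioned in the comment is a common screening convention, not a legal threshold.

```python
import pandas as pd

def fair_lending_metrics(df: pd.DataFrame, group_col: str, reference: str,
                         approved_col: str = "approved", apr_col: str = "apr") -> pd.DataFrame:
    """Per-group approval rates, adverse impact ratio vs. a reference group,
    and mean pricing for approved loans."""
    ref_rate = df.loc[df[group_col] == reference, approved_col].mean()
    rows = []
    for g, sub in df.groupby(group_col):
        rate = sub[approved_col].mean()
        rows.append({
            "group": g,
            "approval_rate": rate,
            "air_vs_reference": rate / ref_rate,  # values below ~0.80 are a common screening flag
            "mean_apr_if_approved": sub.loc[sub[approved_col] == 1, apr_col].mean(),
        })
    return pd.DataFrame(rows)

# Usage sketch (labels are hypothetical):
# metrics = fair_lending_metrics(decisions, group_col="race_eth",
#                                reference="white_non_hispanic")
# Run this pre-launch and on a recurring schedule; where disparities show up,
# document business necessity and test less discriminatory alternatives.
```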

Don't train on ungoverned human judgment. If your training data is "historical underwriter decisions," validate that those decisions were made under clear, consistent, and fair policies. If the past was arbitrary or biased, retrain on better targets (e.g., default or delinquency) or use a more controlled subset of decisions. Treat "we automated what humans did" as a fair-lending risk, not a safe harbor.
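As one concrete version of "retrain on better targets," the sketch below derives labels from observed loan performance under a stated definition rather than from historical approve/deny calls; the column names and the 90-days-past-due cutoff are hypothetical choices you would set by policy.

```python
import pandas as pd

def build_outcome_labels(loans: pd.DataFrame) -> pd.DataFrame:
    """Label booked loans by observed performance instead of by what a
    human underwriter decided. Column names are placeholders."""
    booked = loans[loans["originated"] == 1].copy()
    booked["bad"] = (booked["max_days_past_due_24m"] >= 90).astype(int)
    return booked

# Caveat: training only on booked loans introduces selection bias (reject
# inference is its own project), but the target is at least an observed
# outcome under a documented definition, not an unexamined human judgment.
```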

Validate adverse action reasons. Ensure the reasons you give applicants for denial or less favorable terms are accurate and derived from a validated methodology. That may require a separate reason-extraction or explanation layer that's tested and auditable. "Principal reasons" under Reg B have to reflect what actually drove the decision; if the model is a black box, you need a reliable way to map outputs to reasons.
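A minimal sketch of such a reason layer for a scorecard-style (linear) model: rank each feature's contribution to the decline score against a baseline and map the top contributors to reason phrases. The weights, feature names, baseline, and phrasing are hypothetical, and a nonlinear model would need its own validated attribution approach.

```python
REASON_TEXT = {
    "dti": "Income insufficient for amount of credit requested",
    "delinquencies_24m": "Delinquent past or present credit obligations",
    "credit_history_months": "Length of credit history",
}

def principal_reasons(weights: dict[str, float], applicant: dict[str, float],
                      baseline: dict[str, float], top_n: int = 3) -> list[str]:
    """Return the features that pushed this applicant furthest toward decline,
    translated into applicant-facing reason phrases."""
    contributions = {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}
    ranked = sorted(contributions, key=contributions.get, reverse=True)
    # Every scored feature needs a mapped, validated reason phrase; .get()
    # falls back to the raw feature name here only so the sketch runs.
    return [REASON_TEXT.get(f, f) for f in ranked[:top_n] if contributions[f] > 0]

# Usage sketch (all values hypothetical):
# principal_reasons(
#     weights={"dti": 2.0, "delinquencies_24m": 1.5, "credit_history_months": -0.02},
#     applicant={"dti": 0.48, "delinquencies_24m": 2, "credit_history_months": 18},
#     baseline={"dti": 0.30, "delinquencies_24m": 0, "credit_history_months": 84},
# )
```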

Assume state AGs are in the room. Even if your primary regulator is the CFPB or a prudential regulator, state AGs can use state UDAP and fair-lending laws to target the same conduct. Include state-level risk in your compliance and testing roadmap, and keep an eye on which states are active on AI and lending. Massachusetts has now put a marker down. Others may follow.

The Earnest settlement is one resolution, not a new statute. It's still a clear signal. AI in lending is fair-lending territory. Variables like CDR are on the radar. State enforcers are willing to use existing law to police model design, testing, and disclosure. Fintechs that run their underwriting through algorithms would do well to treat this as the playbook: what to test, what to document, and what to strip out of the model before the next AG or regulator does it for them.
