Processing millions of credit applications with 94.7% accuracy and 100% regulatory audit compliance—every decision fully explainable to borrowers and examiners.
Model Accuracy
Audit Compliance
Review Time
Key Outcomes
SHAP-based explainability achieves 100% ECOA adverse action code compliance automatically
Model accuracy improved while expanding credit access to 22% more qualified borrowers
Regulatory audit cycles shortened from 4 months to days with examiner-facing portal
$2.1M in annual compliance costs cut through automated SR 11-7 documentation
Gradient boosting with 1,400+ signals outperforms bureau-only models without adding fair lending risk
Enova deployed an explainable AI credit decisioning system using gradient boosting algorithms with SHAP-based attribution that generates plain-language rationales for each approval and decline decision. The model incorporates 1,400+ behavioral and alternative data signals while the explanation framework satisfies ECOA, CFPB, and SR 11-7 regulatory requirements—producing adverse action codes that examiners can audit without exposing proprietary model logic.
Enova International is a leading technology-enabled financial services company providing credit products to non-prime consumers and small businesses across the US, UK, Brazil, and Australia. With millions of loan applications processed annually, Enova faces both high-volume decisioning demands and stringent regulatory scrutiny from the CFPB, FTC, and banking regulators.
Enova's traditional credit models were accurate but completely opaque. Regulatory examinations required months of effort to explain model decisions, adverse action codes were non-existent, and the inability to demonstrate fair lending compliance created existential regulatory risk.
4 months
Audit Duration
Each regulatory audit required months of manual analysis to reconstruct model logic for examiners.
0%
Adverse Action Codes
Zero explainability on declined applications—a direct ECOA compliance violation.
$2.1M
Annual Compliance Cost
Regulatory overhead from opaque model documentation and examination support.
AGIX Technologies built an explainable AI credit decisioning system using gradient boosting with SHAP value attribution that generates machine-readable and human-readable rationales for every decision—meeting ECOA, CFPB, and SR 11-7 requirements automatically.
Gradient Boosting Core Model
XGBoost model trained on 1,400+ credit, behavioral, and alternative data signals achieving 94.7% prediction accuracy on held-out test sets.
SHAP Attribution Engine
SHAP (SHapley Additive exPlanations) calculates each feature's marginal contribution to every individual decision—generating transparent factor rankings.
ECOA Adverse Action Codes
Automated generation of plain-language adverse action codes compliant with ECOA requirements, delivered to declined applicants within milliseconds of the decision.
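One common way to derive adverse action reasons from an attribution engine is to rank the features that pushed a score toward decline and map them to pre-approved ECOA reason language. A minimal sketch of that mapping — the feature names, reason texts, and `adverse_action_reasons` helper below are illustrative, not Enova's actual code table:

```python
# Sketch: map the most negative per-decision feature contributions
# (e.g., SHAP values) to plain-language ECOA adverse action reasons.
# Feature names and reason texts are hypothetical examples.

REASON_CODES = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "delinquency_count": "Number of recent delinquent accounts",
    "credit_history_months": "Length of credit history is insufficient",
    "recent_inquiries": "Number of recent credit inquiries",
}

def adverse_action_reasons(contributions: dict, top_n: int = 4) -> list:
    """Return up to top_n reasons that pushed the score toward decline.

    `contributions` maps feature name -> signed contribution to the
    decision score (negative values lowered the applicant's score).
    """
    negatives = [(f, v) for f, v in contributions.items() if v < 0]
    negatives.sort(key=lambda fv: fv[1])  # most negative first
    return [REASON_CODES[f] for f, _ in negatives[:top_n] if f in REASON_CODES]

reasons = adverse_action_reasons({
    "credit_utilization": -0.21,
    "delinquency_count": -0.08,
    "income_stability": 0.05,
    "credit_history_months": -0.02,
})
print(reasons)  # ranked decline factors, utilization first
```

Because the ranking comes straight from the per-decision attributions, the stated reasons always reflect what actually drove that individual decline rather than a generic template.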
Fair Lending Bias Monitoring
Continuous disparate impact monitoring across protected class proxies with automated flagging when model drift creates potential fair lending exposure.
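Disparate impact screens of this kind often start from the four-fifths rule: each group's approval rate is compared to a reference group's, and ratios below 0.8 are flagged for review. A minimal sketch under that assumption — the group labels, rates, and the 0.8 screening threshold are illustrative, not Enova's monitoring policy:

```python
# Sketch: a four-fifths-rule disparate impact check of the kind a
# fair lending monitor might run. Groups and rates are hypothetical.

def adverse_impact_ratios(approval_rates: dict, reference_group: str) -> dict:
    """Ratio of each group's approval rate to the reference group's.

    Ratios below 0.8 (the "four-fifths" screening heuristic) are
    commonly treated as a signal for deeper fair lending analysis.
    """
    ref_rate = approval_rates[reference_group]
    return {
        group: round(rate / ref_rate, 3)
        for group, rate in approval_rates.items()
        if group != reference_group
    }

rates = {"group_a": 0.52, "group_b": 0.47, "group_c": 0.38}
ratios = adverse_impact_ratios(rates, reference_group="group_a")
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)   # {'group_b': 0.904, 'group_c': 0.731}
print(flagged)  # group_c falls below the 0.8 screening threshold
```

Run continuously against recent decisions, a check like this surfaces drift-induced disparities between scheduled validations rather than at the next audit.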
Regulatory Audit Interface
Examiner-facing portal allowing regulators to interrogate specific decisions, view feature contributions, and validate model documentation without accessing proprietary weights.
Model Governance Dashboard
SR 11-7 compliant model governance tracking including champion-challenger testing, performance monitoring, and model validation documentation.
Model Accuracy
Measured on held-out test data, versus the prior black-box model's undocumented accuracy estimates
Approval Rate
More qualified borrowers approved without any increase in realized default rates
Decision Latency
Automated decisions in milliseconds vs. days for manual review queues
ECOA Compliance
Every decline has a compliant adverse action code—from 0% before deployment
"We went from four-month audit nightmares to examiners being satisfied in days. The SHAP explanations let regulators see exactly what the model was doing without us having to build custom documentation every time."
Chief Risk Officer
Enova International
Collect 1,400+ signals from bureau and alternative data
When an application is submitted, the system pulls credit bureau data, bank transaction history, employment verification, and 1,200+ additional behavioral signals in under 500ms via parallel API calls.
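Hitting a sub-500ms budget across many upstream sources requires fanning the calls out in parallel rather than sequentially. A minimal sketch of that fan-out with a thread pool — the fetcher functions here are stubs standing in for real bureau, banking, and employment APIs:

```python
# Sketch: parallel signal collection with a thread pool. Each fetcher
# is a stub; real ones would call upstream APIs with per-call timeouts.
from concurrent.futures import ThreadPoolExecutor

def fetch_bureau(app_id):     return {"fico": 640, "tradelines": 7}
def fetch_bank_txns(app_id):  return {"avg_balance": 1850.0, "nsf_count": 1}
def fetch_employment(app_id): return {"employer_verified": True}

FETCHERS = {
    "bureau": fetch_bureau,
    "bank": fetch_bank_txns,
    "employment": fetch_employment,
}

def collect_signals(app_id: str) -> dict:
    """Run all fetchers in parallel and merge their signals."""
    signals = {}
    with ThreadPoolExecutor(max_workers=len(FETCHERS)) as pool:
        futures = {name: pool.submit(fn, app_id) for name, fn in FETCHERS.items()}
        for name, fut in futures.items():
            # A production version would enforce a per-source deadline
            # (e.g., fut.result(timeout=0.5)) with a fallback value.
            signals.update(fut.result())
    return signals

print(collect_signals("app-123"))
```

With parallel fan-out, end-to-end latency is bounded by the slowest single source instead of the sum of all of them, which is what makes a hard 500ms budget feasible.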
SHAP Chosen for Additive Properties
SHAP's additive property means per-feature contributions sum exactly to the final score, making explanations mathematically exact rather than approximate.
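The additivity property is easiest to see on a linear scoring model, where exact Shapley values have a closed form: phi_i = w_i * (x_i - mean_i). A simplified illustration with hypothetical weights and feature values (the production system uses gradient boosting, where TreeSHAP computes exact values the same additivity holds for):

```python
# Sketch of SHAP's additivity on a linear scoring model, where exact
# Shapley values are phi_i = w_i * (x_i - mean_i). Numbers are
# illustrative, not real model coefficients.

weights = {"utilization": -2.0, "history_months": 0.05, "inquiries": -0.3}
means   = {"utilization": 0.45, "history_months": 60.0, "inquiries": 2.0}
x       = {"utilization": 0.80, "history_months": 24.0, "inquiries": 5.0}

base_value = sum(weights[f] * means[f] for f in weights)  # expected score
score      = sum(weights[f] * x[f] for f in weights)      # this applicant
phi        = {f: weights[f] * (x[f] - means[f]) for f in weights}

# Additivity: base value plus per-feature contributions equals the
# score, so the explanation accounts for the decision exactly.
assert abs(base_value + sum(phi.values()) - score) < 1e-9
print(phi)
```

That exact accounting is what lets an examiner reconcile every stated reason against the decision itself, with no residual left unexplained.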
Compliance Built Into Architecture
Adverse action code generation was built into the model pipeline rather than bolted on afterward, ensuring every decision produces compliant documentation automatically.
Examiner-Specific Interface Design
The regulatory audit portal was designed with actual CFPB examination workflows in mind, reducing examiner friction and building goodwill during the first examination cycle.
Champion-Challenger Testing
The explainable model was validated against the prior black-box model in parallel before cutover, proving both accuracy parity and compliance improvement simultaneously.
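Champion-challenger validation comes down to scoring both models on the same held-out decisions and comparing outcomes before cutover. A toy sketch of that comparison — the models, thresholds, and holdout records below are stubs, not the actual Enova models:

```python
# Sketch: champion vs. challenger scored on the same held-out
# decisions. Both "models" and the labeled records are stubs.

def champion(app):   return app["fico"] >= 620                 # black-box proxy
def challenger(app): return app["fico"] >= 600 and app["nsf"] <= 2

holdout = [
    ({"fico": 640, "nsf": 0}, True),   # (application, repaid?)
    ({"fico": 610, "nsf": 1}, True),
    ({"fico": 590, "nsf": 4}, False),
    ({"fico": 625, "nsf": 3}, False),
]

def accuracy(model):
    """Fraction of holdout decisions where approve == repaid."""
    return sum(model(app) == repaid for app, repaid in holdout) / len(holdout)

print("champion:  ", accuracy(champion))
print("challenger:", accuracy(challenger))
```

Running both models side by side on identical inputs is what allows accuracy parity and the compliance improvement to be demonstrated in a single validation cycle rather than argued separately.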
Alternative Data Validated for Fairness
Every alternative data signal was tested for disparate impact before inclusion, ensuring expanded signal set improved accuracy without creating fair lending exposure.
Model Documentation Automated
SR 11-7 model documentation is generated automatically from model metadata, eliminating the manual documentation burden that previously cost millions annually.
Every AI system has constraints. Here's what to know before building something similar.
Thin-File Applicants Still Challenging
Borrowers with fewer than 12 months of credit history have limited signal, making the model rely more on alternative data where accuracy is lower.
Model Requires Regular Revalidation
SR 11-7 requires annual revalidation. Economic regime changes can cause model drift that reduces accuracy below acceptable thresholds before the next revalidation cycle.
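One widely used screen for the kind of drift that can bite between revalidation cycles is the Population Stability Index (PSI) on the score distribution. A minimal sketch — the score bins and the 0.25 "significant shift" threshold are conventional rules of thumb, not Enova's stated policy:

```python
# Sketch: Population Stability Index (PSI) between the score
# distribution at validation time and the current one. Bin shares
# are illustrative; PSI > 0.25 is a common "significant shift" cue.
import math

def psi(expected_pct, actual_pct):
    """PSI between two binned distributions (each list sums to 1)."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected_pct, actual_pct)
        if e > 0 and a > 0
    )

baseline = [0.10, 0.20, 0.40, 0.20, 0.10]   # score bins at validation
current  = [0.05, 0.15, 0.35, 0.25, 0.20]   # same bins, this quarter

value = psi(baseline, current)
print(round(value, 3), "-> revalidate early" if value > 0.25 else "-> stable")
```

Monitoring a statistic like this continuously turns "model drift before the next revalidation" from a surprise into a trigger for an early revalidation.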
Alternative Data Jurisdiction Limits
Some alternative data sources are unavailable or legally restricted in certain states, requiring state-specific model variants with reduced signal sets.
Explanation Lag at Scale
SHAP computation adds ~50ms to decision latency at scale. For extremely high-volume bursts, this requires careful infrastructure capacity planning.
Explore the services, industry solutions, and intelligence types that power this system.
Common questions about building explainable credit decisioning systems like the one deployed at Enova.