
Table 11 Terminology for each model risk category

From: Measuring the model risk-adjusted performance of machine learning algorithms in credit default prediction

| Statistics | Technology | Market conduct |
| --- | --- | --- |
| Stability | Transparency | Privacy |
| Over-fitting | Carbon footprint | Auditability |
| Over-fit | External providers | Interpretability |
| Hyper parameters | Dependencies | Biases |
| Dynamic calibration | Cyber-risk | Expert judgement |
| Feature engineering | Third-party | Expert personnel |
| Forecast | Cloud | Human judgement |
| Parameters | IT system | Human-in-the-loop |
| Test | Legacy | Human-on-the-loop |
| Calibration | Infrastructure | Governance |
| Features | Computing | Ethics |
| Explanatory variables | Computational power | Ethical |
| AUC | Deployment | Human |
| ROC | In-house | Compliance |
| Recall | Development | Management |
| Prediction | Pilot | Explainability |
| Logit | ICT | Internal control |
| Algorithm | Architecture | Knowledge |
| Scenario | Resilience | Consumers |
| Data quality | Security | Consumer protection |
| Back-testing | Operational risk | Discrimination |
| Benchmarking | Outsourcing risk | Uncertainty |
| Model | Reversibility | Accountability |
| Optimisation | DevOps | Market abuse |
| Dimensionality | Software | Complexity |
| Validation | Hosting | Decision making |
| Metrics | Cyber-risk | Soundness |
| Structured | Model stealing | Conduct |
| Unstructured | Poisoning attacks | Internal audit |
| Semi-structured | Adversarial attacks | Fairness |
| Classification | Open-source | Fair |
| Tree-based | | Diversity |
| Neural network | | Oversight |
| Regression | | Simplicity |
| Clustering | | GDPR |
| Support vector machine | | Transparency |
| Reinforcement learning | | Traceability |
| Parametric | | Opaqueness |
| Non-parametric | | Black-box |
| Performance | | Black boxes |
| Procyclicality | | Surrogates |
| Train | | Trust |
| Training | | Trustworthiness |
| Volatility | | Influence |
| Tuning | | SHAP |
| Threshold | | Shapley |
| Cross-validation | | Individual conditional expectations |
| Compilation | | Partial dependence plots |
| Out-of-sample statistical | | ICE |
| Predictive | | PDP |
| Challenger model | | Complex |
| Confidence level | | |
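A terminology table like this is typically applied as a set of keyword dictionaries for tagging text by model risk category. The following is a minimal sketch of that idea, not the paper's actual method: the `RISK_TERMS` dictionary (an abbreviated excerpt of the table) and the `count_category_hits` helper are hypothetical names, and whole-term, case-insensitive matching is an assumed strategy for illustration.

```python
import re

# Abbreviated excerpts of the three Table 11 dictionaries (illustrative subset).
RISK_TERMS = {
    "statistics": ["over-fitting", "hyper parameters", "cross-validation",
                   "auc", "roc", "back-testing", "calibration"],
    "technology": ["cloud", "cyber-risk", "outsourcing risk",
                   "adversarial attacks", "legacy", "devops"],
    "market conduct": ["fairness", "explainability", "black-box",
                       "gdpr", "shap", "human-in-the-loop"],
}

def count_category_hits(text: str) -> dict:
    """Count how many dictionary terms from each category appear in `text`.

    Matching is case-insensitive and requires the term to stand alone
    (not be embedded inside a longer word).
    """
    lowered = text.lower()
    counts = {}
    for category, terms in RISK_TERMS.items():
        counts[category] = sum(
            1 for term in terms
            if re.search(r"(?<!\w)" + re.escape(term) + r"(?!\w)", lowered)
        )
    return counts

doc = ("The model showed over-fitting during cross-validation, and its "
       "black-box nature raises explainability and GDPR concerns.")
print(count_category_hits(doc))
# → {'statistics': 2, 'technology': 0, 'market conduct': 3}
```

The per-category counts could then be normalised or weighted to score how strongly a document leans toward each model risk dimension.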