AI Failure Dictionary

Evaluation & MLOps Deployment Failures

Terms and explanations for Evaluation & MLOps Deployment Failures from the AI Failure Dictionary.

73 terms in this chapter
01

Weak Test Coverage

Definition

Evaluation does not cover enough real-world cases.

Solution

Expand the test set with edge cases, production examples, and subgroup coverage.

02

Gold Dataset Weakness

Definition

The trusted evaluation set is incomplete or of low quality.

Solution

Use expert review, regular refreshes, and quality audits.

03

Labeling Inconsistency

Definition

Human evaluators label similar examples differently.

Solution

Improve labeling guidelines, reviewer training, and inter-annotator agreement checks.

04

Human Evaluation Drift

Definition

Reviewers become inconsistent over time.

Solution

Run calibration sessions and include benchmark examples.

05

Evaluation Bias

Definition

The test set or metric favors certain outputs, users, or cases.

Solution

Audit evaluation data across scenarios, languages, and user groups.

06

Metric Gaming

Definition

The model improves the metric without improving real quality.

Solution

Use multiple metrics, human review, and outcome-based evaluation.

07

Metric Misalignment

Definition

The metric does not match actual user or business value.

Solution

Choose metrics that reflect usefulness, safety, reliability, and business impact.

08

False Positive

Definition

The model incorrectly predicts that something is present when it is not.

Solution

Tune thresholds, add hard negative examples, and review precision errors.

09

False Negative

Definition

The model fails to detect something that is present.

Solution

Improve recall with more positive examples, threshold tuning, and feature improvements.

10

Precision Failure

Definition

Too many predicted positives are wrong.

Solution

Raise thresholds, improve features, or add stronger negative examples.

11

Recall Failure

Definition

Too many real positives are missed.

Solution

Lower thresholds, add positive data, and improve retrieval or feature coverage.

12

F1 Misinterpretation

Definition

A single F1 score hides important precision-recall tradeoffs.

Solution

Review precision and recall separately by class and use case.
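
As a minimal illustration (assuming scikit-learn is available), reporting precision and recall per class exposes the tradeoffs a single aggregate F1 hides:

```python
# Minimal sketch: inspect precision and recall per class rather than one F1 number.
from sklearn.metrics import classification_report

y_true = ["spam", "spam", "spam", "ham", "ham", "ham"]
y_pred = ["spam", "ham",  "spam", "ham", "ham", "spam"]

# The per-class breakdown shows where the model over- or under-predicts each class.
print(classification_report(y_true, y_pred, digits=3))
```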

13

Calibration Failure

Definition

Confidence scores do not reflect actual correctness.

Solution

Use calibration methods and evaluate confidence reliability.
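
Expected calibration error (ECE) is one common way to check whether confidence scores track accuracy. A rough NumPy sketch, assuming binary correctness labels per prediction:

```python
# Rough sketch: bin predictions by confidence and compare average confidence
# with observed accuracy in each bin (expected calibration error).
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities; correct: 1 if the prediction was right, else 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight the gap by the fraction of samples in the bin
    return ece

# Confident but often wrong -> large ECE signals a calibration failure.
print(expected_calibration_error([0.9, 0.95, 0.8, 0.85], [1, 0, 0, 1]))
```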

14

Offline-Online Gap

Definition

Offline evaluation results do not match production behavior.

Solution

Use production-like evaluation data and online testing.

15

Regression

Definition

A new model version performs worse than the previous version.

Solution

Use regression tests, release gates, and rollback plans.
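
A minimal sketch of a release gate, assuming both models have been scored on the same gold evaluation set (names and numbers here are illustrative):

```python
# Illustrative release gate: block deployment if the candidate model scores worse than
# the current production model on the gold set by more than a small tolerance.
def release_gate(prod_score: float, candidate_score: float, tolerance: float = 0.01) -> bool:
    return candidate_score >= prod_score - tolerance

if not release_gate(prod_score=0.912, candidate_score=0.887):
    raise SystemExit("regression detected: keep the previous version and investigate")
```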

16

A/B Test Contamination

Definition

Experiment groups affect each other or are not separated correctly.

Solution

Use clean randomization, isolation, and experiment monitoring.
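
One common way to keep arms cleanly separated is deterministic, user-level assignment, so the same user always lands in the same arm. A sketch with hypothetical identifiers:

```python
# Illustrative sketch: hash-based experiment assignment keyed by user and experiment,
# so assignment is stable across sessions and independent across experiments.
import hashlib

def assign_arm(user_id: str, experiment: str, arms=("control", "treatment")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

print(assign_arm("user-42", "ranker-v2-rollout"))
```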

17

Sample Size Failure

Definition

The test does not include enough examples to support a conclusion.

Solution

Increase sample size and check statistical power.
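
As a rough illustration of a power check (assuming statsmodels is installed), the sketch below estimates how many samples per arm are needed to detect a 3-point accuracy lift:

```python
# Rough sketch: required sample size per arm to detect a lift from 80% to 83%
# with 80% power at alpha = 0.05. Small lifts need far more data than teams expect.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.83, 0.80)  # standardized effect size for the two proportions
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(round(n_per_arm))
```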

18

Benchmark Contamination

Definition

Benchmark or test data appears in the model's training data.

Solution

Use private, fresh, or carefully controlled evaluation sets.
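
A simple heuristic screen is to flag evaluation items whose long n-gram spans appear verbatim in the training corpus. The sketch below is illustrative, not a complete contamination audit:

```python
# Hypothetical sketch: flag evaluation items sharing any 8-gram with the training corpus.
def ngrams(text: str, n: int = 8):
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(eval_item: str, training_ngrams: set, n: int = 8) -> bool:
    return bool(ngrams(eval_item, n) & training_ngrams)

training_ngrams = ngrams("the quick brown fox jumps over the lazy dog near the river bank")
print(is_contaminated("a quick brown fox jumps over the lazy dog near the river today", training_ngrams))
```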

19

LLM-as-Judge Bias

Definition

An LLM evaluator favors certain writing styles, lengths, or model outputs.

Solution

Calibrate judges, use multiple judges, and validate with humans.
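
One concrete mitigation for position bias is to judge each pair in both orders and keep only consistent verdicts. In the sketch below, `call_judge` is a hypothetical function wrapping the judge model:

```python
# Illustrative sketch: swap the order of the two responses and discard verdicts that flip.
# `call_judge(prompt, first, second)` is assumed to return "A" if it prefers the first
# response shown, "B" otherwise.
def debiased_preference(prompt: str, resp_a: str, resp_b: str, call_judge) -> str:
    first = call_judge(prompt, resp_a, resp_b)      # A shown first
    second = call_judge(prompt, resp_b, resp_a)     # order swapped
    second = {"A": "B", "B": "A"}[second]           # map the swapped verdict back to original labels
    return first if first == second else "tie"      # inconsistent verdicts are treated as ties
```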

20

RAG Evaluation Failure

Definition

Evaluation checks final answers but ignores retrieval quality or citation accuracy.

Solution

Measure retrieval, grounding, answer correctness, and citation support separately.
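
A minimal sketch of keeping the metrics separate, assuming relevance judgments and an answer rubric already exist (the values below are placeholders):

```python
# Hypothetical sketch: score retrieval, answer correctness, and citation support
# as separate metrics rather than one end-to-end number.
def recall_at_k(retrieved_ids, relevant_ids, k=5):
    hits = len(set(retrieved_ids[:k]) & set(relevant_ids))
    return hits / max(len(relevant_ids), 1)

retrieval_score = recall_at_k(["d3", "d7", "d1"], relevant_ids=["d1", "d9"])
answer_correct = 1.0        # from a human or judge rubric on the final answer
citation_supported = 0.5    # fraction of claims backed by a cited passage
print({"retrieval": retrieval_score, "answer": answer_correct, "citations": citation_supported})
```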

21

Safety Evaluation Gap

Definition

The system is not tested against harmful, adversarial, or policy-sensitive cases.

Solution

Use red-team prompts, adversarial tests, and safety rubrics.

22

Edge Case Blind Spot

Definition

Rare but important cases are missing from evaluation.

Solution

Add edge-case suites and scenario-based tests.

23

Fairness Evaluation Gap

Definition

The system is not tested across demographic or user groups.

Solution

Measure subgroup performance and fairness metrics.

24

Robustness Evaluation Gap

Definition

The system is not tested against noisy, adversarial, or unusual inputs.

Solution

Add perturbation tests, adversarial inputs, and stress testing.

25

Explainability Gap

Definition

The team cannot clearly explain why the model made a decision.

Solution

Use interpretable features, explanations, decision records, and audit trails.

26

Deployment Failure

Definition

A model release breaks or behaves incorrectly in production.

Solution

Use staging tests, canary releases, release gates, and rollback plans.

27

Model Versioning Failure

Definition

The team cannot track which model version is running.

Solution

Use a model registry, version tags, and deployment metadata.

28

Data Versioning Failure

Definition

The team cannot reproduce which data was used for training.

Solution

Version datasets, snapshots, transformations, and training data references.

29

Feature Versioning Failure

Definition

Feature definitions are not tracked across training and serving.

Solution

Version feature definitions and connect them to model artifacts.

30

Training-Serving Skew

Definition

Training logic differs from production inference logic.

Solution

Share preprocessing and feature logic between training and serving.
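
A minimal sketch of the sharing pattern: keep one preprocessing function in a module that both the training pipeline and the serving handler import (names are illustrative, not a specific framework's API):

```python
# Single source of truth for feature logic, imported by both training and serving code.
import math

def preprocess(record: dict) -> dict:
    return {
        "amount_log": math.log1p(max(record.get("amount", 0.0), 0.0)),
        "country": record.get("country", "unknown").lower(),
    }

# training_pipeline.py  ->  rows = [preprocess(r) for r in training_records]
# serving_handler.py    ->  features = preprocess(request_payload)
```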

31

Environment Mismatch

Definition

Development, staging, and production environments behave differently.

Solution

Use containers, pinned dependencies, and infrastructure-as-code.

32

Dependency Conflict

Definition

Library or package versions break the ML system.

Solution

Pin dependencies and test environments before release.

33

Containerization Failure

Definition

The model does not run correctly inside its deployment container.

Solution

Test containers with production-like inputs before deployment.

34

CI/CD Failure

Definition

Automated testing, build, or deployment pipelines break.

Solution

Add pipeline tests, clear release gates, and rollback procedures.

35

Rollback Failure

Definition

The team cannot safely return to a previous working version.

Solution

Keep versioned artifacts and automate rollback workflows.

36

Canary Failure

Definition

A small rollout does not detect issues before full deployment.

Solution

Use better canary metrics, traffic segmentation, and quality monitoring.

37

Shadow Deployment Failure

Definition

A shadow model is tested incorrectly or not monitored properly.

Solution

Compare shadow predictions against real outcomes and baseline models.

38

Model Registry Failure

Definition

Models are not approved, tagged, or stored correctly.

Solution

Use registry governance, approval workflows, and artifact validation.

39

Artifact Corruption

Definition

Model files, tokenizer files, or configuration files are damaged or mismatched.

Solution

Use checksums, artifact validation, and compatibility tests.
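
A minimal checksum check before loading artifacts, assuming the expected digests are stored with the model's registry entry (placeholder values shown):

```python
# Minimal sketch: verify artifact checksums before loading a model into production.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests would normally come from the model registry entry for this version.
expected = {"model.bin": "<digest from registry>", "tokenizer.json": "<digest from registry>"}

for name, digest in expected.items():
    path = Path(name)
    if not path.exists() or sha256_of(path) != digest:
        raise RuntimeError(f"artifact {name} is missing or failed checksum validation")
```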

40

Inference Service Failure

Definition

The production prediction service becomes unavailable.

Solution

Use health checks, autoscaling, failover, and incident runbooks.

41

API Contract Failure

Definition

The model endpoint input or output format changes unexpectedly.

Solution

Use API contracts, backward compatibility tests, and schema validation.
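
One possible approach is to validate requests and responses against an explicit schema, for example with pydantic (the field names below are illustrative):

```python
# Illustrative sketch: explicit request/response schemas make contract breaks fail loudly
# instead of silently corrupting downstream consumers.
from pydantic import BaseModel

class PredictRequest(BaseModel):
    user_id: str
    features: list[float]

class PredictResponse(BaseModel):
    score: float
    model_version: str

raw = {"score": 0.87, "model_version": "2024-06-01"}
resp = PredictResponse(**raw)   # raises a ValidationError if the contract changed
print(resp)
```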

42

Autoscaling Failure

Definition

Infrastructure cannot scale fast enough for traffic.

Solution

Use load testing, scaling policies, queueing, and capacity planning.

43

Cold Start Latency

Definition

The first request is slow because infrastructure or the model is not warmed up.

Solution

Use warm pools, caching, optimized model loading, and pre-warming.

44

GPU Resource Failure

Definition

GPU memory, scheduling, or availability problems break inference or training.

Solution

Use resource limits, monitoring, optimized batch sizes, and fallback capacity.

45

Cost Explosion

Definition

Model usage becomes too expensive because of traffic, tokens, compute, or inefficient design.

Solution

Use caching, model routing, prompt optimization, budgets, and usage limits.

46

Token Cost Overrun

Definition

Prompts or responses consume too many tokens.

Solution

Shorten prompts, filter retrieval context, set output limits, and summarize.
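
A rough sketch of enforcing a context budget, assuming the tiktoken package for token counting (chunk contents and the budget are placeholders):

```python
# Rough sketch: count tokens before sending and drop retrieved context that would
# push the request over a fixed token budget.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def trim_to_budget(chunks: list[str], budget: int) -> list[str]:
    kept, used = [], 0
    for chunk in chunks:
        n = len(enc.encode(chunk))
        if used + n > budget:
            break
        kept.append(chunk)
        used += n
    return kept

chunks = ["first retrieved passage ...", "second retrieved passage ..."]
print(trim_to_budget(chunks, budget=2_000))
```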

47

Latency Failure

Definition

The model takes too long to generate a response.

Solution

Use smaller models, caching, batching, streaming, optimized serving, or model routing.

48

Throughput Failure

Definition

The system cannot handle the required request volume.

Solution

Use batching, autoscaling, queue management, and performance testing.

49

SLA Violation

Definition

The system fails reliability, speed, or uptime requirements.

Solution

Use SLOs, error budgets, monitoring, and reliability engineering.

50

Release Governance Failure

Definition

Models are shipped without proper review, approval, or documentation.

Solution

Use release checklists, approvals, model cards, and audit trails.

51

Production Configuration Drift

Definition

Production settings become different from approved settings.

Solution

Use config versioning, drift detection, and infrastructure-as-code.

52

Model Drift

Definition

Model performance degrades over time.

Solution

Monitor performance, collect labels, and retrain or refresh the model.

53

Concept Drift

Definition

The relationship between inputs and labels changes.

Solution

Detect drift and retrain with newer labeled data.

54

Covariate Shift

Definition

The distribution of input features changes between training and production.

Solution

Track feature distributions and adapt data, features, or model strategy.
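
Population stability index (PSI) is one common way to quantify input distribution change. A rough NumPy sketch (thresholds such as 0.2 are rules of thumb, not guarantees):

```python
# Rough sketch: PSI between a training-time feature distribution and recent production data.
import numpy as np

def psi(expected, actual, n_bins=10):
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac, a_frac = np.clip(e_frac, 1e-6, None), np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

train_values = np.random.normal(0.0, 1.0, 10_000)
prod_values = np.random.normal(0.5, 1.0, 10_000)   # shifted inputs
print(psi(train_values, prod_values))              # values above ~0.2 often warrant review
```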

55

Label Shift

Definition

The distribution of output classes changes.

Solution

Monitor class distribution and recalibrate or retrain.

56

Prediction Drift

Definition

The distribution of model predictions changes.

Solution

Compare prediction trends against baselines and investigate anomalies.

57

Feature Drift

Definition

Input features change in meaning or distribution.

Solution

Monitor feature statistics and trigger alerts for major shifts.

58

Data Drift

Definition

Production data differs from training data.

Solution

Monitor data distributions and retrain when shifts affect quality.

59

Silent Failure

Definition

The system produces bad results without visible errors.

Solution

Use quality checks, anomaly alerts, sampled human review, and outcome monitoring.

60

Monitoring Gap

Definition

Important model, data, or system metrics are not tracked.

Solution

Monitor data, model quality, latency, cost, safety, and business outcomes together.

61

Alert Fatigue

Definition

Too many alerts cause teams to ignore important signals.

Solution

Tune alert thresholds, deduplicate alerts, and prioritize severity.

62

Missing Alert

Definition

A serious issue occurs without triggering an alert.

Solution

Add alerts for critical failure modes and test alert coverage.

63

Observability Failure

Definition

Logs, metrics, traces, and examples are insufficient for debugging.

Solution

Use structured logging, distributed tracing, metrics, and request-level audit records.

64

Logging Gap

Definition

The system does not store enough information to investigate failures.

Solution

Log inputs, outputs, model versions, data versions, errors, and decisions safely.

65

Feedback Loop Failure

Definition

User feedback is not collected or connected to improvement.

Solution

Connect feedback to evaluation, labeling, retraining, and product decisions.

66

Delayed Label Problem

Definition

True labels arrive too late to monitor performance quickly.

Solution

Use proxy metrics and delayed performance tracking.

67

Latency Regression

Definition

A new version makes inference slower.

Solution

Run performance tests before deployment and monitor latency after release.

68

Error Budget Burn

Definition

The system consumes its allowed failure budget too quickly.

Solution

Pause risky releases and prioritize reliability fixes.
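
A worked example of the arithmetic: with a 99.9% availability SLO, the monthly error budget is roughly 43 minutes, and burn can be tracked as the fraction of that budget already consumed (numbers below are illustrative):

```python
# Worked sketch: monthly error budget for a 99.9% availability SLO and its burn so far.
slo = 0.999
minutes_in_month = 30 * 24 * 60                       # 43,200 minutes
error_budget_minutes = (1 - slo) * minutes_in_month    # ~43.2 minutes of allowed downtime
downtime_so_far = 30                                   # minutes observed this month (from monitoring)
budget_consumed = downtime_so_far / error_budget_minutes
print(round(error_budget_minutes, 1), f"{budget_consumed:.0%}")  # e.g. 43.2, ~69% consumed
```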

69

Dashboard Blind Spot

Definition

Dashboards show system health but miss model quality issues.

Solution

Add model quality, retrieval quality, safety, and user outcome dashboards.

70

Data Freshness Monitoring Failure

Definition

The system does not detect stale input data.

Solution

Add freshness metrics and stale-data alerts.
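
A minimal staleness check, assuming the newest event timestamp is available from the input table (the six-hour threshold is illustrative):

```python
# Minimal sketch: alert when the newest record in an input table is older than a threshold.
from datetime import datetime, timedelta, timezone

def is_stale(latest_event_time: datetime, max_age: timedelta) -> bool:
    return datetime.now(timezone.utc) - latest_event_time > max_age

latest = datetime(2024, 6, 1, tzinfo=timezone.utc)   # e.g. max(event_time) from the feature table
if is_stale(latest, max_age=timedelta(hours=6)):
    print("ALERT: upstream data is stale")           # in practice, route to the alerting system
```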

71

Quality Monitoring Failure

Definition

The system monitors uptime but not answer or prediction quality.

Solution

Add quality sampling, human review, and automated evaluation.

72

User Behavior Drift

Definition

Users change how they interact with the system, which degrades model performance.

Solution

Monitor usage patterns and update prompts, UX, or models.

73

Feedback Bias

Definition

Feedback comes mostly from certain user groups, creating misleading signals.

Solution

Analyze feedback coverage and balance feedback sources.
