AI Failure Dictionary

Security, Privacy & Governance Failures

Terms and explanations for security, privacy, and governance failures, from the AI Failure Dictionary.

24 terms in this chapter
01

Prompt Injection Attack

Definition

Malicious input attempts to override the system's intended instructions.

Solution

Use instruction isolation, content filtering, permission controls, and output validation.
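
A minimal sketch of instruction isolation in Python, assuming a chat-style message API; the injection patterns and the build_messages helper are illustrative, not a complete defense.

```python
import re

# Illustrative patterns only; real filters are far more extensive.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
]

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Keep trusted instructions and untrusted input in separate roles,
    rejecting inputs that match known injection phrasings."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, user_input, re.IGNORECASE):
            raise ValueError("Possible prompt injection detected")
    # Untrusted content is never concatenated into the system role.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```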

02

Jailbreak

Definition

A prompt bypasses model safety restrictions.

Solution

Use stronger safety policies, red-team testing, output moderation, and layered guardrails.

03

Data Poisoning

Definition

Malicious or corrupted data is inserted into the system's data sources.

Solution

Use trusted sources, anomaly detection, review workflows, and secure ingestion.
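
A minimal anomaly-detection sketch for ingestion, assuming each record can be summarized by a numeric feature; the z-score threshold is illustrative.

```python
import statistics

def flag_outliers(values: list[float], z_threshold: float = 3.0) -> list[int]:
    """Return indices of records whose feature value deviates strongly
    from the batch, so they can be held for review before ingestion."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid division by zero
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]
```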

04

Training Data Poisoning

Definition

Harmful examples are inserted during training.

Solution

Validate datasets, verify sources, and audit suspicious samples.

05

Retrieval Poisoning

Definition

Malicious content is added to the RAG knowledge base and later retrieved.

Solution

Use document trust scoring, ingestion approval, and source validation.
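
A minimal sketch of source trust scoring at ingestion time; the source names, scores, and review-queue stub are illustrative.

```python
from dataclasses import dataclass

# Illustrative trust scores; real systems manage these centrally.
SOURCE_TRUST = {"internal-wiki": 1.0, "vendor-docs": 0.8, "public-web": 0.2}
MIN_TRUST = 0.5

@dataclass
class Document:
    source: str
    text: str

def queue_for_review(doc: Document) -> None:
    print(f"review needed: {doc.source}")  # stand-in for a real review workflow

def admit_to_index(doc: Document) -> bool:
    """Index documents only from sources above a trust threshold;
    everything else is held for manual approval."""
    if SOURCE_TRUST.get(doc.source, 0.0) >= MIN_TRUST:
        return True
    queue_for_review(doc)
    return False
```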

06

Model Extraction

Definition

Attackers replicate the model by querying it repeatedly and training a copy on the responses.

Solution

Use rate limits, monitoring, access control, and abuse detection.
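
A minimal sliding-window rate limiter, one common building block against extraction; the window and quota values are illustrative.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100  # illustrative per-key quota

_history: dict[str, deque] = defaultdict(deque)

def allow_request(api_key: str) -> bool:
    """Enforce a per-key sliding-window quota; sustained high-volume
    querying is a common signature of extraction attempts."""
    now = time.monotonic()
    window = _history[api_key]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True
```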

07

Model Inversion

Definition

Attackers try to recover private training data from outputs.

Solution

Use privacy-preserving training, regularization, access limits, and output filtering.

08

Membership Inference

Definition

Attackers try to determine whether a record was in training data.

Solution

Use differential privacy, regularization, and restricted access.
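
A minimal differential-privacy sketch for an aggregate count query (Laplace mechanism, sensitivity 1); protecting a trained model itself requires privacy-preserving training such as DP-SGD.

```python
import numpy as np

def dp_count(n_records: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1,
    so any single record's presence or absence is statistically masked."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return n_records + noise
```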

09

Data Exfiltration

Definition

Sensitive data is extracted through the model or connected tools.

Solution

Use data access controls, output inspection, and least-privilege tool permissions.

10

PII Leakage

Definition

Personal information is exposed in outputs, logs, prompts, or datasets.

Solution

Use redaction, masking, privacy filters, and safe logging practices.
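
A minimal regex-based redaction sketch; the patterns are illustrative, and production systems typically use dedicated PII detectors.

```python
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    reaches prompts, logs, or training datasets."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com or 555-123-4567"))
# -> Contact [EMAIL] or [PHONE]
```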

11

Secret Leakage

Definition

API keys, credentials, tokens, or internal details are exposed.

Solution

Use secret scanning, vaults, key rotation, and strict logging rules.
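
A minimal secret-scanning sketch; the AWS key-ID prefix is a real format, the other patterns are illustrative.

```python
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID format
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),        # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*\S{16,}"),  # generic assignment
]

def contains_secret(text: str) -> bool:
    """Scan text (model output, log line, prompt) before it is stored
    or displayed; a match should block the write and trigger rotation."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```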

12

Over-Permissioned Model

Definition

The AI system has more access than needed.

Solution

Apply least-privilege access and scope permissions tightly.

13

Over-Permissioned Tool

Definition

An agent tool has broader access than necessary.

Solution

Narrow tool permissions and require approval for risky actions.

14

Unauthorized Action

Definition

The AI performs an action without proper approval.

Solution

Use confirmation steps, authorization checks, and audit logging.

15

Unsafe Tool Access

Definition

An agent can call tools that modify systems without guardrails.

Solution

Use sandboxing, approvals, permission scopes, and audit logs.
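
A minimal sketch of the tool-gating pattern behind entries 13 through 15: an allowlist, an approval requirement for mutating actions, and an audit trail. The tool names and registry shape are illustrative.

```python
# Each tool declares whether it mutates state; anything mutating
# requires explicit approval before it runs.
TOOLS = {
    "search_docs": {"mutating": False},
    "send_email":  {"mutating": True},
    "delete_file": {"mutating": True},
}

def call_tool(name: str, args: dict, approved: bool = False) -> None:
    """Route every agent tool call through one checkpoint that enforces
    the allowlist and approval rule, and records the call."""
    if name not in TOOLS:
        raise PermissionError(f"Tool not on allowlist: {name}")
    if TOOLS[name]["mutating"] and not approved:
        raise PermissionError(f"Approval required for: {name}")
    print(f"AUDIT: {name} called with {args}")  # stand-in for a real audit log
    # ...dispatch to the actual tool implementation here
```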

16

Supply Chain Risk

Definition

Models, datasets, packages, or tools introduce security vulnerabilities.

Solution

Use dependency scanning, trusted sources, model provenance, and signed artifacts.
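
A minimal provenance check: pin the expected digest of a model or dataset artifact and refuse to load anything that does not match. The digest below is a placeholder.

```python
import hashlib

# In practice the expected digest comes from a signed manifest or
# model registry; this value is a placeholder.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_artifact(path: str) -> None:
    """Hash the artifact in chunks and fail closed on any mismatch."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != EXPECTED_SHA256:
        raise RuntimeError(f"Artifact hash mismatch for {path}")
```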

17

Access Control Failure

Definition

Users or systems access data or functions they should not.

Solution

Use authentication, authorization, policy checks, and access reviews.

18

Auditability Failure

Definition

The organization cannot explain what the model did and why.

Solution

Keep logs, traces, model versions, data versions, prompts, and decision records.
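
A minimal decision-record sketch: append one structured line per model call so every output can later be tied to the model version, data version, and prompt that produced it. Field names are illustrative.

```python
import json
import time
import uuid

def log_decision(model_version: str, data_version: str,
                 prompt: str, output: str,
                 log_path: str = "audit.jsonl") -> str:
    """Append an audit record and return its ID for cross-referencing."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "data_version": data_version,
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```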

19

Compliance Failure

Definition

The AI system violates legal, regulatory, or internal policy requirements.

Solution

Run governance reviews, compliance testing, and documentation checks.

20

Data Retention Failure

Definition

Data is stored longer than allowed.

Solution

Use retention policies, automated deletion, and storage audits.
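
A minimal automated-deletion sketch over a directory of stored files; the 30-day window is illustrative and should come from the actual retention schedule.

```python
import os
import time

MAX_AGE_DAYS = 30  # illustrative; derive from the retention policy

def purge_expired(directory: str) -> None:
    """Delete files older than the retention window, logging each
    removal so storage audits can confirm the policy ran."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            print(f"purged: {path}")
```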

22

Policy Enforcement Failure

Definition

Governance rules exist on paper but are not enforced in the system.

Solution

Implement policy checks directly in pipelines, tools, and release gates.
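
A minimal release-gate sketch that turns written policy into a hard pipeline failure; the check names and inputs are illustrative.

```python
def release_gate(eval_results: dict) -> None:
    """Block the release unless every governance check passes."""
    checks = {
        "bias_eval_passed":   eval_results.get("bias_eval_passed") is True,
        "pii_scan_clean":     eval_results.get("pii_findings", 1) == 0,
        "model_card_present": bool(eval_results.get("model_card")),
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        raise SystemExit(f"Release blocked by policy checks: {failed}")
```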

23

Safety Guardrail Failure

Definition

Filters, policies, or validators fail to block risky output.

Solution

Use layered guardrails, adversarial testing, and monitoring.

24

Insecure Output Handling

Definition

Model output is trusted directly without validation.

Solution

Validate outputs before using them in tools, code, databases, or user-facing actions.
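
A minimal output-validation sketch, assuming the model is asked to return a JSON action; the allowed tools and fields are illustrative.

```python
import json

ALLOWED_TOOLS = {"search", "summarize"}  # illustrative allowlist

def parse_action(raw_output: str) -> dict:
    """Treat model output as untrusted input: parse it, check its shape,
    and reject anything outside the expected contract."""
    try:
        action = json.loads(raw_output)
    except json.JSONDecodeError:
        raise ValueError("Model output is not valid JSON")
    if not isinstance(action, dict):
        raise ValueError("Expected a JSON object")
    if action.get("tool") not in ALLOWED_TOOLS:
        raise ValueError(f"Disallowed tool: {action.get('tool')}")
    if not isinstance(action.get("query"), str):
        raise ValueError("Missing or invalid 'query' field")
    return action
```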
