Agentic AI Glossary

AI Safety, Ethics & Risk

AI Safety, Ethics & Risk terms and explanations from the Agentic AI Glossary.

64 terms in this chapter
01

Adoption

Definition

How often and how successfully users start using a product or feature.

02

Adoption Metrics

Definition

Measurements showing usage, activation, retention, acceptance, and engagement with an AI system.

03

AI Assistant

Definition

An AI experience that helps users complete tasks through conversation or guided actions.

04

AI Copilot

Definition

An AI assistant that works alongside a user rather than replacing them.

05

AI Product

Definition

A product that uses AI to create value for users or automate part of a workflow.

06

Build vs. Buy

Definition

The strategic choice between developing an AI capability internally or purchasing a platform or vendor solution.

07

Business Value

Definition

The measurable benefit a product creates, such as revenue, savings, speed, or quality.

08

Change Management

Definition

Preparing people, processes, communication, and governance so teams adopt AI safely.

09

Citation

Definition

A reference to the source used to support an AI answer.

10

Confidence Score

Definition

A number or label that estimates how reliable an answer, prediction, or decision is.

11

Confirmation Step

Definition

A user approval step before an AI system performs an important or risky action.

12

Enablement

Definition

Training and supporting users, operators, developers, and stakeholders to work effectively with AI systems.

13

Feedback Button

Definition

A UI control that lets users rate, correct, or comment on an AI response.

14

High-Risk Action

Definition

An action that can cause meaningful harm, loss, compliance issues, or security exposure.

15

Human Approval

Definition

A design pattern where a person approves an AI action before it is completed.
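This pattern can be sketched as a simple approval gate: the agent proposes an action, and a callback representing the human decides whether it runs. The function and action names here are illustrative, not from any specific framework.

```python
# Human-approval sketch: an action executes only if the approver
# (a person, represented here by a callback) confirms it first.

def run_with_approval(action: str, approve) -> str:
    """Execute the action only if the approver callback returns True."""
    if approve(action):
        return f"executed: {action}"
    return f"blocked: {action}"

# In a real product the approver would prompt the user in the UI.
print(run_with_approval("send_email", lambda a: True))      # executed: send_email
print(run_with_approval("delete_account", lambda a: False)) # blocked: delete_account
```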

16

Low-Risk Action

Definition

An action with limited impact that can usually be automated with fewer approvals.

17

Maturity Model

Definition

A staged framework for assessing current AI capability and planning improvement.

18

Onboarding

Definition

The process of helping users understand and start using an AI product effectively.

19

Personalization

Definition

Adapting an AI experience based on user context, history, role, preferences, or account data.

20

Risky Action

Definition

An AI action that should be checked because it may affect money, data, safety, or access.

21

ROI

Definition

Return on investment: value gained relative to cost.

22

Source Attribution

Definition

Showing where information came from so users can verify the answer.

23

Time to Value

Definition

The time required before users or the organization experience measurable benefit.

24

Transparency

Definition

Making AI behavior, limitations, sources, and actions understandable to users.

25

Trust

Definition

User confidence that the AI system is useful, safe, honest, and controllable.

26

Undo Action

Definition

A way for users to reverse or recover from an AI action.

27

User Control

Definition

Giving users clear choices, approvals, limits, and ways to correct the AI system.

28

User Goal

Definition

The outcome the user wants to achieve.

29

User Intent

Definition

The meaning or purpose behind a user request.

30

User Journey

Definition

The path a user follows from need to successful outcome.

31

User Preference

Definition

A user setting or habit that helps personalize the AI experience.

32

ABAC

Definition

Attribute-based access control, where permissions depend on attributes such as user role, resource type, location, or risk.
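A minimal sketch of an attribute-based check, where a policy is a set of attribute conditions that must all hold for access to be granted. The attribute names (`role`, `resource_type`, `environment`) are illustrative, not part of any standard.

```python
# ABAC sketch: access is granted only when every policy condition
# matches the attributes of the request. Attribute names are illustrative.

def abac_allows(policy: dict, attributes: dict) -> bool:
    """Grant access only if every policy condition matches the request attributes."""
    return all(attributes.get(key) == value for key, value in policy.items())

# Example policy: analysts may read reports only in production.
policy = {"role": "analyst", "resource_type": "report", "environment": "prod"}

print(abac_allows(policy, {"role": "analyst", "resource_type": "report", "environment": "prod"}))  # True
print(abac_allows(policy, {"role": "analyst", "resource_type": "report", "environment": "dev"}))   # False
```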

33

Abuse Testing

Definition

Deliberately probing an AI system with abusive, malicious, or policy-violating inputs to verify its defenses before production.

34

Access Control

Definition

Permission management that determines who or what can access data, tools, actions, and administrative functions.

35

AI Red Teaming

Definition

Adversarial testing in which experts deliberately attack an AI system to uncover jailbreaks, data leaks, and other vulnerabilities before real attackers do.

36

AI Safety

Definition

The practice of ensuring AI systems behave reliably, avoid harmful outputs, and remain aligned with human intent.

37

AI Security

Definition

Protecting AI systems, their data, and their tools from attacks such as prompt injection, data exfiltration, and model theft.

38

Compliance

Definition

Conformance with legal, regulatory, contractual, and organizational requirements.

39

Credential Leakage

Definition

The accidental exposure of passwords, API keys, or tokens through AI outputs, logs, or training data.

40

Data Exfiltration

Definition

The unauthorized extraction of sensitive data from a system, for example by manipulating an AI agent into revealing or transmitting it.

41

Data Privacy

Definition

Protection of personal, sensitive, or confidential data throughout AI processing and storage.

42

Excessive Agency

Definition

Granting an AI agent more autonomy, permissions, or tool access than its task requires, increasing the potential for harmful actions.

43

Explainability

Definition

The ability to describe why an agent produced an output or chose an action in terms understandable to humans.

44

Governance

Definition

Policies, ownership, processes, and controls that make AI systems accountable and responsible.

45

Indirect Prompt Injection

Definition

An attack in which malicious instructions are hidden in content the AI processes, such as a web page, email, or document, rather than typed by the user.

46

Insecure Output Handling

Definition

Passing AI output to downstream systems without validation, which can enable injection attacks such as cross-site scripting or SQL injection.

47

Jailbreak

Definition

A prompt crafted to bypass an AI system's safety rules and make it produce restricted or harmful output.

48

Least Privilege

Definition

The principle of granting an AI system only the minimum permissions, data, and tools needed for its task.
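Least privilege for an agent can be sketched as intersecting the tools a task requests with what is actually available, so the agent never receives the full tool set. The tool names here are illustrative.

```python
# Least-privilege sketch: an agent gets only the tools its task needs,
# never the full set. Tool names are illustrative.

ALL_TOOLS = {"read_file", "write_file", "send_email", "delete_record"}

def scoped_tools(task_needs: set) -> set:
    """Return only the requested tools that actually exist; nothing more."""
    return ALL_TOOLS & task_needs

# A summarization task gets read access only, even if it asks for more.
print(scoped_tools({"read_file", "nonexistent_tool"}))
```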

49

LLM Security

Definition

Protecting large language model applications from threats such as prompt injection, data leakage, and model abuse.

50

Model Denial of Service

Definition

An attack that overloads a model with expensive or high-volume requests, degrading availability or driving up cost.

51

Policy Violation

Definition

An AI output or action that breaks organizational, legal, or safety rules and should be detected and blocked.

52

Privilege Escalation

Definition

Gaining permissions beyond those originally granted, for example by tricking an AI agent into using its tools on an attacker's behalf.

53

RAG Poisoning

Definition

Injecting malicious or false content into a retrieval corpus so that a retrieval-augmented system returns manipulated answers.

54

RBAC

Definition

Role-based access control, where permissions are assigned to roles and users inherit access from their role.
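The role-to-permission inheritance in this definition can be sketched in a few lines. The role and permission names below are illustrative.

```python
# RBAC sketch: permissions attach to roles, and users inherit the
# permissions of their assigned role. Names are illustrative.

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def rbac_allows(role: str, permission: str) -> bool:
    """Check whether the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(rbac_allows("editor", "write"))   # True
print(rbac_allows("viewer", "delete"))  # False
```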

55

Red Teaming

Definition

Structured adversarial testing in which a dedicated team attacks a system to expose weaknesses before real adversaries do.

56

Sandboxing

Definition

Running AI-generated code or agent actions in an isolated environment so failures or attacks cannot reach production systems.

57

Secrets Leakage

Definition

The exposure of confidential values such as keys, tokens, or credentials through prompts, outputs, logs, or memory.

58

Secure Logging

Definition

Recording system activity for auditing and debugging while redacting secrets and personal data, so the logs themselves do not become a risk.
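One common secure-logging technique is redacting likely secrets before a log line is written. The patterns below are illustrative; a real system would tune them to the secret formats it actually handles.

```python
import re

# Secure-logging sketch: strip likely secret values (API keys, bearer
# tokens) from a log line before it is written. Patterns are illustrative.

SECRET_PATTERNS = [
    re.compile(r"(api[_-]?key\s*[=:]\s*)\S+", re.IGNORECASE),
    re.compile(r"(bearer\s+)\S+", re.IGNORECASE),
]

def redact(line: str) -> str:
    """Replace each secret value with a placeholder, keeping its label."""
    for pattern in SECRET_PATTERNS:
        line = pattern.sub(r"\1[REDACTED]", line)
    return line

print(redact("api_key=sk-12345 used for request"))
# api_key=[REDACTED] used for request
```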

59

Security

Definition

Controls that protect AI systems, data, models, tools, and users from misuse or unauthorized access.

60

Supply Chain Vulnerability

Definition

A weakness introduced through third-party components such as models, datasets, libraries, or plugins.

61

Threat Modeling

Definition

Systematically identifying how a system could be attacked and which defenses are needed, before those attacks occur.

62

Tool Misuse

Definition

An AI agent invoking a tool in a harmful or unintended way, whether through error, manipulation, or unclear instructions.

63

Training Data Poisoning

Definition

Tampering with training data to implant backdoors, biases, or vulnerabilities in the resulting model.

64

Unauthorized Tool Access

Definition

An agent or user invoking tools or actions without the permissions required to do so.
