Adoption
Definition
How often and how successfully users start using a product or feature.
Adoption Metrics
Definition
Measurements showing usage, activation, retention, acceptance, and engagement with an AI system.
AI Assistant
Definition
An AI experience that helps users complete tasks through conversation or guided actions.
AI Copilot
Definition
An AI assistant that works alongside a user instead of fully replacing the user.
AI Product
Definition
A product that uses AI to create value for users or automate part of a workflow.
Build vs. Buy
Definition
The strategic choice between developing an AI capability internally or purchasing a platform or vendor solution.
Business Value
Definition
The measurable benefit a product creates, such as revenue, savings, speed, or quality.
Change Management
Definition
Preparing people, processes, communication, and governance so teams adopt AI safely.
Citation
Definition
A reference to the source used to support an AI answer.
Confidence Score
Definition
A number or label that estimates how reliable an answer, prediction, or decision is.
Confirmation
Definition
A user approval step before an AI system performs an important or risky action.
Enablement
Definition
Training and supporting users, operators, developers, and stakeholders to work effectively with AI systems.
Feedback Control
Definition
A UI control that lets users rate, correct, or comment on an AI response.
High-Risk Action
Definition
An action that can cause meaningful harm, loss, compliance issues, or security exposure.
Human-in-the-Loop
Definition
A design pattern where a person approves an AI action before it is completed.
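A minimal sketch of this pattern in Python, assuming a hypothetical `Action` record with a risk flag and an `ask_user` callback standing in for a real confirmation UI (all names are illustrative, not from any real library):

```python
# Minimal human-in-the-loop gate: risky actions wait for explicit approval.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    risky: bool                     # e.g. affects money, data, or access
    execute: Callable[[], str]      # the side effect to run if approved

def approve_and_run(action: Action, ask_user: Callable[[str], bool]) -> str:
    """Run low-risk actions directly; ask a person before risky ones."""
    if action.risky and not ask_user(f"Allow: {action.description}?"):
        return "rejected"
    return action.execute()

# Usage: the approver callback stands in for a confirmation dialog.
send = Action("send 50 emails", risky=True, execute=lambda: "sent")
print(approve_and_run(send, ask_user=lambda prompt: False))  # rejected
print(approve_and_run(send, ask_user=lambda prompt: True))   # sent
```

The key design point is that the approval check sits between the agent's decision and the side effect, so a rejection leaves the system unchanged.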
Low-Risk Action
Definition
An action with limited impact that can usually be automated with fewer approvals.
Maturity Model
Definition
A staged framework for assessing current AI capability and planning improvement.
Onboarding
Definition
The process of helping users understand and start using an AI product effectively.
Personalization
Definition
Adapting an AI experience based on user context, history, role, preferences, or account data.
Risky Action
Definition
An AI action that should be checked because it may affect money, data, safety, or access.
ROI
Definition
Return on investment: value gained relative to cost.
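The calculation behind this term fits in a few lines; the dollar figures below are invented for illustration:

```python
def roi(value_gained: float, cost: float) -> float:
    """Return on investment as a fraction: (value - cost) / cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (value_gained - cost) / cost

# An AI feature that saves $150k of work for $100k of spend yields 50% ROI.
print(f"{roi(150_000, 100_000):.0%}")  # 50%
```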
Source Attribution
Definition
Showing where information came from so users can verify the answer.
Time to Value
Definition
The time required before users or the organization experience measurable benefit.
Transparency
Definition
Making AI behavior, limitations, sources, and actions understandable to users.
Trust
Definition
User confidence that the AI system is useful, safe, honest, and controllable.
Undo
Definition
A way for users to reverse or recover from an AI action.
User Control
Definition
Giving users clear choices, approvals, limits, and ways to correct the AI system.
User Goal
Definition
The outcome the user wants to achieve.
User Intent
Definition
The meaning or purpose behind a user request.
User Journey
Definition
The path a user follows from need to successful outcome.
User Preference
Definition
A user setting or habit that helps personalize the AI experience.
AI Safety, Ethics & Risk terms and explanations from the Agentic AI Glossary.
ABAC
Definition
Attribute-based access control, where permissions depend on attributes such as user role, resource type, location, or risk.
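A toy illustration of an attribute-based check in Python; the attribute names and policy below are invented for the example, not taken from any real system:

```python
# Illustrative ABAC check: access is granted only when the request's
# attributes satisfy a policy, rather than by role membership alone.
def abac_allow(attrs: dict) -> bool:
    """Allow exporting customer data only for analysts in the EU region
    working on low-risk resources; every attribute must match."""
    return (
        attrs.get("role") == "analyst"
        and attrs.get("region") == "eu"
        and attrs.get("resource_risk") == "low"
    )

print(abac_allow({"role": "analyst", "region": "eu", "resource_risk": "low"}))  # True
print(abac_allow({"role": "analyst", "region": "us", "resource_risk": "low"}))  # False
```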
Abuse Testing
Definition
Probing an AI system with abusive, malicious, or out-of-policy inputs, scenarios, and automated test runs to confirm that safeguards hold before production.
Access Control
Definition
Permission management that determines who or what can access data, tools, actions, and administrative functions.
AI Red Teaming
Definition
Structured adversarial testing of an AI system, in which testers act as attackers to surface jailbreaks, prompt injection, data leakage, and other harmful behavior before real adversaries find them.
AI Safety
Definition
The practice of keeping AI behavior aligned with human intent and preventing harmful outputs, unsafe actions, and unintended consequences.
AI Security
Definition
Protecting AI systems, their data, and their tools from attack, covering threats such as prompt injection, data exfiltration, and model abuse.
Compliance
Definition
Conformance with legal, regulatory, contractual, and organizational requirements.
Credential Leakage
Definition
Unintended exposure of credentials such as passwords, API keys, or tokens, for example in prompts, model outputs, or logs.
Data Exfiltration
Definition
Unauthorized extraction of data from a system, for example by manipulating an AI agent into sending sensitive information to an attacker.
Data Privacy
Definition
Protection of personal, sensitive, or confidential data throughout AI processing and storage.
Excessive Agency
Definition
A risk that arises when an AI agent has more permissions, tools, or autonomy than its task requires, so an error or attack can cause outsized damage.
Explainability
Definition
The ability to describe why an agent produced an output or chose an action in terms understandable to humans.
Governance
Definition
Policies, ownership, processes, and controls that make AI systems accountable and responsible.
Indirect Prompt Injection
Definition
An attack that hides malicious instructions in content the AI processes, such as a web page, email, or document, rather than in the user's own prompt.
Insecure Output Handling
Definition
Passing model output to downstream systems without validation or encoding, enabling attacks such as code injection or cross-site scripting.
Jailbreak
Definition
A prompt or technique that bypasses a model's safety rules so it produces responses or actions its guidelines forbid.
Least Privilege
Definition
The principle of granting each user, agent, or tool only the minimum permissions its task requires, limiting the damage from mistakes or attacks.
LLM Security
Definition
Protecting applications built on large language models from threats such as prompt injection, jailbreaks, data leakage, and tool misuse.
Model Denial of Service
Definition
Overloading a model with expensive or high-volume requests to degrade availability or inflate cost for legitimate users.
Policy Violation
Definition
An AI input, output, or action that breaks organizational rules, usage guidelines, or legal requirements.
Privilege Escalation
Definition
Gaining higher permissions than intended, for example when an attacker uses an agent's tools to perform actions the user could not perform directly.
RAG Poisoning
Definition
Planting malicious or misleading content in the knowledge sources a retrieval-augmented generation system searches, so the model retrieves and repeats it.
RBAC
Definition
Role-based access control, where permissions are assigned to roles and users inherit access from their role.
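A toy illustration of a role-based check in Python; the roles and permissions below are invented for the example:

```python
# Illustrative RBAC check: permissions attach to roles, and a user's
# access is whatever their role grants.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "manage_users"},
}

def rbac_allow(role: str, permission: str) -> bool:
    """True if the given role includes the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(rbac_allow("editor", "write"))         # True
print(rbac_allow("viewer", "manage_users"))  # False
```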
Red Teaming
Definition
Simulating realistic attacks against a system to find weaknesses before real adversaries do.
Sandboxing
Definition
Running AI-generated code or agent actions in an isolated environment so failures or malicious behavior cannot reach production systems or data.
Secrets Leakage
Definition
Exposure of sensitive values such as API keys, tokens, or private configuration through prompts, outputs, logs, or training data.
Secure Logging
Definition
Recording AI system activity for auditing and debugging while redacting secrets and personal data from the log records themselves.
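A minimal sketch of log redaction in Python; the secret patterns are illustrative examples only, and production systems rely on vetted secret scanners:

```python
import re

# Illustrative redaction: mask common secret-like patterns before a log
# line is written anywhere.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{8,}"),    # API-key-like tokens
    re.compile(r"(?i)password=\S+"),      # password=... fragments
]

def redact(line: str) -> str:
    """Replace any matched secret with a placeholder."""
    for pattern in SECRET_PATTERNS:
        line = pattern.sub("[REDACTED]", line)
    return line

print(redact("agent called tool with key sk-abc12345678"))
# agent called tool with key [REDACTED]
```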
Security
Definition
Controls that protect AI systems, data, models, tools, and users from misuse or unauthorized access.
Supply Chain Vulnerability
Definition
A weakness introduced through third-party components such as pretrained models, datasets, libraries, or plugins that an attacker can exploit.
Threat Modeling
Definition
Systematically identifying what can go wrong in a system, who might attack it, and which controls reduce those risks.
Tool Misuse
Definition
An AI agent invoking a tool in a harmful or unintended way, whether through error, manipulation, or missing guardrails.
Training Data Poisoning
Definition
Manipulating a model's training data so it learns incorrect, biased, or malicious behavior.
Unauthorized Tool Access
Definition
An agent or user invoking a tool, action, or resource without the permissions required for it.