Agent Trajectory
Definition
The full path of an agent run, including plans, reasoning summaries, tool calls, observations, retries, and final output.
Agentic AI Foundations & Architectures terms and explanations from the Agentic AI Glossary.
Definition
A Critique Loop is a cycle in which a critic, either the agent itself or a separate reviewer, examines an output, and the agent revises its work based on that critique before moving on.
Definition
The agent's ability to detect a failure, choose a safe fallback, retry carefully, or ask for help without losing the task.
Definition
The user-facing answer or action summary produced after planning, retrieval, tool calls, and validation are complete.
Definition
A single internal action in an agent run, such as planning, retrieving, calling a tool, checking output, or revising state.
Definition
A hard cap on the number of reasoning or action cycles allowed, preventing infinite loops and runaway cost.
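As a minimal sketch of such a cap, the loop below can never run more than a fixed number of cycles, regardless of whether the task finishes. The `take_step` and `is_done` callables are hypothetical stand-ins for a real agent's step function and goal check.

```python
# Hypothetical sketch: enforcing a hard iteration cap on an agent loop.
# `take_step` and `is_done` stand in for a real agent's step and goal check.

MAX_ITERATIONS = 5  # hard cap: the loop can never exceed this many cycles

def run_agent(take_step, is_done):
    state = {"steps": 0, "done": False}
    for i in range(MAX_ITERATIONS):
        state = take_step(state)
        state["steps"] = i + 1
        if is_done(state):
            state["done"] = True
            break
    return state

# A toy task that finishes on its third step.
result = run_agent(
    take_step=lambda s: dict(s, progress=s.get("progress", 0) + 1),
    is_done=lambda s: s.get("progress", 0) >= 3,
)
```

Even if `is_done` never fires, the loop exits after `MAX_ITERATIONS`, which is what prevents infinite loops and runaway cost.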
Definition
Loop Termination is the set of conditions under which an agent loop stops, such as reaching the goal, hitting an iteration cap, exhausting a budget, or detecting an unrecoverable error.
Definition
The maximum number of back-and-forth agent steps or conversation turns allowed before stopping, escalating, or returning a result.
Definition
An agent loop where the system creates a plan, executes an action, observes the result, and decides the next step.
Definition
A ReAct Loop interleaves explicit reasoning with tool actions: the model writes a thought, takes an action, reads the observation, and repeats until it can answer.
Definition
A loop pattern where the model reasons about the next move, performs an action, then uses the observation to continue.
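This reason-act-observe pattern can be sketched as a short control loop. The `reasoner` and the tool registry below are hypothetical stand-ins for a real model and real tools.

```python
# Hypothetical sketch of a reason-act-observe cycle. The "reasoner" and the
# tool registry stand in for a real model and real tools.

def react_loop(question, reasoner, tools, max_steps=4):
    history = []  # interleaved thoughts, actions, and observations
    for _ in range(max_steps):
        thought, action, arg = reasoner(question, history)
        history.append(("thought", thought))
        if action == "finish":
            return arg, history
        observation = tools[action](arg)  # act, then observe
        history.append(("action", action))
        history.append(("observation", observation))
    return None, history  # step budget exhausted

# Toy run: look up a fact, then finish with the answer.
def toy_reasoner(question, history):
    if not history:
        return "I should look this up.", "lookup", "capital of France"
    obs = history[-1][1]  # the most recent observation
    return "I have the answer.", "finish", obs

answer, trace = react_loop(
    "What is the capital of France?",
    toy_reasoner,
    {"lookup": lambda q: "Paris"},
)
```

The key property is that each observation is appended to the history the reasoner sees, so every new thought is conditioned on what the last action returned.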
Definition
A Reflection Loop is a cycle in which the agent evaluates its own recent output or trajectory, identifies weaknesses, and feeds that self-assessment into its next attempt.
Definition
Updating the original plan when tools fail, new information appears, constraints change, or the current path is no longer valid.
Definition
A Retry Loop re-attempts a failed step in a controlled way, often with backoff, adjusted inputs, or an alternate tool, up to a bounded number of attempts.
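A minimal sketch of a bounded retry loop with exponential backoff follows; `call` is a hypothetical stand-in for any flaky step, such as a tool or API call.

```python
import time

# Hypothetical sketch: a bounded retry loop with exponential backoff.
# `call` stands in for any flaky step, such as a tool or API call.

def retry(call, max_attempts=3, base_delay=0.01):
    last_error = None
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as err:  # in practice, catch specific error types
            last_error = err
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    raise last_error  # give up after the cap

# Toy flaky call: fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry(flaky)
```

The attempt cap keeps the retry loop itself from becoming an infinite loop when the failure is not transient.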
Definition
A private working area used to store intermediate notes, state, tool results, or structured reasoning summaries during a run.
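One way to picture such a working area is a small append-only store, sketched below under the assumption that entries are tagged by kind; the class and method names are illustrative, not from any particular framework.

```python
# Hypothetical sketch: a scratchpad that accumulates intermediate notes and
# tool results during a run, kept separate from the final user-facing answer.

class Scratchpad:
    def __init__(self):
        self._entries = []

    def note(self, kind, content):
        # Append an entry tagged with its kind (plan, tool_result, summary, ...)
        self._entries.append({"kind": kind, "content": content})

    def recent(self, kind, limit=3):
        # Return the most recent entries of one kind, oldest first
        matching = [e["content"] for e in self._entries if e["kind"] == kind]
        return matching[-limit:]

pad = Scratchpad()
pad.note("plan", "1. fetch data 2. summarize")
pad.note("tool_result", {"rows": 42})
pad.note("tool_result", {"rows": 7})
```

Keeping the scratchpad private lets the agent store messy intermediate state without it leaking into the final output.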
Definition
The agent's process of identifying mistakes in its own output or actions and revising them before final delivery.
Definition
A classic agent loop: perceive the environment, decide what the information means, then take an appropriate action.
Definition
A change to stored task status, memory, variables, or workflow progress after an observation or action.
Definition
An internal reasoning unit or planning note used by some agent frameworks to organize decisions before action.
Definition
A Verifier Loop is a cycle in which a checker, such as a test suite, rule set, or verifier model, validates the agent's output, and failed checks send the agent back for another attempt.
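A minimal sketch of this generate-then-check cycle is below; the toy generator and verifier are hypothetical, and a real verifier might be a test suite or a second model.

```python
# Hypothetical sketch: a verifier loop where a checker validates each draft
# and failed checks send the generator back for another attempt.

def verifier_loop(generate, verify, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        draft = generate(feedback)   # the generator may use prior feedback
        ok, feedback = verify(draft)
        if ok:
            return draft
    return None  # no draft passed within the round budget

# Toy generator/verifier: a draft passes only if it contains the word "cited".
drafts = iter(["claim only", "claim, cited"])
result = verifier_loop(
    generate=lambda fb: next(drafts),
    verify=lambda d: ("cited" in d, "add a citation"),
)
```

Passing the verifier's feedback back into the generator is what distinguishes this loop from a blind retry.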
Definition
Working Memory is the active information an agent uses while solving the current task, such as the plan, recent observations, and pending steps.
Definition
The execution part of an agent loop where the agent does something, such as calling a tool or sending a response.
Definition
The set of actions, tools, APIs, or workflows an agent is allowed to execute.
Definition
Agency is the capacity of a system to pursue goals by making its own decisions and taking actions, rather than only responding to individual instructions.
Definition
The Agent Loop is the core cycle an agent repeats: take in context, decide on a step, act, observe the result, and update state until the task is complete or a limit is reached.
Definition
AI systems that pursue goals through planning, tool use, memory, observation, and iterative action rather than only generating one response.
Definition
A software component powered by an AI model that can interpret context, choose actions, call tools, and work toward a defined objective.
Definition
An Autonomous Agent can decide and act across multiple steps with limited human intervention while staying inside defined boundaries.
Definition
The degree of independence an agent has when deciding, acting, and recovering without direct human intervention.
Definition
Completion Criteria are the explicit conditions that mark a task as finished, such as satisfying the goal, passing validation, or reaching a defined stop signal.
Definition
The Environment is everything outside the agent that it can observe or act on, such as users, tools, APIs, documents, and external systems.
Definition
The component or agent role that performs planned steps by calling tools, APIs, or internal workflows.
Definition
Feedback is information about the quality or outcome of an agent's actions, supplied by users, tools, verifiers, or the environment, and used to guide later steps.
Definition
The target outcome an agent is trying to reach through planning and action.
Definition
A Goal-Oriented Agent works toward a defined outcome instead of only answering one isolated prompt.
Definition
The ability to optimize actions around an outcome instead of treating every request as an isolated message.
Definition
A pattern where humans review, approve, correct, or complete parts of an agent workflow.
Definition
A supervision pattern where humans monitor agent behavior and intervene when thresholds or risks appear.
Definition
Human-out-of-the-Loop is an operating mode in which the agent runs end to end without human review or intervention, relying entirely on automated safeguards.
Definition
The storage and retrieval layer that lets an agent preserve useful information across turns, sessions, or tasks.
Definition
A design where multiple specialized agents coordinate, delegate, review, or debate to complete a complex task.
Definition
Information returned after an agent acts, such as a tool result, error message, retrieved document, or user response.
Definition
The process of receiving and interpreting input from users, tools, documents, events, or the environment.
Definition
The agent component that decomposes a goal into steps, dependencies, and execution order.
Definition
Planning is the process of breaking a goal into ordered steps, selecting tools and dependencies, and deciding how to execute before acting.
Definition
Rules and constraints that guide how an agent should decide, respond, escalate, or use tools.
Definition
Reasoning is the process of weighing context, evidence, constraints, and options to decide what to do next.
Definition
The decision layer that evaluates context, constraints, options, and evidence before choosing the next step.
Definition
An agent pattern where the system reviews its own plan, answer, or tool trajectory and attempts improvement.
Definition
A Semi-Autonomous Agent handles routine steps by itself but asks for human approval when risk, uncertainty, or policy limits appear.
Definition
A Single-Agent System is a design in which one agent handles the entire task end to end, without delegating to or coordinating with other agents.
Definition
The current snapshot of an agent workflow, including context, progress, memory, and pending decisions.
Definition
One unit of work inside a larger agent plan or execution sequence.
Definition
A specific piece of work the agent is asked to complete or contribute to.
Definition
A Task-Oriented Agent is specialized for a specific type of work, such as scheduling, support, testing, coding, or data analysis.
Definition
Tool Use is an agent's ability to call external functions, APIs, or services to gather information or take actions beyond generating text.
Definition
Intermediate values produced inside a neural network after applying transformations to inputs.
Definition
The field of building systems that can perceive, reason, learn, generate, or act in ways associated with human intelligence.
Definition
A mechanism that lets a model focus on the most relevant parts of input context.
Definition
The algorithm used to compute gradients through a neural network during training.
Definition
Systematic error, preference, or imbalance in model behavior, data, or evaluation.
Definition
A model component that generates output tokens from internal representations and prior tokens.
Definition
Machine learning based on multi-layer neural networks that learn hierarchical representations from data.
Definition
A model component that converts input into internal representations.
Definition
Measuring model or system quality against desired criteria such as correctness, safety, and business value.
Definition
Additional training that adapts a model to a task, domain, style, or preference set.
Definition
A general-purpose model trained at scale and adapted to tasks through prompting, retrieval, fine-tuning, or tool use.
Definition
A model's ability to perform well on new examples beyond its training data.
Definition
AI that creates new content such as text, code, images, audio, plans, or structured data.
Definition
An optimization method that updates parameters in the direction that reduces loss.
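This update rule can be shown concretely on a one-dimensional loss. The sketch below minimizes L(w) = (w - 3)^2, whose gradient is 2(w - 3) and whose minimum is at w = 3; the function names and learning rate are illustrative.

```python
# Hypothetical sketch: one-dimensional gradient descent on the loss
# L(w) = (w - 3)^2, which has its minimum at w = 3.

def gradient_descent(w, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2 at the current w
        w = w - lr * grad    # step in the direction that reduces the loss
    return w

w_final = gradient_descent(w=0.0)
```

Each step moves the parameter against the gradient, so the loss shrinks toward the minimum; the learning rate `lr` controls the step size.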
Definition
Running a trained model to produce predictions, answers, actions, or generated content.
Definition
A neural language model trained on large text datasets and used to understand, generate, summarize, reason, and follow instructions.
Definition
A mathematical objective that measures how wrong a model is during training.
Definition
A branch of AI where models learn patterns from data instead of being explicitly programmed for every rule.
Definition
A Multimodal Model can process or generate more than one type of data, such as text, images, audio, or video.
Definition
A model composed of connected layers that transform input data into predictions or generated outputs.
Definition
Improving a model, prompt, workflow, or system to meet accuracy, cost, speed, or reliability goals.
Definition
When a model memorizes training data patterns and performs poorly on new data.
Definition
Learned numerical values inside a model that determine how it transforms inputs into outputs.
Definition
Large-scale initial model training on broad data before task-specific adaptation.
Definition
Attention applied within the same sequence so each token can relate to other tokens.
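A tiny worked example of scaled dot-product self-attention follows, using plain Python lists for a 2-token sequence instead of a tensor library; all names and the toy inputs are illustrative.

```python
import math

# Hypothetical sketch: scaled dot-product self-attention over a tiny
# 2-token sequence, using plain Python lists instead of a tensor library.

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:  # each token attends over every token in the same sequence
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # attention weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# In self-attention, queries, keys, and values all come from the same sequence.
X = [[1.0, 0.0], [0.0, 1.0]]
attended = self_attention(X, X, X)
```

Because the weights form a probability distribution over the sequence, each output is a weighted mix of all token values, with the largest weight on the most similar token.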
Definition
A Sequence-to-Sequence Model maps an input sequence to an output sequence, as in translation or summarization, typically using an encoder and a decoder.
Definition
A Small Language Model is a compact language model with far fewer parameters than a frontier LLM, chosen for lower cost, lower latency, or on-device deployment.
Definition
The process of updating model parameters using data and an optimization objective.
Definition
A neural architecture based on attention mechanisms that powers most modern LLMs.
Definition
When a model is too simple or poorly trained to capture useful patterns.
Definition
Sensitivity of a model to changes in training data or input conditions.
Definition
Model parameters that control the strength of connections between neural network components.
Definition
An environment where the agent's sensors can access the full state needed to choose actions.
Definition
A physical or software mechanism that turns an agent's decision into an action in the environment.
Definition
A system or software program that perceives an environment and acts toward one or more goals.
Definition
The mapping from an agent's percept history to the action it should take next.
Definition
The implementation that runs on an architecture and produces actions from percepts or state.
Definition
The actions an agent performs after receiving a particular sequence of percepts.
Definition
A rule that maps a detected condition directly to an action, often used in reflex agents.
Definition
An environment whose states or actions can vary across a continuous range rather than fixed options.
Definition
An environment where the next state is fully determined by the current state and the agent's action.
Definition
An environment with a limited set of clearly separated states, actions, or time steps.
Definition
An environment that can change while an agent is deciding or acting.
Definition
A body part, motor, actuator, or software output channel that carries out an agent's action.
Definition
An environment where each decision episode is independent of previous episodes.
Definition
An agent that chooses actions by considering which steps will move it closer to a desired goal.
Definition
A human viewed as an agent with sensory organs for perception and body parts for action.
Definition
An agent that selects the action expected to maximize its performance measure using percepts and knowledge.
Definition
An environment where the agent cannot directly sense all information needed for a complete state description.
Definition
The agent's stored representation of aspects of the world that are not currently observable.
Definition
A reflex agent that maintains an internal model of how the world changes and how actions affect it.
Definition
An environment containing more than one agent, where agents may cooperate, compete, or affect each other.
Definition
An environment where an action may lead to different possible next states even from the same current state.
Definition
An environment where current actions can affect later situations and future rewards.
Definition
An environment where percepts reveal the full current state relevant to the agent's decision.
Definition
An environment where the agent receives incomplete information and must reason with uncertainty.
Definition
A task specification using Performance measure, Environment, Actuators, and Sensors to describe an agent problem.
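A PEAS specification can be recorded as a small data structure. The sketch below uses the classic self-driving taxi example; the field contents are illustrative.

```python
from dataclasses import dataclass

# Hypothetical sketch: a PEAS task specification as a small data structure,
# filled in for the classic self-driving taxi example.

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safety", "speed", "legality", "comfort"],
    environment=["roads", "traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "horn"],
    sensors=["cameras", "GPS", "speedometer", "odometer"],
)
```

Writing the four components down explicitly forces the designer to state what "doing well" means before choosing an agent design.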
Definition
The information an agent receives from its sensors at a particular moment.
Definition
The complete history of percepts an agent has received so far.
Definition
The sensed inputs an agent receives from its environment.
Definition
The success criteria used to judge how well an agent is performing its task.
Definition
An agent that chooses actions expected to achieve the best outcome based on its knowledge and percepts.
Definition
The quality of choosing sensible actions that are expected to produce useful results.
Definition
A physical agent that uses sensors such as cameras and effectors such as motors to act in the real world.
Definition
A device or software input channel that lets an agent detect information about its environment.
Definition
An agent that chooses actions using only the current percept and condition-action rules.
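A minimal sketch of such an agent, in the style of the classic vacuum-cleaner world, is below; the rule table and percept format are illustrative.

```python
# Hypothetical sketch: a simple reflex agent built from condition-action
# rules, in the style of the classic vacuum-cleaner world.

RULES = [
    (lambda percept: percept["status"] == "dirty", "suck"),
    (lambda percept: percept["location"] == "A", "move_right"),
    (lambda percept: percept["location"] == "B", "move_left"),
]

def reflex_agent(percept):
    # Only the current percept is used: fire the first matching rule.
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no_op"

action = reflex_agent({"location": "A", "status": "dirty"})
```

Because the agent keeps no history or world model, its behavior is entirely determined by the current percept and the rule table.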
Definition
An environment in which one agent acts without strategic interaction with other agents.
Definition
A software robot that acts inside digital environments such as websites, file systems, or databases.
Definition
An agent implemented as software that acts through encoded instructions, APIs, tools, or digital actions.
Definition
An environment that does not change while an agent is deciding what to do.
Definition
A test setting where a human judge compares typed responses from a human and a machine.
Definition
A numeric preference score that represents how desirable a state or outcome is for an agent.
Definition
An agent that chooses actions by comparing the expected utility of possible outcomes.
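The comparison of expected utilities can be sketched directly: weight each possible outcome's utility by its probability and pick the best action. The action names and numbers below are illustrative.

```python
# Hypothetical sketch: a utility-based agent that picks the action whose
# expected utility (probability-weighted over outcomes) is highest.

def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

def utility_based_agent(action_models):
    # action_models maps each action to its possible (probability, utility) outcomes
    return max(action_models, key=lambda a: expected_utility(action_models[a]))

choice = utility_based_agent({
    "highway": [(0.9, 10), (0.1, -50)],   # usually fast, small crash risk
    "back_roads": [(1.0, 6)],             # slower but certain
})
```

Here the highway's expected utility is 0.9(10) + 0.1(-50) = 4, below the back roads' certain 6, so the agent prefers the safer route even though the highway's best case is better.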
Definition
An agent's representation of how the environment behaves and how actions change it.