Agentic AI Glossary

Agentic AI Foundations & Architectures

Agentic AI Foundations & Architectures terms and explanations from the Agentic AI Glossary.

131 terms in this chapter
01

Agent Trajectory

Definition

The full path of an agent run, including plans, reasoning summaries, tool calls, observations, retries, and final output.

02

Critique Loop

Definition

A repeated cycle in which a critic, either the same model or a separate one, reviews the agent's draft output and the agent revises it, continuing until the critique passes or an iteration limit is reached.

03

Error Recovery

Definition

The agent's ability to detect a failure, choose a safe fallback, retry carefully, or ask for help without losing the task.

04

Final Response

Definition

The user-facing answer or action summary produced after planning, retrieval, tool calls, and validation are complete.

05

Intermediate Step

Definition

A single internal action in an agent run, such as planning, retrieving, calling a tool, checking output, or revising state.

06

Iteration Limit

Definition

A hard cap on the number of reasoning or action cycles allowed, preventing infinite loops and runaway cost.
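An iteration limit can be sketched as a bounded loop around the agent's step function. This is a minimal illustration; `run_agent` and `step_fn` are hypothetical names, not part of any specific framework.

```python
def run_agent(step_fn, max_iterations=5):
    """Run an agent loop with a hard cap on cycles (illustrative sketch).

    `step_fn` is a hypothetical callable: given the cycle index, it
    returns (done, result).
    """
    for i in range(max_iterations):
        done, result = step_fn(i)
        if done:
            return result
    # Cap reached: stop rather than loop forever and burn cost.
    return None

# Toy step function that "finishes" on the third cycle.
result = run_agent(lambda i: (i == 2, "done" if i == 2 else None))
```

Returning a sentinel such as `None` on cap exhaustion lets the caller distinguish a completed run from a truncated one.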

07

Loop Termination

Definition

The conditions that end an agent loop, such as reaching the goal, hitting an iteration limit, detecting no further progress, or encountering an unrecoverable error.

08

Max Turns

Definition

The maximum number of back-and-forth agent steps or conversation turns allowed before stopping, escalating, or returning a result.

09

Plan-Act-Observe

Definition

An agent loop where the system creates a plan, executes an action, observes the result, and decides the next step.
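The three phases can be sketched as a simple loop. This is a toy illustration with a naive plan (each goal item becomes one step); `plan_act_observe` and `act` are hypothetical names.

```python
def plan_act_observe(goal, act, max_steps=10):
    """Minimal plan-act-observe loop (illustrative sketch).

    - plan: split the goal into pending steps
    - act: execute one step via the caller-supplied `act` function
    - observe: record the result before moving on
    """
    plan = list(goal)             # naive plan: each item is one step
    observations = []
    for step in plan[:max_steps]:
        obs = act(step)           # act on the next planned step
        observations.append(obs)  # observe the result
    return observations

obs = plan_act_observe(["fetch", "summarize"], lambda s: f"did {s}")
```

A real planner would also replan between steps based on the observations; this sketch keeps only the loop skeleton.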

10

ReAct Loop

Definition

A loop pattern, popularized by the ReAct prompting framework, that interleaves reasoning and acting: the model writes a thought, takes a tool action, reads the observation, and repeats until it can produce a final answer.
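A ReAct-style loop can be sketched as alternating decide/act/observe steps over a growing transcript. Everything here is illustrative: `decide` is a hypothetical stand-in for the model, and the tool registry is a plain dict.

```python
def react_loop(question, tools, decide, max_turns=4):
    """Sketch of a ReAct-style loop: thought, action, observation, repeat.

    `decide` (hypothetical) plays the model's role: given the transcript,
    it returns ("final", answer) or ("tool", tool_name, tool_input).
    """
    transcript = [("question", question)]
    for _ in range(max_turns):
        step = decide(transcript)          # model's thought + chosen action
        if step[0] == "final":
            return step[1], transcript
        _, name, arg = step
        obs = tools[name](arg)             # act via the named tool
        transcript.append(("action", name, arg))
        transcript.append(("observation", obs))  # observe the result
    return None, transcript

# Toy run: one lookup, then answer from the observation.
tools = {"lookup": lambda q: "Paris"}
def decide(tr):
    if any(entry[0] == "observation" for entry in tr):
        return ("final", tr[-1][1])
    return ("tool", "lookup", "capital of France")

answer, transcript = react_loop("What is the capital of France?", tools, decide)
```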

11

Reason-Act-Observe

Definition

A loop pattern where the model reasons about the next move, performs an action, then uses the observation to continue.

12

Reflection Loop

Definition

A cycle in which the agent pauses to review its own recent reasoning or output, identifies weaknesses, and feeds that self-critique back into the next attempt.

13

Replanning

Definition

Updating the original plan when tools fail, new information appears, constraints change, or the current path is no longer valid.

14

Retry Loop

Definition

A controlled cycle that repeats a failed action, often with adjusted inputs, a backoff delay, or a capped attempt count, until it succeeds or the retry budget is exhausted.
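A common shape for a retry loop is capped attempts with exponential backoff. This is a generic sketch, not tied to any framework; in practice you would catch specific exception types rather than bare `Exception`.

```python
import time

def retry(action, attempts=3, base_delay=0.01):
    """Retry a flaky action with exponential backoff (illustrative sketch)."""
    last_error = None
    for attempt in range(attempts):
        try:
            return action()
        except Exception as exc:   # real code: catch only transient errors
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry
    raise last_error               # retry budget exhausted

# Toy action that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient failure")
    return "ok"

result = retry(flaky)
```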

15

Scratchpad

Definition

A private working area used to store intermediate notes, state, tool results, or structured reasoning summaries during a run.
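A scratchpad can be as simple as an append-only log that is rendered back into the next prompt. This `Scratchpad` class is a hypothetical minimal sketch, not a standard API.

```python
class Scratchpad:
    """Minimal private working area for an agent run (illustrative)."""

    def __init__(self):
        self.notes = []

    def write(self, kind, content):
        # e.g. kind = "plan", "tool_result", "reasoning_summary"
        self.notes.append((kind, content))

    def render(self):
        # Summarize notes for inclusion in the next model prompt.
        return "\n".join(f"[{kind}] {content}" for kind, content in self.notes)

pad = Scratchpad()
pad.write("plan", "step 1: search")
summary = pad.render()
```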

16

Self-Correction

Definition

The agent's process of identifying mistakes in its own output or actions and revising them before final delivery.

17

Sense-Think-Act

Definition

A classic agent loop: perceive the environment, decide what the information means, then take an appropriate action.

18

State Update

Definition

A change to stored task status, memory, variables, or workflow progress after an observation or action.

19

Thought

Definition

An internal reasoning unit or planning note used by some agent frameworks to organize decisions before action.

20

Verifier Loop

Definition

A cycle in which a verifier component checks each candidate output against rules, tests, or a rubric, and the agent regenerates or revises until the check passes or a limit is reached.

21

Working Memory

Definition

Working Memory is the active information an agent uses while solving the current task, such as the plan, recent observations, and pending steps.

22

Acting

Definition

The execution part of an agent loop where the agent does something, such as calling a tool or sending a response.

23

Action

Definition

A discrete operation the agent performs during a run, such as calling a tool, updating state, or sending a response.

24

Action Space

Definition

The set of actions, tools, APIs, or workflows an agent is allowed to execute.
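Constraining an agent to its action space is often implemented as an allowlist check before dispatch. The names below (`ALLOWED_ACTIONS`, `execute`) are hypothetical, for illustration only.

```python
# Hypothetical allowlist defining this agent's action space.
ALLOWED_ACTIONS = {"search", "read_file", "send_reply"}

def execute(action, handlers):
    """Only execute actions that fall inside the defined action space."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is outside the action space")
    return handlers[action]()

handlers = {"search": lambda: "results"}
result = execute("search", handlers)

# An out-of-space action is rejected before any handler runs.
try:
    execute("delete_db", handlers)
    blocked = False
except PermissionError:
    blocked = True
```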

25

Agency

Definition

The capacity of a system to pursue goals by making its own decisions and taking actions, rather than only responding to direct instructions.

26

Agent Loop

Definition

The core control cycle of an agent: read the current state, decide the next step, act, observe the result, and repeat until a stop condition is met.

27

Agentic AI

Definition

AI systems that pursue goals through planning, tool use, memory, observation, and iterative action rather than only generating one response.

28

AI Agent

Definition

A software component powered by an AI model that can interpret context, choose actions, call tools, and work toward a defined objective.

29

Autonomous Agent

Definition

An Autonomous Agent can decide and act across multiple steps with limited human intervention while staying inside defined boundaries.

30

Autonomy

Definition

The degree of independence an agent has when deciding, acting, and recovering without direct human intervention.

31

Completion Criteria

Definition

The explicit conditions that tell an agent a task is finished, such as a satisfied goal check, a validated output, or an approved deliverable.

32

Environment

Definition

Everything outside the agent that it can perceive and act upon, such as users, tools, APIs, documents, or a simulated or physical world.

33

Executor

Definition

The component or agent role that performs planned steps by calling tools, APIs, or internal workflows.

34

Feedback

Definition

Signals returned to the agent about the quality or effect of its actions, coming from tool results, evaluators, users, or the environment, and used to guide the next step.

35

Goal

Definition

The target outcome an agent is trying to reach through planning and action.

36

Goal-Oriented Agent

Definition

A Goal-Oriented Agent works toward a defined outcome instead of only answering one isolated prompt.

37

Goal-Oriented Behavior

Definition

The ability to optimize actions around an outcome instead of treating every request as an isolated message.

38

Human-in-the-Loop

Definition

A pattern where humans review, approve, correct, or complete parts of an agent workflow.

39

Human-on-the-Loop

Definition

A supervision pattern where humans monitor agent behavior and intervene when thresholds or risks appear.

40

Human-out-of-the-Loop

Definition

An operating mode in which the agent runs fully autonomously, with no human review or intervention during execution; oversight happens only before deployment or after the fact.

41

Memory

Definition

The storage and retrieval layer that lets an agent preserve useful information across turns, sessions, or tasks.

42

Multi-Agent System

Definition

A design where multiple specialized agents coordinate, delegate, review, or debate to complete a complex task.

43

Observation

Definition

Information returned after an agent acts, such as a tool result, error message, retrieved document, or user response.

44

Perception

Definition

The process of receiving and interpreting input from users, tools, documents, events, or the environment.

45

Planner

Definition

The agent component that decomposes a goal into steps, dependencies, and execution order.

46

Planning

Definition

The process of decomposing a goal into an ordered set of steps, accounting for dependencies, resources, and constraints, before or during execution.

47

Policy

Definition

Rules and constraints that guide how an agent should decide, respond, escalate, or use tools.

48

Reasoning

Definition

The process of drawing conclusions from context, evidence, and constraints in order to decide what is true or what to do next.

49

Reasoning Engine

Definition

The decision layer that evaluates context, constraints, options, and evidence before choosing the next step.

50

Reflection

Definition

An agent pattern where the system reviews its own plan, answer, or tool trajectory and attempts improvement.

51

Semi-Autonomous Agent

Definition

A Semi-Autonomous Agent handles routine steps by itself but asks for human approval when risk, uncertainty, or policy limits appear.

52

Single-Agent System

Definition

A design in which one agent handles the whole task end to end, without delegating to or coordinating with other agents.

53

State

Definition

The current snapshot of an agent workflow, including context, progress, memory, and pending decisions.

54

Step

Definition

One unit of work inside a larger agent plan or execution sequence.

55

Task

Definition

A specific piece of work the agent is asked to complete or contribute to.

56

Task-Oriented Agent

Definition

A Task-Oriented Agent is specialized for a specific type of work, such as scheduling, support, testing, coding, or data analysis.

57

Tool Use

Definition

An agent's ability to call external functions, APIs, search engines, or code execution in order to gather information or take actions beyond text generation.

58

Activations

Definition

Intermediate values produced inside a neural network after applying transformations to inputs.

59

Artificial Intelligence

Definition

The field of building systems that can perceive, reason, learn, generate, or act in ways associated with human intelligence.

60

Attention

Definition

A mechanism that lets a model focus on the most relevant parts of input context.

61

Backpropagation

Definition

The algorithm used to compute gradients through a neural network during training.

62

Bias

Definition

Systematic error, preference, or imbalance in model behavior, data, or evaluation.

63

Decoder

Definition

A model component that generates output tokens from internal representations and prior tokens.

64

Deep Learning

Definition

Machine learning based on multi-layer neural networks that learn hierarchical representations from data.

65

Encoder

Definition

A model component that converts input into internal representations.

66

Evaluation

Definition

Measuring model or system quality against desired criteria such as correctness, safety, and business value.

67

Fine-Tuning

Definition

Additional training that adapts a model to a task, domain, style, or preference set.

68

Foundation Model

Definition

A general-purpose model trained at scale and adapted to tasks through prompting, retrieval, fine-tuning, or tool use.

69

Generalization

Definition

A model's ability to perform well on new examples beyond its training data.

70

Generative AI

Definition

AI that creates new content such as text, code, images, audio, plans, or structured data.

71

Gradient Descent

Definition

An optimization method that updates parameters in the direction that reduces loss.
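A single gradient descent update subtracts the learning rate times the gradient from each parameter. The sketch below minimizes the toy function f(x) = x², whose gradient is 2x.

```python
def gradient_descent_step(params, grads, lr=0.1):
    """One gradient descent update: move each parameter against its gradient."""
    return [p - lr * g for p, g in zip(params, grads)]

# Minimize f(x) = x^2 starting from x = 1.0; the gradient of f is 2x.
x = [1.0]
for _ in range(50):
    x = gradient_descent_step(x, [2 * x[0]])
```

Each step multiplies x by (1 - 0.1 * 2) = 0.8, so x shrinks toward the minimum at 0.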

72

Inference

Definition

Running a trained model to produce predictions, answers, actions, or generated content.

73

Large Language Model

Definition

A neural language model trained on large text datasets and used to understand, generate, summarize, reason, and follow instructions.

74

Loss Function

Definition

A mathematical objective that measures how wrong a model is during training.

75

Machine Learning

Definition

A branch of AI where models learn patterns from data instead of being explicitly programmed for every rule.

76

Multimodal Model

Definition

A model that can accept or produce more than one modality, such as text, images, audio, or video, within a single system.

77

Neural Network

Definition

A model composed of connected layers that transform input data into predictions or generated outputs.

78

Optimization

Definition

Improving a model, prompt, workflow, or system to meet accuracy, cost, speed, or reliability goals.

79

Overfitting

Definition

When a model memorizes training data patterns and performs poorly on new data.

80

Parameters

Definition

Learned numerical values inside a model that determine how it transforms inputs into outputs.

81

Pretraining

Definition

Large-scale initial model training on broad data before task-specific adaptation.

82

Self-Attention

Definition

Attention applied within the same sequence so each token can relate to other tokens.
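Single-head self-attention can be written out in a few lines: project the token embeddings to queries, keys, and values, take scaled dot-product scores between every pair of tokens, softmax each row, and mix the values. This pure-Python sketch uses list-of-lists matrices for readability; real implementations use tensor libraries and multiple heads.

```python
import math

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (toy sketch).

    Each row of X is a token embedding; Wq, Wk, Wv are projection matrices.
    """
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    d = len(Q[0])
    # Scaled dot-product scores between every pair of tokens.
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K]
              for qr in Q]
    # Softmax each row so each token's attention weights sum to 1.
    weights = []
    for row in scores:
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights.append([e / z for e in exps])
    # Each output row is a weighted mix of the value vectors.
    return matmul(weights, V)

# Two tokens, identity projections: outputs are convex mixes of the inputs.
X = [[1.0, 0.0], [0.0, 1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
out = self_attention(X, I, I, I)
```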

83

Sequence-to-Sequence Model

Definition

A model that maps an input sequence to an output sequence, such as translating a sentence or summarizing a document, typically built with an encoder-decoder architecture.

84

Small Language Model

Definition

A language model with comparatively few parameters, designed to run cheaply or on-device, usually trading some capability for lower latency and cost.

85

Training

Definition

The process of updating model parameters using data and an optimization objective.

86

Transformer

Definition

A neural architecture based on attention mechanisms that powers most modern LLMs.

87

Underfitting

Definition

When a model is too simple or poorly trained to capture useful patterns.

88

Variance

Definition

Sensitivity of a model to changes in training data or input conditions.

89

Weights

Definition

Model parameters that control the strength of connections between neural network components.

90

Accessible Environment

Definition

An environment where the agent's sensors can access the full state needed to choose actions.

91

Actuator

Definition

A physical or software mechanism that turns an agent's decision into an action in the environment.

92

Agent

Definition

A system or software program that perceives an environment and acts toward one or more goals.

93

Agent Function

Definition

The mapping from an agent's percept history to the action it should take next.
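An agent function is just a mapping from the percept history to an action. The thermostat-style example below is a toy illustration of that mapping, acting on the most recent temperature percept.

```python
def agent_function(percept_history):
    """Toy agent function: map the percept history to the next action.

    Thermostat-style example (illustrative): act on the latest reading.
    """
    latest = percept_history[-1]
    if latest < 18:
        return "heat_on"
    if latest > 22:
        return "heat_off"
    return "idle"
```

This one happens to inspect only the latest percept, which makes it a simple reflex agent; a more general agent function may use the entire history.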

94

Agent Program

Definition

The implementation that runs on an architecture and produces actions from percepts or state.

95

Behavior of Agent

Definition

The actions an agent performs after receiving a particular sequence of percepts.

96

Condition-Action Rule

Definition

A rule that maps a detected condition directly to an action, often used in reflex agents.
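Condition-action rules are often stored as an ordered table: the first rule whose condition matches the percept fires. The vacuum-world-style rules below are hypothetical examples.

```python
# Hypothetical condition-action rules for a vacuum-style reflex agent.
RULES = [
    (lambda p: p["dirty"], "suck"),
    (lambda p: p["location"] == "A", "move_right"),
    (lambda p: p["location"] == "B", "move_left"),
]

def match_rule(percept):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept):
            return action
    return "no_op"
```

Rule order matters: cleaning takes priority over moving because its rule comes first.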

97

Continuous Environment

Definition

An environment whose states or actions can vary across a continuous range rather than fixed options.

98

Deterministic Environment

Definition

An environment where the next state is fully determined by the current state and the agent's action.

99

Discrete Environment

Definition

An environment with a limited set of clearly separated states, actions, or time steps.

100

Dynamic Environment

Definition

An environment that can change while an agent is deciding or acting.

101

Effector

Definition

A body part, motor, actuator, or software output channel that carries out an agent's action.

102

Episodic Environment

Definition

An environment where each decision episode is independent of previous episodes.

103

Goal-Based Agent

Definition

An agent that chooses actions by considering which steps will move it closer to a desired goal.

104

Human Agent

Definition

A human viewed as an agent with sensory organs for perception and body parts for action.

105

Ideal Rational Agent

Definition

An agent that selects the action expected to maximize its performance measure using percepts and knowledge.

106

Inaccessible Environment

Definition

An environment where the agent cannot directly sense all information needed for a complete state description.

107

Internal State

Definition

The agent's stored representation of aspects of the world that are not currently observable.

108

Model-Based Reflex Agent

Definition

A reflex agent that maintains an internal model of how the world changes and how actions affect it.

109

Multi-Agent Environment

Definition

An environment containing more than one agent, where agents may cooperate, compete, or affect each other.

110

Non-Deterministic Environment

Definition

An environment where an action may lead to different possible next states even from the same current state.

111

Non-Episodic Environment

Definition

An environment where current actions can affect later situations and future rewards.

112

Observable Environment

Definition

An environment where percepts reveal the full current state relevant to the agent's decision.

113

Partially Observable Environment

Definition

An environment where the agent receives incomplete information and must reason with uncertainty.

114

PEAS

Definition

A task specification using Performance measure, Environment, Actuators, and Sensors to describe an agent problem.

115

Percept

Definition

The information an agent receives from its sensors at a particular moment.

116

Percept Sequence

Definition

The complete history of percepts an agent has received so far.

117

Percepts

Definition

The sensed inputs an agent receives from its environment.

118

Performance Measure of Agent

Definition

The success criteria used to judge how well an agent is performing its task.

119

Rational Agent

Definition

An agent that chooses actions expected to achieve the best outcome based on its knowledge and percepts.

120

Rationality

Definition

The quality of choosing sensible actions that are expected to produce useful results.

121

Robotic Agent

Definition

A physical agent that uses sensors such as cameras and effectors such as motors to act in the real world.

122

Sensor

Definition

A device or software input channel that lets an agent detect information about its environment.

123

Simple Reflex Agent

Definition

An agent that chooses actions using only the current percept and condition-action rules.

124

Single-Agent Environment

Definition

An environment in which one agent acts without strategic interaction with other agents.

125

Softbot

Definition

A software robot that acts inside digital environments such as websites, file systems, or databases.

126

Software Agent

Definition

An agent implemented as software that acts through encoded instructions, APIs, tools, or digital actions.

127

Static Environment

Definition

An environment that does not change while an agent is deciding what to do.

128

Turing Test Environment

Definition

A test setting where a human judge compares typed responses from a human and a machine.

129

Utility

Definition

A numeric preference score that represents how desirable a state or outcome is for an agent.

130

Utility-Based Agent

Definition

An agent that chooses actions by comparing the expected utility of possible outcomes.
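Choosing by expected utility means weighting each possible outcome's utility by its probability and picking the action with the highest total. A minimal sketch, with hypothetical names:

```python
def choose_action(actions, outcomes, utility):
    """Pick the action with the highest expected utility (illustrative).

    `outcomes[a]` is a list of (probability, state) pairs for action a;
    `utility(state)` scores how desirable a state is.
    """
    def expected_utility(a):
        return sum(p * utility(s) for p, s in outcomes[a])
    return max(actions, key=expected_utility)

# A certain +5 beats a 50/50 gamble between +10 and -10 (expected value 0).
outcomes = {"safe": [(1.0, 5)], "risky": [(0.5, 10), (0.5, -10)]}
best = choose_action(["safe", "risky"], outcomes, lambda s: s)
```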

131

World Model

Definition

An agent's representation of how the environment behaves and how actions change it.
