Agentic AI Glossary

Fine-Tuning & Alignment Terms

Terms and explanations for fine-tuning and alignment, from the Agentic AI Glossary.

31 terms in this chapter
01

Adapter Tuning

Definition

A fine-tuning method that adds small trainable adapter layers while keeping most original model weights frozen.

02

Batch Size

Definition

The number of training examples processed together before updating model parameters.
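
As a minimal sketch (function name and sizes are illustrative), a dataset can be split into batches of this size, with one parameter update per batch:

```python
def minibatches(data, batch_size):
    """Yield consecutive batches of at most `batch_size` examples."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

examples = list(range(10))
batches = list(minibatches(examples, batch_size=4))
# 10 examples with batch size 4 -> three batches of sizes 4, 4, and 2
```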

03

Catastrophic Forgetting

Definition

When fine-tuning causes a model to lose useful abilities it learned during earlier training.

04

Checkpoint

Definition

A saved model state that can be reused, evaluated, restored, or continued during training.
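
A toy sketch of the idea, using JSON for the saved state (real frameworks use their own serialization formats; the field names here are illustrative):

```python
import json, os, tempfile

def save_checkpoint(path, weights, step):
    """Persist model 'weights' and training progress to disk."""
    with open(path, "w") as f:
        json.dump({"weights": weights, "step": step}, f)

def load_checkpoint(path):
    """Restore a saved state so training or evaluation can continue."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
save_checkpoint(path, weights=[0.1, 0.2], step=500)
state = load_checkpoint(path)  # resume from step 500 with saved weights
```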

05

Data Cleaning

Definition

Removing errors, duplicates, unsafe content, or irrelevant examples from training or evaluation data.
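
A minimal sketch of two common cleaning steps, dropping empty examples and exact duplicates (real pipelines also filter unsafe or irrelevant content):

```python
raw = ["good example", "", "good example", "another example"]

seen = set()
cleaned = []
for ex in raw:
    if ex and ex not in seen:  # skip empty strings and repeats
        seen.add(ex)
        cleaned.append(ex)
# -> ["good example", "another example"]
```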

06

Data Curation

Definition

Selecting and organizing high-quality examples that teach the model the desired behavior.

07

DPO

Definition

Direct Preference Optimization, a preference-tuning method that trains a model to favor preferred responses without a separate reward model.
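
A hedged sketch of the DPO loss on a single preference pair; the log-probability values below are made-up numbers standing in for model outputs:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one pair: -log sigmoid(beta * margin), where the margin
    compares how much more the policy prefers the chosen answer than the
    frozen reference model does."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A positive margin pushes the loss below log(2); no separate reward
# model is needed, only the policy and reference log-probabilities.
loss = dpo_loss(-5.0, -9.0, -6.0, -7.0)
```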

08

Epoch

Definition

One full pass through the training dataset during model training.
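
A minimal loop showing the relationship between epochs, batches, and update steps (all sizes are illustrative):

```python
dataset = list(range(8))
batch_size = 4
num_epochs = 3

steps = 0
for epoch in range(num_epochs):          # one epoch = one full pass
    for start in range(0, len(dataset), batch_size):
        batch = dataset[start:start + batch_size]
        steps += 1                        # one parameter update per batch
# 3 epochs x 2 batches per epoch = 6 update steps
```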

09

Fine-Tuning Evaluation

Definition

Testing a tuned model against base-model behavior, target tasks, safety cases, and regression benchmarks.

10

Full Fine-Tuning

Definition

Updating all or most model weights during fine-tuning instead of using small adapters.

11

Human Preference Data

Definition

Examples where people compare or rate outputs to teach the model preferred behavior.

12

Hyperparameter

Definition

A training configuration value, such as learning rate or batch size, chosen before training begins.

13

Instruction Tuning

Definition

Fine-tuning a model on instruction-response examples so it follows user requests better.
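
An illustrative record shape for an instruction-tuning dataset; field names vary between datasets:

```python
# Each example pairs a user instruction with the desired response.
instruction_data = [
    {"instruction": "Translate 'bonjour' to English.", "response": "Hello."},
]
```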

14

Learning Rate

Definition

A training hyperparameter that controls how large each model-weight update is during optimization.
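
A minimal gradient-descent sketch showing the learning rate's role; the quadratic loss and starting point are illustrative choices:

```python
def grad(w):
    return 2.0 * (w - 3.0)  # derivative of the loss (w - 3)^2

w = 0.0
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)  # a larger rate means bigger steps
# w converges toward the loss minimum at 3.0; too large a rate
# would overshoot or diverge, too small converges slowly
```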

15

LoRA

Definition

Low-Rank Adaptation, a parameter-efficient fine-tuning method that trains small adapter matrices instead of all model weights.
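
A conceptual sketch in pure Python: the frozen weight matrix W is augmented by the product of two small trainable matrices B and A. Shapes are illustrative, and real LoRA also applies a scaling factor:

```python
def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

d = 4  # model dimension (illustrative)
r = 1  # adapter rank, chosen far smaller than d
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen
B = [[0.5] for _ in range(d)]  # d x r, trainable
A = [[0.1] * d]                # r x d, trainable

delta = matmul(B, A)           # low-rank d x d update
W_adapted = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]
# Only d*r + r*d = 8 adapter values are trained instead of d*d = 16.
```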

16

Model Merge

Definition

Combining weights or adapters from multiple models to create a new model variant.
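
A hedged sketch of one simple merge strategy, element-wise weighted averaging of two models' parameters; real merging techniques vary widely:

```python
def merge_weights(w_a, w_b, alpha=0.5):
    """Interpolate two flat weight lists: alpha*w_a + (1-alpha)*w_b."""
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(w_a, w_b)]

merged = merge_weights([1.0, 0.0], [0.0, 1.0], alpha=0.25)
# -> [0.25, 0.75]
```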

17

PEFT

Definition

Parameter-efficient fine-tuning, a family of methods that adapt models by training only a small number of parameters.

18

PPO

Definition

Proximal Policy Optimization, a reinforcement learning algorithm often used in RLHF pipelines to optimize a model against a reward signal.

19

Preference Dataset

Definition

A dataset containing preferred and rejected outputs used to train alignment behavior.
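
An illustrative record shape for such a dataset; field names differ between libraries, but the chosen/rejected pairing is the core idea:

```python
preference_dataset = [
    {
        "prompt": "Summarize this article in one sentence.",
        "chosen": "A concise, faithful one-sentence summary.",
        "rejected": "A rambling reply that ignores the request.",
    },
]
```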

20

Prefix Tuning

Definition

A parameter-efficient method that trains prefix vectors added to the model input.

21

Prompt Tuning

Definition

A parameter-efficient method that learns soft prompt embeddings instead of changing the full model.
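
A conceptual sketch with illustrative dimensions: trainable "soft prompt" vectors are prepended to the input embeddings while the model itself stays frozen:

```python
embed_dim = 3
soft_prompt = [[0.0] * embed_dim for _ in range(2)]  # 2 trainable vectors
input_embeddings = [[1.0, 2.0, 3.0]]                 # frozen token embedding

model_input = soft_prompt + input_embeddings
# Only the 2 * embed_dim = 6 soft-prompt values would receive gradients;
# the model's own weights are untouched.
```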

22

QLoRA

Definition

Quantized LoRA, a memory-efficient fine-tuning method that combines quantization with LoRA adapters.
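
To illustrate the quantization half of the idea, here is a plain absmax int8 quantizer, a simplification of QLoRA's actual scheme: the frozen base weights are stored in low precision while the small LoRA adapters stay in full precision:

```python
def quantize_int8(weights):
    """Map floats to int8 by scaling the largest magnitude to 127."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

q, scale = quantize_int8([0.5, -1.0, 0.25])
approx = dequantize(q, scale)  # close to the originals, 4x smaller storage
```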

23

Reward Model

Definition

A model trained to score outputs so a fine-tuning or alignment process can prefer better responses.
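
A toy stand-in for the concept: a scorer that ranks candidate responses so an alignment step can prefer the best one. The scoring rule below is a made-up heuristic, not a trained model:

```python
def reward(response):
    """Pretend a ~50-character response is ideal (illustrative only)."""
    return -abs(len(response) - 50)

candidates = ["ok", "x" * 50, "x" * 200]
best = max(candidates, key=reward)  # the candidate the scorer prefers
```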

24

RLAIF

Definition

Reinforcement Learning from AI Feedback, where AI-generated preferences help guide model alignment.

25

RLHF

Definition

Reinforcement Learning from Human Feedback, where human preferences guide model behavior after pretraining.

26

SFT

Definition

Supervised Fine-Tuning, where a model is trained on labeled examples of desired instructions and responses.

27

Supervised Fine-Tuning

Definition

Training a pretrained model on labeled prompt-response examples.

28

Synthetic Data

Definition

Artificially generated examples used for testing, evaluation, training, simulation, or privacy-preserving development.

29

Test Dataset

Definition

Held-out data used for final evaluation after training decisions are complete.

30

Training Dataset

Definition

The examples used to update model parameters during training.

31

Validation Dataset

Definition

Held-out data used during development to tune settings and detect overfitting.
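
The three dataset roles above can be sketched as a single split; the 80/10/10 ratio is a common convention, not a rule:

```python
import random

data = list(range(100))
rng = random.Random(0)  # fixed seed for reproducibility
rng.shuffle(data)

train = data[:80]    # updates model parameters
val = data[80:90]    # tunes settings, detects overfitting
test = data[90:]     # touched only for the final evaluation
```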
