Glossary

Quantization-Aware Training (QAT)

Optimize AI models for edge devices with Quantization-Aware Training (QAT), ensuring high accuracy and efficiency in resource-limited environments.


Quantization-Aware Training (QAT) is a powerful technique for optimizing deep learning (DL) models, such as Ultralytics YOLO models, for deployment on devices with limited computational resources, like mobile phones, IoT sensors, and other embedded systems. Standard models typically perform calculations with high-precision numbers (32-bit floating point, or FP32), which demand significant processing power and memory. QAT reduces this demand by preparing the model during the training phase to perform well even when using lower-precision numbers (e.g., 8-bit integers, or INT8), thereby bridging the gap between high accuracy and efficient performance on edge devices. This optimization is crucial for running complex AI tasks directly on edge hardware.
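To make the FP32-to-INT8 reduction concrete, the short sketch below (plain NumPy, not part of any Ultralytics or framework API) applies a simple affine quantization scheme: an FP32 tensor is mapped onto 8-bit integers with a scale and zero-point and then dequantized back. The round-trip error it produces is exactly the kind of precision loss a quantized model has to tolerate.

```python
import numpy as np

# Affine (asymmetric) INT8 quantization of an FP32 tensor: values are
# mapped onto the integer range [-128, 127] via a scale and zero-point.
x = np.random.randn(4, 4).astype(np.float32)

qmin, qmax = -128, 127
scale = (x.max() - x.min()) / (qmax - qmin)
zero_point = int(round(qmin - x.min() / scale))

x_int8 = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)

# Dequantizing recovers only an approximation of x; this round-trip error
# is the precision loss that QAT teaches the model to be robust to.
x_dequant = (x_int8.astype(np.float32) - zero_point) * scale
```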

How Quantization-Aware Training Works

Unlike methods that quantize a model after it has been fully trained (post-training quantization), QAT integrates a simulation of quantization effects directly into the training process. It inserts 'fake quantization' nodes into the model architecture during training. In the forward pass, these nodes mimic the effect of lower precision (e.g., INT8) on model weights and activations, rounding values as they would be rounded in a truly quantized model. In the backward pass, where the model learns via backpropagation, gradients are typically calculated and weight updates applied in standard high-precision floating point. This allows the model's parameters to adapt and become robust to the precision loss that will occur during actual quantized inference. By "seeing" the effects of quantization during training, the model minimizes the accuracy drop often associated with deploying models in low-precision formats, a key aspect discussed in model optimization strategies. Frameworks like TensorFlow Lite and PyTorch provide tools to implement QAT.
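As a minimal sketch of what this looks like in practice, the example below uses PyTorch's eager-mode quantization tools (torch.ao.quantization). The TinyNet model, layer sizes, and the 'fbgemm' backend are illustrative choices for demonstration, not an Ultralytics recipe; the key point is that prepare_qat attaches fake-quantization modules before training, and convert produces the real INT8 model afterwards.

```python
import torch.nn as nn
from torch.ao import quantization as tq

class TinyNet(nn.Module):
    """Illustrative model; QuantStub/DeQuantStub mark where tensors switch
    between floating-point and (fake-)quantized representations."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)              # simulate INT8 quantization of the input
        x = self.relu(self.conv(x))
        return self.dequant(x)         # back to float for the loss computation

model = TinyNet().train()

# Attach a QAT configuration: observers plus fake-quantize modules for
# weights and activations, here targeting the x86 'fbgemm' backend.
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)

# ... run the usual training loop here: forward passes now include
# fake quantization, while gradients and weight updates stay in FP32 ...

# After training, convert the fake-quantized model into a real INT8 model.
model.eval()
int8_model = tq.convert(model)
```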

Real-World Applications of QAT

Quantization-Aware Training is vital for deploying sophisticated AI models in resource-constrained environments where efficiency is key.

  1. On-Device Computer Vision: Running complex computer vision models like Ultralytics YOLOv8 directly on smartphones, for example for real-time object detection in augmented reality apps or image classification within photo management tools. QAT allows these models to run efficiently without excessive battery drain or latency.
  2. Edge AI in Automotive and Robotics: Deploying models for tasks like pedestrian detection or lane-keeping assistance in autonomous vehicles, or for object manipulation in robotics. QAT enables these models to run on specialized hardware like Google Edge TPUs or NVIDIA Jetson devices, ensuring low inference latency for critical real-time decisions. This is equally important for applications like security alarm systems or parking management.

Ultralytics supports exporting models to various formats like ONNX, TensorRT, and TFLite, which are compatible with QAT workflows, enabling efficient deployment across diverse hardware. You can manage and deploy your QAT-optimized models using platforms like Ultralytics HUB. Evaluating model performance using relevant metrics after QAT is essential to ensure accuracy requirements are met.
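For example, a quantized deployment path with the Ultralytics Python API might look like the sketch below. The weights file name is illustrative, and note that int8=True here applies quantization during export rather than running a full QAT pipeline; a QAT-trained model would target the same INT8 TFLite format on-device.

```python
from ultralytics import YOLO

# Load a trained detection model (the weights file name is illustrative).
model = YOLO("yolov8n.pt")

# Export to TensorFlow Lite with INT8 quantization for edge deployment.
model.export(format="tflite", int8=True)
```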
