Learn how to train, validate, predict, export, and benchmark with Ultralytics YOLO models!
Let’s dive into the world of Ultralytics and explore the modes available for YOLO models. Whether you're training custom object detection models or working on segmentation, understanding these modes is a crucial step. Let's jump right in!
Throughout the Ultralytics documentation, you'll find several modes that you can utilize for your models, whether it be to train, validate, predict, export, benchmark, or track. Each of these modes serves a unique purpose and helps you optimize your model's performance and deployment.
First up, let’s look at the train mode. This is where you build and refine your model. You can find detailed instructions and video guides in the documentation, making it easy to get started with training your custom models.
Model training involves providing a model with a dataset, allowing it to learn the patterns in that data. Once trained, the model can detect the object classes it was trained on in real time. Before starting the training process, it's essential to annotate your dataset in YOLO format.
Next, let's dive into the validate mode. Validation is essential for tuning hyperparameters and ensuring your model performs well. Ultralytics provides a variety of validation options, including automated settings, multi-metric support, and compatibility with the Python API. You can even run validation directly through the command line interface (CLI).
Validation is critical for catching overfitting early and for confirming that your model generalizes beyond its training data.
Ultralytics also provides usage examples that you can copy and paste into your Python scripts. These examples include parameters like image size, batch size, device (CPU or GPU), and intersection over union (IoU).
Once your model is trained and validated, it's time to make predictions. The predict mode allows you to run inference on new data and see your model in action. This mode is perfect for testing your model's performance on real-world data.
A short Python snippet is all you need to run predictions on your own images!
After validating and predicting, you may want to deploy your model. The export mode enables you to convert your model into various formats, such as ONNX or TensorRT, making it easier to deploy across different platforms.
Finally, we have the benchmark mode. Benchmarking is essential for evaluating your model's performance in various scenarios. This mode helps you make informed decisions about resource allocation, optimization, and cost efficiency.
To run a benchmark, you can use the provided user examples in the documentation. These examples cover key metrics and export formats, including ONNX and TensorRT. You can also specify parameters like integer quantization (INT8) or floating-point quantization (FP16) to see how different settings impact performance.
Let’s look at a real-world example of benchmarking. Benchmarking our PyTorch model on an RTX 3070 GPU, we measure an inference time of 68 milliseconds. After exporting to TorchScript, the inference time drops to 4 milliseconds, a significant improvement.
For ONNX, we achieve an inference time of 21 milliseconds on the same GPU. Moving these models to a CPU (a 13th-generation Intel i9), the results vary: TorchScript runs at 115 milliseconds, ONNX does better at 84 milliseconds, and OpenVINO, optimized for Intel hardware, achieves a blazing 23 milliseconds.
Benchmarking demonstrates how different hardware and export formats can impact your model's performance. It's crucial to benchmark your models, especially if you plan to deploy them on custom hardware or edge devices. This process ensures your model is optimized for the target environment, providing the best performance possible.
In summary, the modes in Ultralytics documentation are powerful tools for training, validating, predicting, exporting, and benchmarking your YOLO models. Each mode plays a vital role in optimizing your model and preparing it for deployment.
Don't forget to join our community and try out the provided code snippets in your own projects. With these tools, you can create high-performing models and ensure they run efficiently in any environment.
Begin your journey with the future of machine learning!