
Export and Optimize Ultralytics YOLOv8 for Inference on Intel OpenVINO

Optimize your Ultralytics YOLOv8 model for inference using OpenVINO. Follow our guide to convert PyTorch models to ONNX and optimize them for real-time applications.

In this blog post, we'll take a look at how you can export and optimize your pre-trained or custom-trained Ultralytics YOLOv8 model for inference using OpenVINO. If you're on an Intel-based system, whether CPU or GPU, this guide will show you how to speed up your model significantly with minimal effort.

Why Optimize YOLOv8 with OpenVINO?

Optimizing your YOLOv8 model with OpenVINO can deliver up to a 3x speedup on inference, particularly if you're running on an Intel CPU. That kind of boost makes a real difference in real-time applications, from object detection and segmentation to security systems.

Steps to Export and Optimize Your YOLOv8 Model

Understanding the Process

First things first, let's break down the process. We're going to convert a PyTorch model to ONNX and then optimize it using OpenVINO. The workflow takes just a few straightforward steps and applies to models from a range of frameworks, including TensorFlow, PyTorch, Caffe, and ONNX.
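To make the pipeline concrete, here's a minimal sketch of the two-step path: export to ONNX with Ultralytics, then convert the ONNX file with OpenVINO's Python API. The model name yolov8n.pt and the output paths here are illustrative:

```python
from ultralytics import YOLO
import openvino as ov

# Step 1: export the PyTorch weights to ONNX (writes yolov8n.onnx)
YOLO("yolov8n.pt").export(format="onnx")

# Step 2: convert the ONNX graph to OpenVINO IR and save it
ov_model = ov.convert_model("yolov8n.onnx")
ov.save_model(ov_model, "yolov8n.xml")  # writes yolov8n.bin alongside the .xml
```

In practice, Ultralytics can collapse both steps into a single export call, as we'll see next.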

Exporting the Model

Jumping into the Ultralytics documentation, we find that exporting a YOLOv8 model is done with the export method from the Ultralytics framework. It converts the model from PyTorch to ONNX and then optimizes it for OpenVINO, producing a model that runs significantly faster by leveraging Intel's hardware.
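The single-call version looks roughly like this; passing format="openvino" triggers the PyTorch-to-ONNX-to-OpenVINO conversion internally:

```python
from ultralytics import YOLO

# Load a pre-trained (or custom-trained) YOLOv8 model
model = YOLO("yolov8n.pt")

# Export straight to OpenVINO; the intermediate ONNX step is handled for you
model.export(format="openvino")  # creates the yolov8n_openvino_model/ directory
```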

Installing Dependencies

Before running the export script, make sure all the necessary dependencies are installed: the Ultralytics library, ONNX, and OpenVINO. All three can be installed with pip, the Python package installer.
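A typical setup looks like the following; exact versions will depend on your environment, and Ultralytics can also pull in export dependencies on demand the first time you export:

```bash
pip install ultralytics onnx openvino
```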

Running the Export Script

Once your environment is set up, you can run the export script. It converts your PyTorch model to ONNX and then to OpenVINO, all through a single function call. The Ultralytics framework handles the conversion and optimization for you, so you get the performance gains with minimal hassle.
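Putting it all together, a minimal end-to-end script might look like this; the sample image URL comes from the Ultralytics docs, and any local image works just as well:

```python
from ultralytics import YOLO

# Export once
YOLO("yolov8n.pt").export(format="openvino")

# Load the exported OpenVINO model and run inference exactly like the original
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes)  # detected bounding boxes
```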

Fig 1. Nicolai Nielsen outlining how to run the export script.

Comparing Performance

After exporting, it’s essential to compare the performance of the original and optimized models. By benchmarking the inference time of both models, you can clearly see the performance gains. Typically, the OpenVINO model will show a significant reduction in inference time compared to the original PyTorch model. This is especially true for larger models where the performance boost is most noticeable.
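One simple way to measure the gap is a quick timing sketch like the one below; the run count and test image are arbitrary, and the timings include pre- and post-processing, not just raw model latency. Ultralytics also ships a benchmark utility (ultralytics.utils.benchmarks.benchmark) if you prefer an off-the-shelf comparison:

```python
import time

from ultralytics import YOLO

def avg_inference_ms(model, source, runs=20):
    """Average single-image inference time in milliseconds."""
    model(source, verbose=False)  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        model(source, verbose=False)
    return (time.perf_counter() - start) / runs * 1000

img = "bus.jpg"  # any local test image
pt_ms = avg_inference_ms(YOLO("yolov8n.pt"), img)
ov_ms = avg_inference_ms(YOLO("yolov8n_openvino_model/"), img)
print(f"PyTorch: {pt_ms:.1f} ms/img | OpenVINO: {ov_ms:.1f} ms/img")
```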

Real-World Application and Benefits

Optimizing YOLOv8 models with OpenVINO is particularly beneficial for applications requiring real-time processing. Here are a few examples:

  • Security Systems: Real-time object detection can alert security personnel instantly, enhancing safety and responsiveness.
  • Automated Vehicles: Faster inference speeds improve the responsiveness of autonomous driving systems, making them safer and more reliable.
  • Healthcare: Quick image processing for diagnostic tools can save lives by providing faster results, allowing for timely interventions.

By implementing these optimizations, you not only improve performance but also enhance the reliability and efficiency of your applications. This can lead to better user experiences, increased productivity, and more innovative solutions.

Wrapping Up

Exporting and optimizing a YOLOv8 model for OpenVINO is a powerful way to leverage Intel hardware for faster and more efficient AI applications. With just a few simple steps, you can transform your model’s performance and apply it to real-world scenarios effectively.

Make sure to check out more tutorials and guides from Ultralytics to keep enhancing your AI projects. Visit our GitHub repository and join the Ultralytics community for more insights and updates. Let’s innovate together!

Remember, optimizing your models is not just about speed—it's about unlocking new possibilities and ensuring your AI solutions are robust, efficient, and ready for the future. 
