
Ultralytics YOLO11 on NVIDIA Jetson Orin Nano Super: fast and efficient

Explore how deploying Ultralytics YOLO11 on NVIDIA Jetson Orin Nano Super delivers impressive benchmarks and GPU-accelerated performance for advanced AI applications.

The NVIDIA Jetson Orin Nano Super Developer Kit, launched on December 17, 2024, is a compact but powerful generative AI supercomputer designed to bring advanced capabilities to edge computing. It facilitates real-time processing and eliminates the need for cloud computing. The NVIDIA Jetson Orin Nano Super lets developers build affordable intelligent systems that work efficiently in local environments.

When paired with Ultralytics YOLO models like Ultralytics YOLO11, the Jetson Orin Nano Super can handle a vast range of Vision AI applications on the edge. In particular, YOLO11 is a computer vision model known for its speed and accuracy in tasks like object detection, object tracking, and instance segmentation. 

Combining YOLO11’s abilities with the kit’s robust GPU (Graphics Processing Unit) and support for frameworks like PyTorch, ONNX, and NVIDIA TensorRT enables high-performance deployments. This combination provides developers with an efficient solution for creating AI applications, from object detection in robotics to real-time object tracking in smart spaces and retail systems.

In this article, we’ll look at the NVIDIA Jetson Orin Nano Super Developer Kit, how it works with Ultralytics YOLO11 for edge AI, its performance benchmarks, real-world applications, and how it can help developers build Vision AI projects. Let’s get started!

What is the NVIDIA Jetson Orin Nano Super Developer Kit?

The NVIDIA Jetson Orin Nano Super Developer Kit is a compact, yet powerful computer that redefines generative AI for small edge devices. It delivers up to 67 TOPS (trillions of operations per second) of AI performance, making it ideal for developers, students, and hobbyists working on advanced AI projects.

Fig 1. An overview of the NVIDIA Jetson Orin Nano Super.

Here are some of its key features:

  • GPU performance: The device is built on the NVIDIA Ampere architecture GPU, which includes 1,024 CUDA cores and 32 Tensor Cores. CUDA cores process many tasks simultaneously, speeding up complex computations, while Tensor Cores are specialized for AI tasks like deep learning. 
  • Powerful CPU: It features a 6-core Arm Cortex-A78AE processor, designed to balance speed and efficiency. The device can handle multiple tasks smoothly while keeping energy usage low. This is important for systems running locally without access to large power sources.
  • Efficient memory: The kit comes with 8GB of LPDDR5 (Low Power Double Data Rate 5) memory. LPDDR5 is a type of RAM (Random Access Memory) optimized for speed and energy efficiency, allowing the device to handle large datasets and real-time processing without consuming excessive power.
  • Connectivity options: It includes USB 3.2 ports for quick data transfers, a Gigabit Ethernet port for strong network connections, and camera interfaces for integrating sensors or cameras.
  • AI development tools: The Jetson Orin Nano Super works with the NVIDIA JetPack SDK, which provides tools like CUDA for faster computing and TensorRT for optimizing AI models. These tools make it easier for developers to build and deploy AI applications quickly and efficiently.

Performance benchmarks: Jetson Orin Nano Super vs. Jetson Orin NX 16GB

If you’re familiar with NVIDIA’s work, you might be wondering how this new release compares with the existing NVIDIA Jetson Orin NX 16GB (without super mode). While the Jetson Orin NX offers higher overall capabilities, the Jetson Orin Nano Super Developer Kit provides impressive performance at a fraction of the cost. 

Fig 2. A look at the NVIDIA Jetson Orin ecosystem.

Here’s a quick overview:

  • AI performance: Jetson Orin Nano Super delivers up to 67 TOPS, which is great for most edge AI tasks, while Jetson Orin NX offers up to 100 TOPS for more demanding applications.
  • Memory: Jetson Orin Nano Super includes 8GB LPDDR5, enough for real-time tasks, while Orin NX doubles it to 16GB for larger workloads.
  • Power efficiency: Jetson Orin Nano Super is more energy-efficient and configurable between 7W and 25W, compared with the Jetson Orin NX’s higher power demands.
  • GPU: Both share the NVIDIA Ampere architecture with 1,024 CUDA cores and 32 Tensor Cores for robust GPU performance.

YOLO11 with Jetson Orin Nano Super: Bringing vision AI to the edge

Now that we have a better understanding of the Jetson Orin Nano Super, let’s take a look at how YOLO11 can step in to bring Vision AI capabilities to the edge. Ultralytics YOLO models, including YOLO11, come with versatile modes like train, predict, and export, making them adaptable to a variety of AI workflows. 

For example, in the training mode, Ultralytics YOLO models can be fine-tuned and trained on custom datasets for specific applications, such as detecting unique objects or optimizing for specific environments. Similarly, the prediction mode is designed for inference, enabling real-time computer vision tasks. Finally, the export mode can be used to convert models into formats optimized for deployment.
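As a rough illustration, here is a minimal sketch of how these three modes are typically invoked with the Ultralytics Python API; the dataset name, image path, and epoch count below are placeholders you would replace with your own:

    from ultralytics import YOLO

    # Load a pretrained YOLO11 nano model
    model = YOLO("yolo11n.pt")

    # Train mode: fine-tune on a custom dataset (coco8.yaml is a small sample dataset)
    model.train(data="coco8.yaml", epochs=10, imgsz=640)

    # Predict mode: run inference on an image
    results = model.predict("path/to/image.jpg")

    # Export mode: convert the model into a deployment-ready format
    model.export(format="onnx")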

Fig 3. Ultralytics YOLO models support various features and modes.

YOLO11 in export mode supports a range of model deployment options, including, among others:

  • NVIDIA TensorRT: This format is optimized for NVIDIA GPUs, offering high-performance and low-latency inference on the Jetson Orin Nano Super.
  • ONNX (Open Neural Network Exchange): It ensures compatibility across various platforms, making it versatile for different hardware and software ecosystems.
  • TorchScript: This format is ideal for PyTorch-based applications, helping with seamless integration into PyTorch workflows.
  • TFLite (TensorFlow Lite): A format designed for lightweight AI deployments, making it perfect for mobile and embedded systems.

Using these deployment formats, developers can take full advantage of the Jetson Orin Nano Super’s hardware to run YOLO11 for real-time applications like smart spaces, robotics, and retail automation. 
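As a minimal sketch of what this looks like in practice, the snippet below exports YOLO11 to TensorRT and ONNX and then runs the TensorRT engine; it assumes TensorRT is available on the Jetson (for example via JetPack) and that device=0 refers to the onboard GPU:

    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")

    # Export to a TensorRT engine for low-latency inference on the Jetson GPU
    model.export(format="engine", half=True, device=0)

    # Export to ONNX for cross-platform compatibility
    model.export(format="onnx")

    # Load the exported TensorRT engine and run inference with it
    trt_model = YOLO("yolo11n.engine")
    results = trt_model.predict("path/to/image.jpg")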

Benchmarking YOLO11 on the NVIDIA Jetson Orin Nano Super

Next, to get a better idea of how fast YOLO11 can run on the NVIDIA Jetson Orin Nano Super, let’s explore its performance and benchmarks using GPU-accelerated export formats like PyTorch, ONNX, and TensorRT. These tests show that, with YOLO11 models, the Jetson Orin Nano Super achieves inference times comparable to, and occasionally better than, those of the existing Jetson Orin NX 16GB (without super mode).
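While the exact benchmarking setup isn’t detailed here, Ultralytics ships a benchmark utility that compares speed and accuracy across export formats on a given device; a minimal sketch, assuming the package is installed on the Jetson, could look like this:

    from ultralytics.utils.benchmarks import benchmark

    # Compare YOLO11n speed and accuracy across export formats on the GPU (device=0)
    benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)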

Fig 4. Benchmarking YOLO11 on NVIDIA Jetson Orin Nano Super.

What makes this even more remarkable is the Jetson Orin Nano Super’s affordability. Offering such performance at less than half the price of the Jetson Orin NX 16GB, it provides exceptional value for developers building high-performance YOLO11 applications. This combination of cost and performance makes the Jetson Orin Nano Super an excellent choice for real-time Vision AI tasks at the edge.

Fig 5. Benchmarking YOLO11 on Jetson Orin NX 16GB.

Get hands-on with YOLO11 and the NVIDIA Jetson Orin Nano Super

If you’re excited about getting started with deploying YOLO11 on the Jetson Orin Nano Super, there’s good news: it’s a straightforward process. After flashing your device with the NVIDIA JetPack SDK, you can either use a pre-built Docker image for quick setup or manually install the necessary packages.

For those looking for a faster and more seamless integration, the updated JetPack 6 Docker container is the ideal solution. A Docker container is a lightweight, portable environment that includes all the necessary tools and dependencies to run specific software. 

The Ultralytics container, optimized for JetPack 6.1, comes preloaded with CUDA 12.6, TensorRT 10.3, and essential tools like PyTorch and TorchVision, all tailored for Jetson’s ARM64 architecture. By using this container, developers can save time on setup and focus on building and optimizing their Vision AI applications with YOLO11.
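Once inside the container, a quick sanity check can confirm that the GPU and the preinstalled libraries are visible to Python. The snippet below is a minimal sketch; the sample image URL is just an illustrative test input:

    import torch
    import tensorrt
    from ultralytics import YOLO

    # Confirm PyTorch can see the Jetson's GPU
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
    print("TensorRT version:", tensorrt.__version__)

    # Smoke test: run a single YOLO11 inference
    YOLO("yolo11n.pt").predict("https://ultralytics.com/images/bus.jpg")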

Applications of YOLO11 on the NVIDIA Jetson Orin Nano Super

If you’re looking for inspiration for your next AI project, there’s potential for edge-based computer vision applications all around us.

In everyday life, edge AI is redefining smart spaces by enabling systems to detect and track objects in real time, all without relying on cloud processing. Whether it’s monitoring traffic in a bustling city or identifying unusual activity in public spaces, edge Vision AI is boosting security and efficiency.
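As a rough sketch of what such a system can look like in code, the snippet below runs YOLO11 tracking on a live camera feed; source=0 assumes a USB camera attached to the device, and any video file or RTSP stream URL would work in its place:

    from ultralytics import YOLO

    # Load YOLO11 and track objects on a live video source in real time
    model = YOLO("yolo11n.pt")
    results = model.track(source=0, show=True, tracker="bytetrack.yaml")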

Retailers are also tapping into edge AI and computer vision. From automated inventory checks to theft prevention, models like YOLO11 make it possible for businesses to deploy real-time solutions directly in stores. 

Similarly, when it comes to AI in healthcare, edge-based monitoring ensures patient safety, detects anomalies, and maintains compliance, all without the delays caused by cloud dependency. With tools like the Jetson Orin Nano Super and YOLO11, the future of Vision AI is unfolding right at the edge, where it’s needed most.

Key takeaways

Deploying Ultralytics YOLO models like YOLO11 on the NVIDIA Jetson Orin Nano Super Developer Kit offers a reliable and efficient solution for edge AI applications. With robust GPU performance, seamless support for PyTorch, ONNX, and TensorRT, and impressive benchmarks, it’s well-suited for real-time computer vision tasks like object detection and tracking. 

Innovations and collaborations in cutting-edge technologies like Vision AI and hardware acceleration are transforming how we work, empowering developers to build scalable, high-performance solutions at the edge. As AI advances, tools like YOLO11 and the Jetson Orin Nano Super are making it easier than ever to bring intelligent, real-time solutions to life.

Curious about AI? Visit our GitHub repository to explore our contributions and engage with our community. See how we’re using AI to make an impact in industries like agriculture and healthcare.
