
Custom-training Ultralytics YOLO models on Lightning AI

Explore how Lightning AI, showcased at YOLO Vision 2024, simplifies scalable vision AI development with faster model training, deployment, and collaboration.

Whether you're an experienced AI developer or just starting to explore vision AI, having a reliable environment to play around and experiment with computer vision models like Ultralytics YOLO11 is key. An environment refers to the tools, resources, and infrastructure needed to design, test, and deploy AI models efficiently.

While several online platforms offer different AI tools, many do not provide a unified environment for the entire AI lifecycle, from data preparation to model deployment. This is where Lightning AI, an all-in-one platform for AI development, steps in to streamline that entire process.

The relevance of making AI development easier was showcased at YOLO Vision 2024 (YV24), an annual hybrid event hosted by Ultralytics that focused on advancements in AI and computer vision. Luca Antiga, CTO of Lightning AI, delivered a keynote talk titled 'Going YOLO on Lightning Studios,' where he broke down how to train Ultralytics YOLO models quickly and smoothly with Lightning AI, without getting bogged down in technical complexities.

In this article, we will dive into the key takeaways from Luca’s talk, covering everything from real-world computer vision applications to live demos on training and deploying Ultralytics YOLO models with Lightning AI. Let’s get started!

Using Lightning AI and Ultralytics YOLO to simplify AI development

Luca began his keynote by sharing his thoughts and appreciation for the influence of YOLO models across various industries. He highlighted how YOLO models can be applied in sectors like manufacturing and agriculture. He said, 'I appreciate the impact YOLO has had on the community of builders - people who need to solve actual, practical problems - this is very close to me.'

Connecting this to the growing interest in AI training, he introduced Lightning AI, a platform designed to make AI model development faster, simpler, and more accessible for everyone. It’s especially useful for supporting iterative advancements in AI, helping developers refine and improve models.

Fig 1. Luca Antiga remotely presenting about Lightning Studios at YV24.

He also pointed out that Lightning AI is similar to PyTorch Lightning, a framework that simplifies the process of training AI models. However, where it differs is that Lightning AI is a more comprehensive platform that provides a broader set of tools and capabilities for the entire AI development process, not just training AI models. 

A vital component of Lightning AI is Lightning Studios, which offers an intuitive workspace to design, train, and deploy AI models, making the entire workflow seamless and efficient. You can think of Lightning Studios as a reproducible development environment for AI that runs on the cloud. For example, it offers a Jupyter Notebook-like environment that can be duplicated and shared with another developer, helping to improve collaboration. 

Luca then elaborated on Lightning Studios’ advantages, saying, “Replicating your environment is not a problem anymore. If you need to change from a CPU [Central Processing Unit] machine to a GPU [Graphics Processing Unit] machine or launch training across a thousand machines, your environment will be persistent.”

Setting up Lightning Studios for training and development

Next, Luca demonstrated how quickly you can get started with Lightning Studios. With just a few clicks, you can open a new studio and have access to tools and environments like Jupyter Notebooks and VS Code, all set up and ready for coding. He showcased how easy it was to switch between different machines. If the task you are working on demands more power, you can easily switch from a CPU to a more powerful GPU. The GPU will remain active only while in use; otherwise, it will go into sleep mode, saving your credits.

Luca also mentioned the benefits of using Studio Templates. They are AI coding environments that are pre-made by the community, and you can use them without having to set anything up. Setting up an environment for AI projects can be time-consuming, and Studio Templates can help increase productivity. These environments come preloaded with everything needed for AI projects, like installed dependencies, model weights, data, code, etc. 

Fig 2. Luca explaining what Studio Templates are.

Training Ultralytics YOLO models on Lightning Studios

Luca then moved on to the live demo, highlighting how you can use Lightning Studios to train Ultralytics YOLO models. He opened a Studio Template that already had all the dependencies installed and spun up a machine with four GPUs to speed up the training process. As for data, he said you can either store it directly on the machine or stream it from the cloud, making training faster and more efficient.

Within a few seconds, the machine was ready, and Luca quickly kicked off the training session. During the demo, a minor issue caused the machine to stop unexpectedly, but Lightning Studios seamlessly resumed from where it left off, making sure no progress was lost. Luca pointed out how this reliability supports smooth workflows, even in the face of unexpected interruptions.

Continuing the demo, he showed how easy it is to monitor training progress using TensorBoard, a tool for visualizing machine learning metrics in real time. Lightning Studios makes this even simpler by automatically generating URLs that let you or your teammates in the same workspace access TensorBoard views without any extra setup. This streamlines collaboration and keeps everyone on the same page.

Fig 3. A flowchart on training Ultralytics YOLO models on Lightning Studios. Image by author.

Deploying Ultralytics YOLO models with LitServe

After the demo, Luca shifted the focus of the talk to a new project, LitServe, recently launched by Lightning AI. LitServe simplifies the process of taking a trained model and turning it into a scalable service that others can use, eliminating the need for complex deployment pipelines. It is designed to handle everything from packaging the model to deploying it with minimal effort.

To show this in real time, Luca gave the audience a quick demo using a pre-trained Ultralytics YOLOv8 model. He was able to create a simple API to handle incoming requests and return image predictions in a few seconds. This means anyone can ping this API with an image and receive results for computer vision tasks like object detection almost instantly. Behind the scenes, the Ultralytics YOLOv8 model is deployed as a service, efficiently handling requests, processing images, and delivering predictions with minimal latency.

Fig 4. Luca showcasing Lightning AI’s LitServe during YV24.

He ran an inference on an image of pizza, and Ultralytics YOLOv8 successfully identified objects such as the pizza, a spoon, and a dining table. He explained that while the first request takes slightly longer due to a 'cold start,' subsequent requests are much faster once the system is warmed up.

Luca then asked, 'What if I want to expose this to the outside world?' He outlined how the API Builder plugin makes turning your model into a live, production-ready service simple. With features like custom domains, added security, and seamless integration, you can easily make your model accessible to anyone.

Key advantages of using Lightning Studios

Concluding his talk, Luca touched on the scalability and flexibility of Lightning Studios for AI development. He mentioned how the platform can train models across multiple machines, scaling up to 10,000 nodes, with fault-tolerant training that automatically resumes after any interruptions.

For instance, if a training job on a GPU cluster is interrupted due to a hardware issue or a server reboot, Lightning Studios makes sure the process resumes exactly where it left off. This makes it ideal for large-scale AI projects, like training deep learning models on massive datasets such as ImageNet or COCO.

Here are some other key benefits of Lightning Studios that Luca spoke about:

  • Free monthly GPU credits: Users receive 15 free GPU credits each month, which refill automatically, so you can experiment and develop at no added cost.
  • Enhanced collaboration: Lightning Studio’s shared team spaces and reproducible environments enable team members to work together seamlessly, ensuring consistency and efficiency across projects.
  • Flexible instance options: You can choose between interruptible and non-interruptible instances, with interruptible options cutting the cost of GPU machines.
  • Integration with existing tools: The platform integrates with remote development tools like SSH (Secure Socket Shell) and VS Code, providing flexibility to work locally or in the cloud.

Key takeaways

Luca’s keynote at YV24 highlighted how AI, combined with tools like Ultralytics YOLO models and Lightning AI, is changing how we solve real-world problems. They make it easier for developers to train and deploy models that have been designed to tackle specific issues in a range of industries.

He illustrated how Lightning Studios makes the entire development process faster and more accessible, allowing developers to create powerful solutions easily. On cutting-edge platforms like Lightning AI, computer vision models are transforming how AI solutions handle challenges. In particular, with the latest Ultralytics YOLO11 model, developers can build solutions that make a meaningful impact.

Join our community to stay updated on AI and its practical uses. Check out our GitHub repository to explore innovations in sectors like AI in self-driving cars and computer vision in healthcare.
