Explore highlights from Ultralytics' annual event and relive the YOLO Vision hybrid experience. We'll cover Ultralytics' YOLO11 launch, engaging panels, and more.
On September 27th, Ultralytics brought together the AI and computer vision community for our exciting annual hybrid event, YOLO Vision 2024 (YV24). Hosted at the Google for Startups Campus in Madrid and streamed globally, the event gathered experts, developers, and enthusiasts to discuss the latest advancements in Vision AI, such as the new Ultralytics YOLO11 model. The live stream of the event has already reached over 5,400 views, with more than 10,600 impressions and 469.5 watch hours, engaging innovators around the world.
YV24 started with a warm welcome from our host, Oisin Lunny, who emphasized the importance of community and connection by saying, “I’m a great believer in the power of great ideas and great communities, and what Ultralytics has created with YOLO Vision is just that - a great community of great people with great ideas.”
In this article, we'll pinpoint the key highlights from YOLO Vision 2024, from the engaging panel discussions to fascinating real-world use cases of computer vision. We’ll also explore technical talks ranging from edge AI to hardware acceleration, as well as the networking and community-building moments that made the event a success. Whether you're interested in AI innovations, key announcements, or the future of Vision AI, this YOLO Vision 2024 event recap covers all the essential takeaways!
The product launch teased ahead of YOLO Vision 2024 was finally revealed during the opening keynote by Glenn Jocher, Ultralytics’ Founder and CEO. Glenn introduced Ultralytics YOLO11, the next generation of computer vision models, which had been in development for several months. Adding to the excitement of the launch, Glenn was later interviewed on The Ravit Show, where he shared insights about the development of YOLO11.
During his keynote, Glenn also shared the story of the company’s journey, starting with his background in particle physics and how his fascination with understanding the universe eventually led him to machine learning and computer vision.
He explained how his early work in physics, where researchers analyzed particle interactions, was similar to object detection in computer vision. His curiosity and drive to work on cutting-edge technology ultimately led to the creation of Ultralytics YOLOv5. Throughout his talk, Glenn stressed the importance of collaboration and contribution within the open-source community and thanked developers worldwide who have provided feedback and helped improve YOLOv5 and Ultralytics YOLOv8 over time.
He then introduced the key features of Ultralytics YOLO11 and explained that it’s faster, more accurate, and more efficient than previous models. In fact, YOLO11m uses 22% fewer parameters than YOLOv8m yet delivers better accuracy on the COCO dataset, making YOLO11 perfect for real-time applications where speed and accuracy are fundamental.
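For readers who want to try the new model themselves, here's a minimal inference sketch using the Ultralytics Python package; the image path is a placeholder, and the `yolo11m.pt` weights download automatically on first use:

```python
from ultralytics import YOLO

# Load the medium-sized pretrained YOLO11 model
model = YOLO("yolo11m.pt")

# Run inference on an image (placeholder path) and inspect the detections
results = model("path/to/image.jpg")
for result in results:
    print(result.boxes.xyxy)  # bounding boxes as (x1, y1, x2, y2)
    print(result.boxes.conf)  # confidence scores
    print(result.boxes.cls)   # class indices
```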
Glenn emphasized the scale of the launch by saying, "We are launching 30 models in total, 25 of these are open source, with five different sizes for five different tasks. The tasks are image classification, object detection, instance segmentation, pose estimation, and oriented bounding boxes." On the enterprise side, he announced that next month, robust models trained on a proprietary dataset of 1 million images would be available. Needless to say, the announcement kicked off the event on a high note, leaving attendees eager to learn more about YOLO11’s potential to innovate across fields like manufacturing and self-driving cars.
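To make "five sizes for five tasks" concrete, the sketch below follows the checkpoint naming convention from the Ultralytics docs, shown at the nano ("n") size; the s/m/l/x variants follow the same pattern:

```python
from ultralytics import YOLO

# One checkpoint suffix per task, shown here at the nano ("n") size
tasks = {
    "image classification": "yolo11n-cls.pt",
    "object detection": "yolo11n.pt",
    "instance segmentation": "yolo11n-seg.pt",
    "pose estimation": "yolo11n-pose.pt",
    "oriented bounding boxes": "yolo11n-obb.pt",
}

for task, weights in tasks.items():
    model = YOLO(weights)  # downloads the weights on first use
    print(f"{task}: {weights} -> task type '{model.task}'")
```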
The panel discussions at YOLO Vision 2024, moderated by Oisin Lunny, provided a range of insights into AI, computer vision, and community building.
The first panel featured Glenn Jocher, Jing Qiu (a key figure in the development of YOLO models at Ultralytics), and Ao Wang from Tsinghua University, who co-authored YOLOv10. The panel discussed recent developments in generative AI and computer vision, focusing on their similarities, differences, and the impact each field has had on the other. Despite the recent rise of large language models (LLMs), the panel noted that traditional computer vision is still essential for specific tasks in industries like healthcare.
The next panel tackled the challenges women face in AI leadership. The speakers, Ultralytics' Director of Growth Paula Derrenger, former SaaS CPO and COO Bruna de Guimarães, Latinas in Tech Madrid chapter lead Mariana Hernandez, and Dare to Data founder Christina Stathopoulous, shared their experiences and discussed the importance of mentorship and the need for women to take proactive steps in seeking leadership roles. Hernandez advised, “Be proactive, don’t wait for things to happen for you,” and encouraged women in the audience to assert themselves and actively pursue opportunities. The panel also discussed the value of creating more supportive work environments.
The final panel explored how building strong communities can foster innovation in AI. Burhan Qaddoumi, Harpreet Sahota, and Bart Farrell discussed ways to engage with technical audiences, both online and at in-person events. Farrell's insight, “You got to meet them where they are at,” emphasized the importance of connecting with community members on their terms to encourage collaboration and shared learning.
Several talks at YV24 shed light on how YOLO models are being applied to solve real-world challenges in various industries. Jim Griffin, host of the AI Master Group podcast, spoke about a project that uses YOLOv8 models to monitor shark movements along the California coastline through drone surveillance. The system alerts lifeguards, surf shop owners, and parents, ensuring beachgoers' safety by detecting sharks from 200 feet above the ocean. Griffin explained that the real challenge wasn’t the AI model itself but the extensive drone flights and data collection needed to train the model.
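The exact model and alerting stack from the talk weren't published, but a system like this can be sketched with the Ultralytics API; the `shark_detector.pt` weights and the video path below are hypothetical stand-ins for a custom-trained model and drone footage:

```python
from ultralytics import YOLO

# Hypothetical custom-trained weights; the talk's actual model wasn't published
model = YOLO("shark_detector.pt")

# Stream inference frame by frame over drone footage (placeholder path)
for result in model("drone_footage.mp4", stream=True):
    for box in result.boxes:
        if result.names[int(box.cls)] == "shark" and float(box.conf) > 0.5:
            print("Shark detected - alert lifeguards")  # stand-in for a real alerting hook
            break
```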
Similarly, David Scott from The Main Branch discussed the expansion of computer vision from simple object detection to behavior analysis. His talk featured real-world applications like tracking cattle behavior and identifying suspicious activities in retail stores. Scott shared how YOLOv8 can be used to monitor cattle health by analyzing specific behaviors, such as eating, drinking, and walking.
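Behavior analysis like this typically starts with tracking individual animals over time. Below is a minimal sketch using the Ultralytics tracking API; in practice the weights would be fine-tuned on cattle imagery, the video path is a placeholder, and classifying behaviors like eating or drinking from the resulting trajectories is a separate step not shown here:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # assumes fine-tuning on cattle data for real use

# persist=True keeps track IDs stable between frames, giving per-animal trajectories
for result in model.track("pasture_video.mp4", stream=True, persist=True):
    if result.boxes.id is not None:
        for track_id, box in zip(result.boxes.id.int().tolist(), result.boxes.xywh):
            x, y = float(box[0]), float(box[1])
            print(f"animal {track_id}: center at ({x:.0f}, {y:.0f})")
```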
A particularly heartfelt keynote came from Ousman Umar of NASCO Feeding Minds, who shared how his organization is changing lives by providing IT education in Ghana. His foundation has set up 17 ICT centers and trained over 65,000 students, with the goal of creating local tech jobs to help address issues like illegal immigration. Umar's powerful story conveyed how education and technology together can drive lasting change in underserved communities.
YV24 also featured different talks focused on how AI and hardware are coming together to spark new ideas. Experts from companies like Intel, Sony, and NVIDIA addressed deploying YOLO models on edge devices and optimizing performance. Dmitriy Pastushenkov and Adrian Boguszewski from Intel outlined how their hardware supports YOLO models across NPU, CPU, and GPU, while Sony's Amir Servi and Wei Tang shared how YOLO integrates with the AITRIOS platform for efficient edge AI deployment. Guy Dahan from NVIDIA talked about using their GPU architecture to improve YOLO model inference.
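Each vendor demoed its own toolchain, but the Ultralytics export API is a common starting point for this kind of deployment. A sketch under that assumption: OpenVINO targets Intel CPUs, GPUs, and NPUs, while the TensorRT "engine" format requires an NVIDIA GPU with TensorRT installed:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Export for Intel hardware via OpenVINO (CPU, GPU, and NPU)
model.export(format="openvino")

# Export a TensorRT engine for NVIDIA GPUs (requires CUDA and TensorRT)
# model.export(format="engine")

# Exported artifacts load back through the same API for inference
ov_model = YOLO("yolo11n_openvino_model/")
results = ov_model("path/to/image.jpg")  # placeholder image path
```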
Other companies like Qualcomm, Hugging Face, and Lightning AI also showcased how their platforms make it easier for developers to integrate and deploy YOLO models. Devang Aggarwal from Qualcomm presented how models like YOLOv8 can be optimized for Snapdragon devices through the Qualcomm AI Hub.
Similarly, Pavel Lakubovskii from Hugging Face described how their open-source tools enable seamless integration of models like YOLOv8 into various workflows, while Luca Antiga from Lightning AI walked us through how developers can easily incorporate models like YOLOv8 at the code level for quicker prototyping and iterations.
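This recap doesn't detail each platform's exact pipeline, but a common interchange point for toolchains like these is ONNX, which the Ultralytics package exports directly; a minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# ONNX is a widely supported interchange format that downstream
# compilers, runtimes, and deployment hubs can consume
onnx_path = model.export(format="onnx", simplify=True)
print(f"Exported ONNX model to {onnx_path}")
```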
In the week leading up to YV24, the Ultralytics team gathered in Madrid for a mix of workshops, collaborative meetings, and offsite activities. These activities went beyond work, nurturing stronger relationships and creating a positive atmosphere ahead of the event. The week wrapped up with a celebratory afterparty, where attendees and speakers had the opportunity to network, share key takeaways, and explore future collaborations. The combination of teamwork and camaraderie made YV24 a professional success and an all-round memorable experience.
YV24 brought together innovation, collaboration, and a look at the future of computer vision. With the launch of YOLO11, engaging panels, and discussions on AI hardware and edge solutions, the event focused on how Vision AI can make a difference and how technology is evolving to keep pace with advancements in AI. It also strengthened connections within the community, as experts and enthusiasts shared ideas and explored the potential of computer vision and YOLO. The event closed with a fun quiz session, where Ultralytics hoodies were up for grabs, leaving everyone excited for more innovations like YOLO11 in the future.
Visit our GitHub repository and connect with our thriving community to learn more about AI. See how Vision AI is redefining innovation in sectors like healthcare and agriculture. 🚀
Begin your journey with the future of machine learning