YOLOv5 v7.0, the latest version of our AI architecture, is out, and we are thrilled to introduce our new instance segmentation models!
While working on this latest release, we’ve kept two objectives front and center: our mission to make AI easy, and our goal to redefine what “state-of-the-art” truly means.
So, with significant improvements, fixes, and upgrades, we’ve done just that. The new models keep the same simple workflows as our existing YOLOv5 object detection models, so it’s now easier than ever to train, validate, and deploy your models with YOLOv5 v7.0. On top of this, we’ve surpassed all SOTA benchmarks, making YOLOv5 the fastest and most accurate model in the world.
As this is our first release of segmentation models, we are immensely proud of this milestone. We owe many thanks to our dedicated community and contributors, who have helped make this release possible.
So, let's get started with the YOLOv5 v7.0 release notes!
Here’s what’s been updated in YOLOv5 since our last release of YOLOv5 v6.2 in August 2022.
We trained YOLOv5 segmentation models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google Colab Pro notebooks for easy reproducibility.
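If you want to sanity-check CPU latency on your own hardware, here is a minimal sketch that times an exported ONNX model with onnxruntime. It assumes you have already run the export step shown later in these notes, and the file name, input size, and iteration count are illustrative rather than the official benchmark setup.

import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov5s-seg.onnx", providers=["CPUExecutionProvider"])  # assumed export output
image = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy NCHW input, values scaled to 0-1
input_name = session.get_inputs()[0].name

session.run(None, {input_name: image})  # warmup
start, runs = time.perf_counter(), 50
for _ in range(runs):
    session.run(None, {input_name: image})
print(f"{(time.perf_counter() - start) / runs * 1000:.1f} ms per image (ONNX FP32, CPU)")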
YOLOv5 segmentation training supports auto-download of the COCO128-seg segmentation dataset with the --data coco128-seg.yaml argument, and manual download of the COCO-segments dataset with bash data/scripts/get_coco.sh --train --val --segments followed by python train.py --data coco.yaml. A short sketch of the segmentation label format follows the training commands below.
# Single-GPU
python segment/train.py --weights yolov5s-seg.pt --data coco128-seg.yaml --epochs 5 --img 640
# Multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --weights yolov5s-seg.pt --data coco128-seg.yaml --epochs 5 --img 640 --device 0,1,2,3
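For reference, each line of a YOLOv5 segmentation label is a class index followed by a normalized polygon, one object per line. The sketch below parses a single label file; the path is a hypothetical example from the auto-downloaded COCO128-seg dataset.

from pathlib import Path

# Hypothetical label file from the auto-downloaded COCO128-seg dataset
label_file = Path("../datasets/coco128-seg/labels/train2017/000000000009.txt")
for line in label_file.read_text().splitlines():
    values = line.split()
    cls = int(values[0])                         # class index
    xy = [float(v) for v in values[1:]]          # x1 y1 x2 y2 ... normalized to 0-1
    points = list(zip(xy[0::2], xy[1::2]))       # (x, y) polygon vertices
    print(f"class {cls}: polygon with {len(points)} points")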
Validate YOLOv5s-seg mask mAP on the COCO dataset:
bash data/scripts/get_coco.sh --val --segments  # download COCO val segments split (780MB, 5000 images)
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640  # validate
Use pretrained YOLOv5m-seg to predict bus.jpg:
python segment/predict.py --weights yolov5m-seg.pt --source data/images/bus.jpg
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5m-seg.pt') # load from PyTorch Hub (WARNING: inference not yet supported)
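By default, segment/predict.py writes its annotated results to runs/predict-seg. Assuming the default --project and --name were left unchanged, a quick way to locate the newest run and see what was written:

from pathlib import Path

# Assumes the default output location runs/predict-seg/exp* was not overridden
runs = [p for p in Path("runs/predict-seg").iterdir() if p.is_dir()]
latest = max(runs, key=lambda p: p.stat().st_mtime)  # most recently modified run directory
print(latest, [p.name for p in latest.iterdir()])    # annotated images, plus labels if --save-txt was used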
Export YOLOv5s-seg model to ONNX and TensorRT:
python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0
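As a quick sanity check on the ONNX artifact, here is a minimal sketch using the onnx package; the file name is assumed to match the export command above.

import onnx

model = onnx.load("yolov5s-seg.onnx")        # path assumed from the export command above
onnx.checker.check_model(model)              # raises if the exported graph is malformed
print([o.name for o in model.graph.output])  # -seg exports carry a detection tensor plus mask prototypes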
Have any questions? Ask on the Ultralytics forum, raise an issue, or submit a PR on the repo. You can also get started with our YOLOv5 segmentation Colab notebook for quickstart tutorials.