
Introducing Instance Segmentation in YOLOv5 v7.0

Discover YOLOv5 v7.0 with new instance segmentation models, outperforming SOTA benchmarks for top AI accuracy and speed. Join our community.


YOLOv5 v7.0, the latest version of our AI architecture, is out, and we are thrilled to introduce our new instance segmentation models!

While working on this latest release, we’ve kept two objectives front and center: our mission to make AI easy, and our goal of redefining what “state-of-the-art” truly means.

So, with significant improvements, fixes, and upgrades, we’ve done just that. Keeping the same simple workflows as our existing YOLOv5 object detection models, it’s now easier than ever to train, validate and deploy your models with YOLOv5 v7.0. On top of this, we’ve surpassed all SOTA benchmarks, effectively making YOLOv5 the fastest and most accurate in the world.

As this is our first release of segmentation models, we are immensely proud of this milestone. We owe many thanks to our dedicated community and contributors, who have helped make this release possible.  

Ultralytics YOLOv5 v7.0 SOTA Realtime Instance Segmentation

So, let's get started with the YOLOv5 v7.0 release notes!

Important YOLOv5 Updates

Here’s what’s been updated in YOLOv5 since our last release of YOLOv5 v6.2 in August 2022.

  • Segmentation Models ⭐ NEW: SOTA YOLOv5-seg COCO-pretrained segmentation models are now available for the first time (#9052 by @glenn-jocher, @AyushExel, and @Laughing-q)
  • PaddlePaddle Export: Export any YOLOv5 model (cls, seg, det) to Paddle format with python export.py --include paddle (#9459 by @glenn-jocher)
  • YOLOv5 AutoCache: Running python train.py --cache ram will now scan available memory and compare it against the predicted dataset RAM usage. This reduces the risk of out-of-memory errors when caching and should help improve adoption of the dataset caching feature, which can significantly speed up training; see the sketch after this list. (#10027 by @glenn-jocher)
  • Comet Logging and Visualization Integration: Free forever, Comet lets you save YOLOv5 models, resume training, and interactively visualize and debug predictions. (#9232 by @DN6)
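
Under the hood, the AutoCache decision comes down to comparing an estimate of the decoded dataset's size against the RAM that is actually free. Below is a minimal sketch of that idea, not the exact YOLOv5 implementation; the can_cache_in_ram helper, the 30-image sample, and the 10% safety margin are illustrative assumptions.

import random

import cv2
import numpy as np
import psutil

def can_cache_in_ram(image_paths, img_size=640, sample=30, safety_margin=1.1):
    """Estimate decoded dataset RAM usage from a sample of images and compare it to free memory."""
    sample_paths = random.sample(image_paths, min(sample, len(image_paths)))
    sampled_bytes = []
    for path in sample_paths:
        im = cv2.imread(path)                         # decoded HWC uint8 image
        ratio = img_size / max(im.shape[:2])          # resize ratio to the training image size
        sampled_bytes.append(im.nbytes * ratio ** 2)  # approximate bytes after resizing
    estimated_total = np.mean(sampled_bytes) * len(image_paths) * safety_margin
    return estimated_total < psutil.virtual_memory().available  # only cache if it fits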

New Segmentation Checkpoints

We trained YOLOv5 segmentation models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google Colab Pro notebooks for easy reproducibility.

  • All checkpoints are trained to 300 epochs with the SGD optimizer (lr0=0.01, weight_decay=5e-5) at image size 640 and all default settings. All runs are logged here.
  • Accuracy values are for single-model, single-scale on the COCO dataset. Reproduce with python segment/val.py --data coco.yaml --weights yolov5s-seg.pt
  • Speed averaged over 100 inference images using a Colab Pro A100 High-RAM instance. Values indicate inference speed only (NMS adds about 1 ms per image). Reproduce with python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1
  • Export to ONNX at FP32 and TensorRT at FP16 done with export.py. Reproduce with python export.py --weights yolov5s-seg.pt --include engine --device 0 --half
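
As a rough, standalone cross-check of the CPU speed notes above, the exported ONNX model can be timed directly with onnxruntime. This is a hedged sketch, not the benchmark script used for the official figures: it assumes yolov5s-seg.onnx has already been produced by export.py (e.g. with --include onnx) and, like the numbers above, it excludes NMS.

import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov5s-seg.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)  # static 1x3x640x640 FP32 input
for _ in range(3):                                          # warmup runs
    session.run(None, {input_name: dummy})
start, runs = time.time(), 100
for _ in range(runs):
    session.run(None, {input_name: dummy})
print(f"{(time.time() - start) / runs * 1000:.1f} ms per image (inference only, no NMS)")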

New Segmentation Usage Examples

Train

YOLOv5 segmentation training supports automatic download of the COCO128-seg segmentation dataset with the --data coco128-seg.yaml argument, and manual download of the COCO-segments dataset with bash data/scripts/get_coco.sh --train --val --segments followed by python segment/train.py --data coco.yaml.

Single-GPU

python segment/train.py --weights yolov5s-seg.pt --data coco128-seg.yaml --epochs 5 --img 640

Multi-GPU DDP

python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --weights yolov5s-seg.pt --data coco128-seg.yaml --epochs 5 --img 640 --device 0,1,2,3

Val

Validate YOLOv5s-seg accuracy on the COCO dataset:

bash data/scripts/get_coco.sh --val --segments  # download COCO val segments split (780MB, 5000 images)
python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640  # validate

Predict

Use pretrained YOLOv5m-seg to predict bus.jpg:

python segment/predict.py --weights yolov5m-seg.pt --source data/images/bus.jpg

model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5m-seg.pt')  # load from PyTorch Hub (WARNING: inference not yet supported)
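
As the warning indicates, the Hub-loaded -seg checkpoint does not yet support the convenient high-level inference API, but it remains an ordinary PyTorch module, so a raw forward pass is still possible. The snippet below is a hedged sketch under that assumption; NMS and mask assembly are deliberately left out.

import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5m-seg.pt')
model = model.cpu().eval()              # keep the sketch on CPU
im = torch.zeros(1, 3, 640, 640)        # dummy letterboxed RGB input, values in [0, 1]
with torch.no_grad():
    out = model(im)                     # raw head outputs: detections plus mask prototypes
print(type(out))                        # NMS and mask assembly are up to the caller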

Ultralytics YOLOv5 v7.0 Instance Segmentation


Export

Export YOLOv5s-seg model to ONNX and TensorRT:

python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0
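
Once the export finishes, a quick structural check of the resulting ONNX file can catch a broken graph early. Here is a small sketch using the onnx Python package (an assumption, since the post itself only uses export.py); the filename assumes the command above was run from the repository root.

import onnx

onnx_model = onnx.load("yolov5s-seg.onnx")
onnx.checker.check_model(onnx_model)              # raises if the exported graph is malformed
print([i.name for i in onnx_model.graph.input],
      [o.name for o in onnx_model.graph.output])  # list the graph's input/output names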

Ultralytics YOLOv5 v7.0 Instance Segmentation

Have any questions? Ask on the Ultralytics forum, raise an issue, or submit a PR on the repo. You can also get started with our YOLOv5 segmentation Colab notebook for quickstart tutorials.
