Ultralytics Platform
Take your trained models from browser testing to production endpoints in a few clicks, with auto-scaling, real-time monitoring, and 17+ export formats.

43+
Deployment regions
17+
Export formats
2.7B+
Daily uses





Dedicated endpoints scale up for traffic spikes and down to zero when idle.
Scale to zero by default. No cost when your endpoint isn't receiving requests.
No rate limits. Dedicated endpoints have no throughput caps.
Configurable resources. Choose CPU (1–8 cores) and memory (1–32 GB) to match your workload.
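The Platform manages scaling for you, but conceptually the policy behaves like the sketch below. The per-replica capacity and the replica cap are illustrative numbers, not Platform defaults:

```python
import math

def desired_replicas(requests_per_sec: float,
                     per_replica_capacity: float = 10.0,
                     max_replicas: int = 8) -> int:
    """Conceptual autoscaling rule: grow with load, shrink to zero when idle.
    Capacity and cap are hypothetical values, not Platform defaults."""
    if requests_per_sec <= 0:
        return 0  # scale to zero: no replicas, no cost while idle
    return min(max_replicas, math.ceil(requests_per_sec / per_replica_capacity))

print(desired_replicas(0))   # idle -> 0 replicas
print(desired_replicas(25))  # 25 req/s at 10 req/s per replica -> 3 replicas
```

The key property is the first branch: with no traffic, the desired replica count is zero, which is what makes idle endpoints free.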
Ultralytics Platform supports cloud and edge deployment for high-performance inference. All Ultralytics YOLO models are natively optimized to run efficiently across environments, delivering high accuracy, reliable performance, and compatibility even on edge devices with limited compute resources.


Complete real-time visibility into your models' performance. Once your models are live, the deployments dashboard gives you a centralized overview of every running endpoint, with the metrics and toolkit you need to optimize and keep your endpoints running reliably.
Request volume. Total requests across all endpoints over the last 24 hours.
P95 latency. 95th percentile response time, so you can track real-world performance.
Error rates. Clear alerts when error rates exceed 5%, with severity-filtered logs to diagnose issues fast.
Health checks. Live endpoint monitoring with auto-retry. Latency displayed per check.
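P95 latency, for example, is just the 95th percentile of response times over a window. A minimal sketch using the nearest-rank method:

```python
import math

def p95_ms(latencies_ms: list) -> float:
    """95th-percentile latency via the nearest-rank method."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# 100 samples with values 1..100 ms: the 95th percentile is 95 ms
print(p95_ms(list(range(1, 101))))  # -> 95
```

Because P95 ignores the slowest 5% of requests, it reflects what most users experience while still surfacing broad latency regressions.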
Every deployed endpoint comes with auto-generated code examples in Python, JavaScript, and cURL, pre-populated with your actual endpoint URL and API key. Copy, paste, and start sending inference requests from any application.
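A rough sketch of what such a Python snippet might look like, using only the standard library. The endpoint URL, API key, auth header, and JSON body shape below are placeholders, not the Platform's actual generated values:

```python
import base64
import json
import urllib.request

# Placeholder values -- the Platform pre-fills the real ones for your endpoint.
ENDPOINT_URL = "https://YOUR-ENDPOINT.example.com/predict"  # hypothetical
API_KEY = "YOUR_API_KEY"  # hypothetical

def build_request(image_bytes: bytes) -> urllib.request.Request:
    """Package an image as a JSON inference request (illustrative format)."""
    body = json.dumps({"image": base64.b64encode(image_bytes).decode()}).encode()
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is one call away (requires a live endpoint):
# with open("image.jpg", "rb") as f:
#     with urllib.request.urlopen(build_request(f.read())) as resp:
#         print(json.load(resp))
```

The generated snippets in the dashboard already contain your endpoint's real URL and key, so you can copy them verbatim instead of assembling the request yourself.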

1. Annotate
2. Train
3. Deploy
Yes. Each model can be deployed to multiple regions simultaneously. Your plan determines the total number of endpoints available: 3 for Free, 10 for Pro, and unlimited for Enterprise. This allows you to serve users globally with low-latency endpoints in each region.
Dedicated endpoints are billed based on CPU, memory, and request volume. With scale-to-zero enabled by default, you only pay for active inference time; there's no cost when your endpoint isn't receiving requests. Shared inference is included with your platform plan.
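As a back-of-the-envelope illustration of that billing model (all rates below are hypothetical placeholders, not Ultralytics pricing):

```python
def estimated_cost(cpu_cores: int, memory_gb: int, active_hours: float,
                   requests: int, cpu_rate: float = 0.04,
                   mem_rate: float = 0.005, req_rate: float = 4e-7) -> float:
    """Illustrative cost model: compute is billed only for active hours
    (scale-to-zero makes idle time free). All rates are hypothetical."""
    compute = (cpu_cores * cpu_rate + memory_gb * mem_rate) * active_hours
    return compute + requests * req_rate

# An endpoint that stays idle all month incurs no compute cost:
print(estimated_cost(4, 8, active_hours=0, requests=0))  # -> 0.0
```

The point of the sketch is the structure, not the numbers: cost scales with the resources you configure and the time your endpoint actually spends serving requests.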
Shared inference runs on a multi-tenant service across 3 regions and is rate-limited to 20 requests per minute. It's best for development and quick testing. Dedicated endpoints are single-tenant services deployed to any of 43 regions with no rate limits, consistent latency, and configurable resources, built for scalable production workloads.
Dedicated endpoint deployment typically takes one to two minutes. This includes container provisioning, startup, and an initial health check to validate the service is ready. Once the endpoint is ready, it begins accepting inference requests immediately.
Model deployment is the process of making a trained computer vision model available to receive and process real-world data. Once deployed, computer vision applications can send images and video frames to the model via API and receive predictions, enabling everything from automated quality inspection to real-time object detection in production systems. On Ultralytics Platform, deployment is integrated directly into the end-to-end training workflow. Once your model is trained, you can test it in the browser, deploy it to a dedicated endpoint in any of 43 global regions, and monitor its performance, all from the same workspace.
Take your trained models to production across 43 global regions with auto-scaling and real-time monitoring.