# YOLO26s
## Model Description
YOLO26s is the small-sized variant of the YOLO26 model family, offering a strong balance between inference speed and detection accuracy.
Positioned above the nano model, YOLO26s provides improved representational capacity while remaining efficient enough for real-time applications on edge, desktop, and entry-level server hardware. It uses an end-to-end, NMS-free architecture, simplifying deployment and reducing inference overhead.
YOLO26s is pretrained on the COCO dataset and is well-suited for production object detection pipelines that require both responsiveness and reliability.
## Quickstart
- Install NexaSDK and create a free account at sdk.nexa.ai
- Activate your device with your access token:
  ```bash
  nexa config set license '<access_token>'
  ```
- Run the model on Qualcomm NPU in one line:
  ```bash
  nexa infer NexaAI/yolo26s-npu
  ```
## Features
- Small model size with balanced speed and accuracy
- End-to-end detection with NMS-free inference
- Real-time capable across a wide range of hardware
- Pretrained weights available for immediate use
- Ultralytics ecosystem support for training, validation, inference, and export (see the sketch below)
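
As a minimal sketch of the Ultralytics workflow referenced above: the snippet loads a local YOLO26s checkpoint, runs inference, validates, and exports. The weight file name `yolo26s.pt`, the image path, and the dataset config are assumptions, not confirmed artifact names; substitute the files you actually have.

```python
from ultralytics import YOLO

# Load a local YOLO26s checkpoint (file name is an assumption;
# point this at the weights you downloaded).
model = YOLO("yolo26s.pt")

# Inference on a single image (placeholder path).
results = model("path/to/image.jpg")
results[0].show()  # visualize detections

# Validate against a COCO-style dataset config (assumed to be available).
metrics = model.val(data="coco.yaml")
print(metrics.box.map)  # mAP50-95

# Export for deployment; ONNX is one commonly supported target.
model.export(format="onnx")
```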
## Use Cases
- Real-time video analytics
- Edge and desktop object detection
- Smart retail and surveillance systems
- Robotics and automation
- Production-ready computer vision applications
## Inputs and Outputs
Input:
- Images or video streams, automatically preprocessed by the Ultralytics framework
Output:
- Bounding boxes
- Class labels
- Confidence scores
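
As a rough illustration of how these output fields are exposed when running the model through the Ultralytics Python API (the checkpoint name and image path below are assumptions), each detection carries a bounding box, a class label, and a confidence score:

```python
from ultralytics import YOLO

model = YOLO("yolo26s.pt")            # checkpoint name is an assumption
results = model("path/to/image.jpg")  # placeholder image path

for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners
        cls_id = int(box.cls[0])               # class index
        label = result.names[cls_id]           # class label (COCO names)
        conf = float(box.conf[0])              # confidence score
        print(f"{label}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```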
## License
This repo is licensed under the Creative Commons Attribution–NonCommercial 4.0 (CC BY-NC 4.0) license, which allows use, sharing, and modification only for non-commercial purposes with proper attribution. All NPU-related models, runtimes, and code in this project are protected under this non-commercial license and cannot be used in any commercial or revenue-generating applications. Commercial licensing or enterprise usage requires a separate agreement. For inquiries, please contact [email protected]