DDRNet23-Slim: Optimized for Qualcomm Devices

DDRNet23-Slim is a machine learning model that segments an image into semantic classes. It is designed for road-based scenes, such as those encountered by self-driving cars.

This model is based on the implementation of DDRNet23-Slim found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| ONNX | w8a8 | Universal | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| QNN_DLC | float | Universal | QAIRT 2.45 | Download |
| TFLITE | float | Universal | QAIRT 2.45 | Download |
| TFLITE | w8a8 | Universal | QAIRT 2.45 | Download |

For more device-specific assets and performance metrics, visit DDRNet23-Slim on Qualcomm® AI Hub.
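Whichever asset you download, the segmentation head emits one score per class per pixel, and the predicted class map is the argmax over the class axis. The sketch below shows that post-processing step; the `(1, 19, H, W)` output layout is an assumption for illustration and is not documented in this card.

```python
import numpy as np

# Number of output classes, from the model stats in this card.
NUM_CLASSES = 19

def logits_to_mask(logits: np.ndarray) -> np.ndarray:
    """Convert (1, NUM_CLASSES, H, W) logits to an (H, W) class-index mask.

    The NCHW layout is assumed here; check the exported model's actual
    output signature before relying on it.
    """
    assert logits.shape[1] == NUM_CLASSES
    return logits.argmax(axis=1)[0].astype(np.uint8)

# Toy example with random scores at a small resolution.
rng = np.random.default_rng(0)
mask = logits_to_mask(rng.standard_normal((1, NUM_CLASSES, 64, 128)))
print(mask.shape)  # (64, 128)
```

Each value in `mask` is a class index in `[0, 18]`, which can then be mapped to a color palette or label names for visualization.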

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See our repository for DDRNet23-Slim on GitHub for usage instructions.

Model Details

Model Type: Semantic segmentation

Model Stats:

  • Model checkpoint: DDRNet23s_imagenet.pth
  • Inference latency: Real time
  • Input resolution: 2048x1024
  • Number of output classes: 19
  • Number of parameters: 6.13M
  • Model size (float): 21.7 MB
  • Model size (w8a8): 6.11 MB

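The stats above imply an input pipeline: frames must be brought to the 2048x1024 input resolution and converted to a float tensor. The sketch below is a minimal, dependency-free version of that step; the width-by-height reading of "2048x1024", the NCHW layout, and the ImageNet mean/std normalization (suggested by the ImageNet-pretrained checkpoint) are all assumptions, since the card does not document the expected preprocessing.

```python
import numpy as np

# Target size from the model stats; width x height is assumed, matching
# the common Cityscapes convention.
WIDTH, HEIGHT = 2048, 1024
# ImageNet normalization is an assumption based on the ImageNet-pretrained
# checkpoint, not something this card specifies.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image: np.ndarray) -> np.ndarray:
    """(H, W, 3) uint8 image -> (1, 3, HEIGHT, WIDTH) float32 NCHW tensor."""
    h, w = image.shape[:2]
    # Nearest-neighbour resize via index arithmetic (no external deps).
    rows = np.arange(HEIGHT) * h // HEIGHT
    cols = np.arange(WIDTH) * w // WIDTH
    resized = image[rows][:, cols].astype(np.float32) / 255.0
    normalized = (resized - MEAN) / STD
    return normalized.transpose(2, 0, 1)[None]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # e.g. a dashcam frame
tensor = preprocess(frame)
print(tensor.shape)  # (1, 3, 1024, 2048)
```

In practice you would use your runtime's own resize (e.g. a proper bilinear resize) rather than nearest-neighbour indexing; the point here is the shape and normalization contract.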
Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| DDRNet23-Slim | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 10.47 | 31 - 256 | NPU |
| DDRNet23-Slim | ONNX | float | Snapdragon® X2 Elite | 10.888 | 22 - 22 | NPU |
| DDRNet23-Slim | ONNX | float | Snapdragon® X Elite | 28.119 | 24 - 24 | NPU |
| DDRNet23-Slim | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 19.79 | 32 - 308 | NPU |
| DDRNet23-Slim | ONNX | float | Qualcomm® QCS8550 (Proxy) | 28.67 | 24 - 28 | NPU |
| DDRNet23-Slim | ONNX | float | Qualcomm® QCS9075 | 39.311 | 24 - 51 | NPU |
| DDRNet23-Slim | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 13.629 | 5 - 203 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 8 Elite Gen 5 Mobile | 44.509 | 56 - 247 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® X2 Elite | 43.943 | 109 - 109 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® X Elite | 87.005 | 109 - 109 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 8 Gen 3 Mobile | 43.323 | 92 - 339 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCS6490 | 299.285 | 199 - 216 | CPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCS8550 (Proxy) | 57.833 | 81 - 92 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCS9075 | 63.814 | 87 - 89 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCM6690 | 266.633 | 201 - 209 | CPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 8 Elite For Galaxy Mobile | 42.032 | 83 - 270 | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 7 Gen 4 Mobile | 250.631 | 139 - 148 | CPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 10.391 | 12 - 249 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® X2 Elite | 11.73 | 24 - 24 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® X Elite | 33.791 | 24 - 24 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 22.104 | 24 - 298 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 98.382 | 24 - 220 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 32.726 | 24 - 26 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® SA8775P | 40.32 | 24 - 221 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS9075 | 53.286 | 24 - 52 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 66.995 | 6 - 280 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® SA7255P | 98.382 | 24 - 220 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® SA8295P | 43.449 | 24 - 228 | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 15.617 | 16 - 237 | NPU |
| DDRNet23-Slim | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 10.363 | 2 - 241 | NPU |
| DDRNet23-Slim | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 22.518 | 2 - 285 | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 98.289 | 0 - 202 | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 33.206 | 2 - 4 | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® SA8775P | 40.402 | 2 - 205 | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS9075 | 53.887 | 0 - 41 | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 67.023 | 3 - 287 | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® SA7255P | 98.289 | 0 - 202 | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® SA8295P | 43.484 | 2 - 215 | NPU |
| DDRNet23-Slim | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 15.381 | 2 - 225 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 8 Elite Gen 5 Mobile | 21.521 | 1 - 231 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 8 Gen 3 Mobile | 37.132 | 1 - 248 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS6490 | 193.066 | 10 - 78 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS8275 (Proxy) | 95.072 | 1 - 200 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS8550 (Proxy) | 48.93 | 1 - 3 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® SA8775P | 49.656 | 1 - 201 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS9075 | 51.53 | 0 - 15 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCM6690 | 212.826 | 10 - 228 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS8450 (Proxy) | 56.003 | 0 - 245 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® SA7255P | 95.072 | 1 - 200 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® SA8295P | 56.125 | 1 - 204 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 8 Elite For Galaxy Mobile | 68.889 | 1 - 221 | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 7 Gen 4 Mobile | 63.166 | 11 - 211 | NPU |
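A per-inference latency converts directly to a throughput budget as fps = 1000 / latency_ms. Using two figures from the table above:

```python
# Throughput follows directly from single-inference latency: fps = 1000 / ms.
# Figures taken from the performance table above (TFLITE on
# Snapdragon 8 Elite Gen 5 Mobile, float vs. w8a8).
latency_ms = {"tflite_float": 10.363, "tflite_w8a8": 21.521}

for name, ms in latency_ms.items():
    print(f"{name}: {1000.0 / ms:.1f} FPS")  # ~96.5 and ~46.5 FPS
```

Note that on several chipsets in this table the w8a8 variant reports higher latency than float; the quantized asset's main advantage here is its smaller size (6.11 MB vs. 21.7 MB), so benchmark both precisions on your target device before choosing one.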

License

  • The license for the original implementation of DDRNet23-Slim can be found here.
