In recent DeepStream releases you no longer need an intermediate .etlt conversion: you can point the nvinfer spec file directly at the ONNX model with

onnx-file=xxx.onnx
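As a minimal sketch, a nvinfer config using this key might look like the following. The file paths and network parameters here are placeholders, not values from your model; adjust them to match your setup.

```ini
[property]
gpu-id=0
# Point nvinfer at the exported ONNX model; TensorRT builds the engine on first run.
onnx-file=model.onnx
# Optional: cache the generated engine to avoid rebuilding on every launch.
model-engine-file=model.onnx_b1_gpu0_fp16.engine
# Labels and batch size are model-specific placeholders.
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

On the first run TensorRT serializes the engine to the path given by model-engine-file (if writable), so subsequent runs start faster.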