Converting an ONNX model into a TensorRT model

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
[1] DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[1] Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[1] DRIVE AGX Orin Developer Kit (not sure of its part number)
other

SDK Manager Version
2.1.0
[1] other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
Hi team, I want to modify the sample DNN plugin for my own MNIST model, which was trained in PyTorch, and then convert it into a TRT model using the TensorRT optimization tool.
But I came across the following errors.
Kindly take a look at my errors and help me accordingly!

Error String

Logs

Your model has dynamic input shapes. Export the ONNX with fixed shapes and then use the tool.
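For context: in an ONNX graph, a dynamic dimension is stored as a negative value (e.g. -1) or a symbolic name rather than a concrete size. A minimal, stdlib-only sketch of that check (the helper name is my own, not part of any NVIDIA tool):

```python
def has_dynamic_dims(shape):
    """Return True if any dimension in the shape is dynamic.

    In ONNX, a dynamic dimension shows up as a negative value (e.g. -1)
    or as a symbolic name (a string such as "batch").
    """
    return any(isinstance(d, str) or d < 0 for d in shape)

# A fixed shape such as 1x3x640x640 is fine for the optimization tool.
print(has_dynamic_dims([1, 3, 640, 640]))   # False
# A dynamic batch dimension would be rejected.
print(has_dynamic_dims([-1, 3, 640, 640]))  # True
```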

@ashwin.nanda Thanks for helping the community. @akshay.tupkar, could you please try it and see if it helps?

Thanks for posting this, I just had the same error.

A successful model conversion looks like this. Check the input and output shapes from my ONNX model (they are static; a negative value would imply the shapes are dynamic).


root@6.0.10.0-0009-build-linux-sdk:/usr/local/driveworks/tools/dnn# ./tensorRT_optimization --modelType=onnx  --onnxFile=./v8n-seg-boundary.onnx  --out=v8n_seg_dnn_loodable.bin 
[27-09-2025 16:34:03] DNNGenerator: Initializing TensorRT generation on model ./v8n-seg-boundary.onnx.
[27-09-2025 16:34:03] DNNGenerator: Input "images": 1x3x640x640
[27-09-2025 16:34:03] DNNGenerator: Output "output0": 1x37x8400
[27-09-2025 16:34:03] DNNGenerator: Output "output1": 1x32x160x160
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 0: 7.176960 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 1: 3.593888 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 2: 4.058112 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 3: 4.063232 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 4: 3.567616 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 5: 3.592832 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 6: 4.018176 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 7: 4.060384 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 8: 3.578848 ms.
[27-09-2025 16:35:23] GraphRecorder: CUDA graph disabled, skip recording
[27-09-2025 16:35:23] DNNValidator: Iteration 9: 3.585664 ms.
[27-09-2025 16:35:23] DNNValidator: Average over 10 runs is 4.129571 ms.
[27-09-2025 16:35:23] No validation files were provided. Validation skipped.
[27-09-2025 16:35:23] [27-09-2025 16:35:23] Releasing Driveworks SDK Context

Dear @ashwin.nanda
I had a .pt file of an MNIST model, which I converted into ONNX. Using trtexec, I then converted this ONNX file into a .engine file. Can I use this .engine file for inference?

Yes, you can, but with the TensorRT APIs in C++/Python, not in DriveWorks DNN, unless you want to bridge some custom TRT plugins. To use the DW DNN APIs, you need to run the tensorRT_optimization tool on the ONNX file as I showed above.
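For reference, deserializing a trtexec-produced .engine with the TensorRT Python API generally follows the pattern below. This is a hedged sketch, not DriveWorks code: the load_engine helper name is mine, and the import is guarded because the tensorrt package only exists on machines where TensorRT is installed.

```python
try:
    import tensorrt as trt  # TensorRT Python bindings; only present where TensorRT is installed
    HAVE_TRT = True
except ImportError:
    HAVE_TRT = False

def load_engine(path):
    """Deserialize a serialized TensorRT engine file (e.g. from trtexec --saveEngine)."""
    logger = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(logger)
    with open(path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())

# Usage (on a machine with TensorRT):
#   engine = load_engine("mnist.engine")
#   context = engine.create_execution_context()
```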

Dear @ashwin.nanda Thank you so much for this quick reply.
I want to try this .engine file which I created with trtexec. Could you please suggest a sample where I can load this .engine file and run inference?

Dear @akshay.tupkar, I see that you have an open topic where you are already testing the sample. If that sample is from the host's local TensorRT installation, then you can load the engine and test it. You may also need to handle the processing before and after inference.
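On the pre-processing point: for an MNIST model, the input usually has to be scaled and normalized the same way as during training before it is fed to the engine. A sketch, assuming the torchvision-style mean/std of 0.1307/0.3081 (your training pipeline may differ):

```python
import numpy as np

def preprocess_mnist(img):
    """Turn a 28x28 uint8 grayscale image into a 1x1x28x28 float32 tensor.

    Assumes the model was trained on inputs scaled to [0, 1] and
    normalized with mean 0.1307 / std 0.3081 (the values commonly used
    for MNIST); adjust to match your own training pipeline.
    """
    x = img.astype(np.float32) / 255.0
    x = (x - 0.1307) / 0.3081
    return x[None, None, :, :]  # add batch and channel dimensions
```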

Dear @ashwin.nanda
I hope the generated .bin file is valid.

Dear @akshay.tupkar ,
The log looks good. Please give the DW DNN APIs a try to load the model and perform inference.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.