ONNX to TRT (with DLA) Conversion Error

I got the error below when I tried to use trtexec (with TensorRT 8.0.1) on a Xavier board, using this command:
/usr/src/tensorrt/bin/trtexec --onnx=G1.onnx --fp16 --useDLACore=0 --saveEngine=G1xxx.trt --allowGPUFallback

Module_id 33 Severity 2 : NVMEDIA_DLA 684
Module_id 33 Severity 2 : Failed to bind input tensor. err : 0x00000b
Module_id 33 Severity 2 : NVMEDIA_DLA 2866
Module_id 33 Severity 2 : Failed to bind input tensor args. status: 0x000007

Is this a known issue with this version of TRT? (I have seen this error reported in other forum posts too.)

Edit: I can’t share the whole model, but it was a modified YOLOv5 model.
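
For reference, a GPU-only build of the same model (dropping the DLA flags) can help confirm whether the error is specific to the DLA path; the output file name here is just an example:

# same build without DLA, as a diagnostic comparison (example output name)
/usr/src/tensorrt/bin/trtexec --onnx=G1.onnx --fp16 --saveEngine=G1_gpu.trt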

Hi,

Could you add the --verbose flag and share the complete log with us?
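
For example, the same command with --verbose added and the output captured to a file (the log file name is just an example):

# rerun the conversion with verbose logging and save the full output (example log name)
/usr/src/tensorrt/bin/trtexec --onnx=G1.onnx --fp16 --useDLACore=0 --allowGPUFallback --saveEngine=G1xxx.trt --verbose 2>&1 | tee trtexec_verbose.log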

Also, there are some newer TensorRT releases available.
We recommend upgrading TensorRT to the latest version first, e.g. TensorRT 8.5 from JetPack 5.1.
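
If you are not sure which version is currently installed on the board, you can check with the standard Jetson commands, for example:

# list the installed TensorRT packages (Debian-based JetPack install assumed)
dpkg -l | grep -i tensorrt
# show the L4T / JetPack release on the board
cat /etc/nv_tegra_release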

Thanks.

I’ve attached the verbose log below.

We cannot switch to a newer TRT version at the moment due to other system integration considerations.
TRTLog.txt (161.5 KB)

Hi,

This is a known issue and has been fixed in newer TensorRT releases.
Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.

@karmakarabhigyan
Also check out the DLA GitHub page for samples and resources: Deep-Learning-Accelerator-SW (recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications).

We have a FAQ page that addresses some common questions that we see developers run into: Deep-Learning-Accelerator-SW/FAQ