Failed to bind input tensor. err and Failed to bind input tensor args. status

Hi all,

I tried to build a TensorRT engine from an ONNX model using trtexec. The ONNX models are ByteTrack tracking models with different backbone networks, such as yolox_x and yolox_s. However, when I built the ByteTrack model with yolox_s (linked below) with the command trtexec --onnx=bytetrack_s.onnx --useDLACore=0 --fp16 --allowGPUFallback, I ran into the following error.

Module_id 33 Severity 2 : NVMEDIA_DLA 684
Module_id 33 Severity 2 : Failed to bind input tensor. err : 0x00000b
Module_id 33 Severity 2 : NVMEDIA_DLA 2866
Module_id 33 Severity 2 : Failed to bind input tensor args. status:  0x000007
[02/08/2022-12:25:03] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 942, GPU 6818 (MiB)
[02/08/2022-12:25:03] [E] Error[1]: [nvdlaUtils.cpp::submit::198] Error Code 1: DLA (Failure to submit program to DLA engine.)
[02/08/2022-12:25:03] [E] Error[2]: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)
[02/08/2022-12:25:03] [E] Engine could not be created from network
[02/08/2022-12:25:03] [E] Building engine failed
[02/08/2022-12:25:03] [E] Failed to create engine from model.
[02/08/2022-12:25:03] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8203] # /storage/rekor_home/wma_projects/trtexec/trtexec --onnx=bytetrack_s.onnx --useDLACore=0 --fp16 --allowGPUFallback

https://drive.google.com/file/d/1tMnnG1nS3MYNm5AGq3SRxV7zcgXKBZv3/view?usp=sharing

This error does not occur with the ByteTrack model based on yolox_x. The error message is not clear enough to point toward a solution. Could you provide some help or advice?

Thank you!

Hi,

Confirmed that we can also reproduce this issue in our environment.
Let us check this with our internal team and share more information with you later.

Thanks.

Thank you for the reply. Below is the corresponding source model from the YOLOX family. I converted the pre-trained yolox_s-based model to ONNX and then tried to build the TensorRT engine.

It can be found in the ByteTrack GitHub repository.
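For reference, this is roughly the export step I used. It is only a sketch: the script name, experiment file, and checkpoint paths below follow the ByteTrack repo's deployment instructions from memory and may need adjusting for your checkout.

python3 tools/export_onnx.py --output-name bytetrack_s.onnx -f exps/example/mot/yolox_s_mix_det.py -c pretrained/bytetrack_s_mot17.pth.tar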

Hi,

We have confirmed that this issue is fixed in our internal branch.

The fix will be tested and included in a future release.
For now, please use GPU mode as a temporary workaround.
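For example, a GPU-only build can be done by simply dropping the DLA-related flags from the original command (a sketch using the same model file as above):

trtexec --onnx=bytetrack_s.onnx --fp16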

Thanks.
