Inference error with engine created in DeepStream

Hello!

I’m trying to run inference with TensorRT using an engine created by DeepStream. The engine deserializes without any errors, but when I run inference the following error occurs:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::enqueueV2::546, condition: !mEngine.hasImplicitBatchDimension()

The engine was created and used with DeepStream and TensorRT on the same Jetson device.
I’m using the PeopleNet model; it was retrained and pruned with TLT and works fine in DeepStream.
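
The failing call is the enqueueV2 path; the relevant part of my script looks roughly like this (engine path and buffer handling simplified):

    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # creates the CUDA context

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)

    # Deserializing the DeepStream-generated engine works without errors
    with open("peoplenet.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    context = engine.create_execution_context()
    stream = cuda.Stream()

    # One buffer per binding; the binding shapes do not include the batch dim
    bindings, host_bufs, dev_bufs = [], [], []
    for name in engine:
        size = trt.volume(engine.get_binding_shape(name)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(name))
        host = cuda.pagelocked_empty(size, dtype)
        dev = cuda.mem_alloc(host.nbytes)
        host_bufs.append(host)
        dev_bufs.append(dev)
        bindings.append(int(dev))

    # This call raises the error above (execute_async_v2 maps to enqueueV2)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    stream.synchronize()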

Python code is attached as a .txt file.

System: Jetson AGX Xavier
Jetpack: 4.4 [L4T 32.4.3]
TensorRT: 7.1.3
Python: 3.6.9
CUDA: 10.2
cuDNN: 8.0.0.180
Deepstream: 5.0

inference.txt (2.5 KB)

Hi @EAKonov,
Please check the link below.

Alternatively, you can try running your engine through trtexec with a --verbose log (for example, trtexec --loadEngine=<path_to_your_engine> --verbose) to get more detail about how the engine was built.
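
Also, the condition that fails (!mEngine.hasImplicitBatchDimension()) generally means the engine was built in implicit batch mode, which is the usual case for engines generated by DeepStream/TLT. enqueueV2 (execute_async_v2 in the Python API) only accepts explicit-batch engines; with an implicit-batch engine you would call execute_async/execute with a batch_size argument instead. A minimal sketch, reusing engine, context, bindings and stream from the snippet in your first post:

    # Implicit-batch engines must go through enqueue(), i.e. execute_async()
    # in Python; enqueueV2()/execute_async_v2() only works for explicit-batch engines.
    if engine.has_implicit_batch_dimension:
        context.execute_async(batch_size=1, bindings=bindings,
                              stream_handle=stream.handle)
    else:
        context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    stream.synchronize()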


Thanks!

Thank you for the answer! I have another question: can I create an engine from an .etlt model in TensorRT without TLT or DeepStream?

Hi @EAKonov,
I’m not completely sure about this, but I hope this link helps you.

Thanks!