I’m trying to run inference with TensorRT using an engine created in DeepStream. The engine deserializes without errors, but when I run inference the following error occurs:
[TensorRT] ERROR: Parameter check failed at: engine.cpp::enqueueV2::546, condition: !mEngine.hasImplicitBatchDimension()
The engine was created and used with DeepStream and TensorRT on the same Jetson device.
I’m using the PeopleNet model; it was retrained and pruned with TLT and works fine in DeepStream.
The Python code is attached as a .txt file.
System: Jetson AGX Xavier
Jetpack: 4.4 [L4T 32.4.3]
inference.txt (2.5 KB)
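For reference, the error condition `!mEngine.hasImplicitBatchDimension()` suggests the engine was built in implicit-batch mode (which I believe is the TLT/DeepStream default on TensorRT 7), while `enqueueV2` / `execute_async_v2` only accepts explicit-batch engines. A minimal sketch of the dispatch I'd expect to matter, assuming the standard Python binding names (`has_implicit_batch_dimension`, `execute_async`, `execute_async_v2`); the helper itself is plain Python, since the real context object comes from the deserialized engine:

```python
def pick_execute_api(has_implicit_batch: bool) -> str:
    """Return the IExecutionContext method name matching the engine's
    batch mode. Implicit-batch engines need execute_async(batch_size=...);
    explicit-batch engines need execute_async_v2(). Calling the wrong one
    raises the enqueueV2 parameter-check error shown above."""
    return "execute_async" if has_implicit_batch else "execute_async_v2"

# With a real deserialized engine, the dispatch would look roughly like:
#   if engine.has_implicit_batch_dimension:
#       context.execute_async(batch_size=1, bindings=bindings,
#                             stream_handle=stream.handle)
#   else:
#       context.execute_async_v2(bindings=bindings,
#                                stream_handle=stream.handle)
```

So if the attached script calls `execute_async_v2` (the Python counterpart of `enqueueV2`), switching to `execute_async` with an explicit `batch_size` may be the fix for this engine.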