Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson Nano
• DeepStream Version
5.0.1
• JetPack Version (valid for Jetson only)
JetPack 4.4
• TensorRT Version
7.1.3
I created a TensorRT engine from an ONNX file in DeepStream, and the following info was printed:
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT kFLOAT input.1 3x320x320 min: 1x3x320x320 opt: 4x3x320x320 Max: 4x3x320x320
1 OUTPUT kFLOAT 869 8400x5 min: 0 opt: 0 Max: 0
Therefore I think the dynamic input has been set up in DeepStream successfully. However, when I started my DeepStream app, it failed with this error:
ERROR: [TRT]: Reshape_13: reshaping failed for tensor: 441
ERROR: [TRT]: shapeMachine.cpp (160) - Shape Error in executeReshape: reshape would change volume
ERROR: [TRT]: Instruction: RESHAPE_ZERO_IS_PLACEHOLDER{2 116 40 40} {4 2 58 40 40}
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:04:33.074389733 4391 0x5569176c50 WARN nvinfer gstnvinfer.cpp:1251:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1251): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
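For context on the Shape Error line above: the two shapes TensorRT prints do not contain the same number of elements, which is exactly why the reshape is rejected. A quick sanity check in plain Python, with the shapes copied from the log (my reading that the leading 4 in the target shape is a batch size baked into the graph is an assumption):

```python
from math import prod

# Shapes taken from the TensorRT error message:
src = (2, 116, 40, 40)      # actual tensor shape at runtime
dst = (4, 2, 58, 40, 40)    # reshape target; the leading 4 appears hard-coded (max batch?)

print(prod(src))  # 371200
print(prod(dst))  # 742400 -> double the source volume, so "reshape would change volume"
```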
Within this DeepStream app config, I set batch-size=4 and num-sources=4 (camera mp4 sources). My guess is that, because of inference delay, only 3 camera sources had frames in the batch at that moment, so the engine failed to handle this partial batch.
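For reference, the relevant groups of my deepstream-app config are along these lines (an illustrative sketch, not the exact file; values other than batch-size=4 and num-sources=4 are placeholders):

```ini
[streammux]
batch-size=4
# If fewer than batch-size frames arrive within this timeout (microseconds),
# streammux pushes a partial batch downstream.
batched-push-timeout=40000

[source0]
enable=1
type=3
num-sources=4

[primary-gie]
enable=1
batch-size=4
```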
Suggestions and help needed. Thanks!