Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU (RTX 2080 Ti)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only) 450.102.04
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
batch-size=9 in the [streammux] group of config_file.txt, with 9 RTSP sources.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Hi,
I have a few observations to share about my custom RetinaFace model with DeepStream 5.0 and 9 RTSP video sources.
- I used trtexec for the ONNX-to-.engine conversion with the "--explicitBatch" flag, because loading the ONNX directly into DeepStream gave me the error "EXPLICIT_BATCH (!_importer_ctx.network()->hasImplicitBatchDimension())".
I tested with "batch-size=5" in the [streammux] group and "batch-size=1" in the [pgie] group, since the ONNX model was exported with batch-size=1.
The model ran fine: all cameras were set to 20 fps and DeepStream's measured performance was close to 18 fps.
Is it fine to set the batch sizes this way? (The exact trtexec command and config groups are sketched below.)
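For reference, the conversion command for the batch-1 engine was essentially the following (file names are placeholders for my actual paths):

```
trtexec --explicitBatch \
        --onnx=retinaface.onnx \
        --saveEngine=retinaface_b1.engine
```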
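And the relevant groups in config_file.txt, trimmed down to just the batch settings:

```
[streammux]
batch-size=5        # frames muxed into one batch before inference

[pgie]
batch-size=1        # matches the batch size the engine was built with
```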
- I found that the "batch-size" parameter in the [streammux] group needs to be set according to the number of sources. So I set it to 9, but then DeepStream hangs at startup and the performance readout shows 0 fps for all cameras.
If I instead set "batch-size=1", it runs, but only at about 6 fps.
Not sure why. (The failing configuration is sketched below.)
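For clarity, the failing setup looks roughly like this (trimmed; I am assuming deepstream-app-style [sourceN] groups here, each of the 9 cameras has its own group, and the RTSP addresses are placeholders):

```
[source0]
enable=1
type=4                      # 4 = RTSP source
uri=rtsp://<camera-0-address>
# ... [source1] through [source8] differ only in the URI

[streammux]
batch-size=9                # one slot per source; this value hangs the pipeline
```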
- I converted the ONNX model with batch-size=9 and ran trtexec again to build the engine file, like:
"trtexec --batch=9 --onnx=onnx-model --saveEngine=output.engine"
Then I tried "batch-size=9" in both the [pgie] and [streammux] groups, but this time there was an error:
WARNING: nvdsinfer_backend.cpp:162 Backend context bufferIdx(0) request dims:1x3x640x640 is out of range, [min: 10x3x640x640, max: 10x3x640x640]
ERROR: nvdsinfer_backend.cpp:425 Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 1x3x640x640 is not supported
ERROR: nvdsinfer_context_impl.cpp:1532 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:05.307619290 18160 0x559e34047f70 WARN nvinfer gstnvinfer.cpp:1216:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
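Possibly related: the warning says the engine's min/max dims are 10x3x640x640 while the request is 1x3x640x640, so maybe the engine was not actually built for the batch I intended. As far as I understand, "--batch" only applies to implicit-batch models, so for an explicit-batch ONNX I would have expected to pin the input shape instead, something like the following ("input" is a placeholder for the model's real input tensor name):

```
trtexec --explicitBatch \
        --onnx=retinaface_b9.onnx \
        --shapes=input:9x3x640x640 \
        --saveEngine=retinaface_b9.engine
```

Is that the right way to build the engine for batch-size=9?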
Any suggestions on the observations above? I could fall back to batch-size=5, but that feels like a workaround rather than an actual fix.
I have seen many posts about batch-size, but none of them cleared things up for my case.
Thanks.