Gst-nvdspreprocess: set ROI as input for pgie

I use Gst-nvdspreprocess to set an ROI as the input for the detector; however, objects outside the ROI are still being detected.

config
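For reference, a minimal nvdspreprocess config for per-ROI inference looks roughly like the sketch below. The key names follow the sample config_preprocess.txt shipped with DeepStream; the shape, tensor name, and ROI coordinates here are assumptions for illustration, not the poster's actual values:

```
[property]
enable=1
target-unique-ids=1
# batch;channels;height;width — must match the model's input binding
network-input-shape=1;3;368;640
processing-width=640
processing-height=368
network-color-format=0
tensor-data-type=0
# tensor-name is model-specific (assumed here)
tensor-name=input_1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0
process-on-roi=1
# left;top;width;height (example ROI, adjust to your stream)
roi-params-src-0=0;0;640;368
custom-input-transformation-function=CustomAsyncTransformation
```

Note that the pgie must also be told to consume the preprocessed tensor, e.g. `pgie.set_property("input-tensor-meta", True)` in a Python pipeline; otherwise nvinfer scales the full frame itself and objects outside the ROI are still detected.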

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

• Hardware Platform (GPU): RTX 3090
• DeepStream Version: 6.1
• TensorRT Version: 8.0.1
• CUDA Version: 11.4

I built the pipeline by referring to deepstream_preprocess_test.py. When I run it, I encounter this error:

ERROR: nvdsinfer_backend.cpp:302 Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 1x3x368x640 is not supported
ERROR: nvdsinfer_context_impl.cpp:1713 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:04.672214318 16826 0x7f64ca3ef400 WARN nvinfer gstnvinfer.cpp:1996:gst_nvinfer_process_tensor_input: error: Failed to queue input batch for inferencing
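The log says binding 0 of the engine cannot accept batch dims 1x3x368x640, i.e. the tensor produced by nvdspreprocess does not fit the engine's input binding. As a quick sanity check, you can compare the `network-input-shape` string from the preprocess config against the binding dims printed in the log. This helper is hypothetical (not part of DeepStream), just a sketch of the comparison:

```python
def shapes_compatible(network_input_shape: str, binding_dims: str) -> bool:
    """Compare nvdspreprocess's network-input-shape (e.g. "1;3;368;640")
    with a TensorRT binding shape from the log (e.g. "1x3x368x640").
    C/H/W must match exactly; the preprocess batch must not exceed
    the engine's (max) batch dimension."""
    pre = [int(v) for v in network_input_shape.split(";")]
    eng = [int(v) for v in binding_dims.lower().split("x")]
    return pre[1:] == eng[1:] and pre[0] <= eng[0]

print(shapes_compatible("1;3;368;640", "1x3x368x640"))  # True
print(shapes_compatible("8;3;368;640", "1x3x368x640"))  # False: batch exceeds engine's
```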

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

nvdspreprocess’s network-input-shape should be the same as the model’s input shape.
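Concretely, the error log above reports the engine's binding 0 as 1x3x368x640, so the matching line in the nvdspreprocess config would be:

```
network-input-shape=1;3;368;640
```

If a larger preprocess batch is wanted, the engine would typically need to be rebuilt with a dynamic (or larger maximum) batch dimension first.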

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.