• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only) N/A
• Issue Type( questions, new requirements, bugs) Question/Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) See below for details
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) N/A
I have an action recognition model performing inference on ROIs, and it works well with pre-configured ROIs. In my environment, however, the ROIs change location occasionally, which forces me to restart the app with a different set of ROIs configured. To make the app more robust, I have been looking into making the ROIs dynamic.
To achieve this I’ve added a detector running at a high interval, 18k frames (roughly 10 minutes at 30 FPS), and a tracker to keep the boxes alive while the detector is not running. This works for a few intervals, but unfortunately, when the detector runs, it can sometimes make the nvdspreprocess element fail with a generic CUDA error:
/.../sequence_image_process.cpp:312, [ERROR: CUSTOM_LIB] Failed to copy ready sequence to batched buffer, cuda err_no: 1, err_str: cudaErrorInvalidValue
/.../sequence_image_process.cpp:654, [ERROR: CUSTOM_LIB] collect ready buffers failed, seq_process error: 7
I suspect this is because the sequence_image_process library does not handle ROIs that change over time. Is this a known issue, or something that can be addressed?
My simplified pipeline looks like this:
N sources > nvstreammux
> nvinfer
> nvtracker
> nvdspreprocess
> nvinfer
> sink
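For reference, a gst-launch-1.0 sketch of the same topology (single source, properties trimmed; the config file names are placeholders, not my actual files):

```shell
# Hypothetical single-source sketch of the pipeline above.
# Config file names are placeholders; most properties are omitted for brevity.
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/source.mp4 ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=detector_config.txt ! \
  nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
  nvdspreprocess config-file=preprocess_config.txt ! \
  nvinfer config-file-path=action_config.txt input-tensor-meta=1 ! \
  fakesink
```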
Relevant configurations for the elements above:
- primary detector
[property]
interval=749
gie-unique-id=2
process-mode=1
network-type=0
# (...)
- tracker is NvDCF
- preprocess configuration
[property]
enable=1
process-on-frame=0
target-unique-ids=5
processing-width=224
processing-height=224
network-input-shape=32;3;16;224;224
# (...)
[user-configs]
subsample=19
stride=8
# (...)
- action recognition configuration
[property]
gie-unique-id=5
process-mode=2
input-tensor-from-meta=1
network-type=100
# (...)
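In case it helps with triage, this is how I understand the temporal footprint of one sequence per ROI, assuming subsample is the number of frames skipped between two sampled frames and T=16 comes from network-input-shape=32;3;16;224;224 (N;C;T;H;W). These semantics are my reading of the sample custom lib, so please correct me if I am wrong:

```python
# My assumed interpretation of the preprocess user-configs above.
SEQ_LEN = 16    # T from network-input-shape=32;3;16;224;224 (N;C;T;H;W)
SUBSAMPLE = 19  # frames skipped between two sampled frames (assumption)

# One sequence samples source frames 0, 20, 40, ..., (SEQ_LEN-1) * 20,
# so a single ready sequence spans this many source frames per ROI:
span = (SEQ_LEN - 1) * (SUBSAMPLE + 1) + 1
print(span)  # 301
```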
Unfortunately I cannot provide the full source code or input data, but in my testing the failure is fairly reproducible after around two detection intervals (with the configuration above, roughly 1500 frames).
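For a timing reference, here is the simple arithmetic behind the "roughly 1500 frames" figure, assuming nvinfer's interval property counts the frames skipped between two inference runs (so the detector fires every interval + 1 frames):

```python
FPS = 30
INTERVAL = 749  # nvinfer 'interval': frames skipped between two inferences

frames_per_detection = INTERVAL + 1       # detector fires every 750th frame
failure_frame = 2 * frames_per_detection  # ~2 intervals -> around frame 1500
print(failure_frame, failure_frame / FPS)  # 1500 frames, 50.0 seconds
```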