Crop multi-stream with different size using nvvidconv

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.0
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only): 470.82
• Issue Type( questions, new requirements, bugs): question

I am working with multiple RTSP streams in Python. Inference is done on a specific area of the frame, and to do that I'm using the src-crop property of the nvvidconv plugin, which works perfectly to crop all streams to the same coordinates.
But what I need is to set a specific crop for each stream, because the resolution and my region of interest are different for each camera. So I tried to create an nvvidconv inside the loop where I read each stream and configure and link each source_bin with the sinkpad, but I can't connect the src with nvvidconv; I get this error: TypeError: sinkpad argument: Expected Gst.Pad, but I got gi.Gstnvvideoconvert.

How can I bind my nvvidconv with a stream-specific crop before the streammux output, or right after the source_bin is read?


nvvideoconvert does not support such a function. You may try the nvdspreprocess plugin to set an ROI for each stream. Gst-nvdspreprocess (Alpha) — DeepStream 6.0 Release documentation
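For reference, nvdspreprocess defines per-stream ROIs in [group-N] sections of its config file. A minimal sketch (property names from the DeepStream 6.0 nvdspreprocess documentation; the coordinate values here are placeholders, given as left;top;width;height per ROI):

```
# Each group can target different source ids with its own ROI list
[group-0]
src-ids=0
process-on-roi=1
roi-params-src-0=200;100;640;368

[group-1]
src-ids=1
process-on-roi=1
roi-params-src-1=0;0;1280;720
```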

Thank you so much @Fiona.Chen, that was exactly what I needed. I just implemented it in Python and it works perfectly!

I just have one question: in the configuration file (config_preprocess.txt) some dimensions of the network input are defined:
network-input-shape = 7,3,368,640
processing-width = 640
processing-height = 368

I already read the description in the documentation (Gst-nvdspreprocess (Alpha) — DeepStream 6.0 Release documentation), but I still have a doubt: should network-input-shape match the input size of the network configured in my primary GIE, in this case YOLO with an input resolution of width=608, height=608?

When you use the gst-nvdspreprocess plugin, "input-tensor-meta=1" should be set in the gst-nvinfer config file to skip the default pre-processing in gst-nvinfer. Then the tensor data from gst-nvdspreprocess will be fed into gst-nvinfer through the correct path.

That setting helped me a lot. Without that line, nvdspreprocess was drawing the green box but still produced predictions outside the ROI; with input-tensor-meta=1 (set from Python code) it was solved, but I also had to rename the tensor from tensor-name=input_1 to tensor-name=data in config_preprocess.txt. (I write it down in case it helps someone else.)
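To summarize, the two changes described above would look roughly like this in the files (a sketch based on this thread; the file names come from the posts above, and the same input-tensor-meta switch can alternatively be set on the nvinfer element from Python code, as the poster did):

```
# config_infer_primary_yoloV3.txt, [property] section:
# skip nvinfer's own pre-processing and consume tensors from nvdspreprocess
input-tensor-meta=1

# config_preprocess.txt, [property] section:
# tensor name must match the model's actual input layer name
tensor-name=data
```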
Thanks for your help.

I still have a problem that I haven't been able to fix. With one stream it works perfectly and correctly recognizes objects within the ROI, but when I try to use it with multiple streams I get the following error:

features = <Gst.CapsFeatures object at 0x7f0960be0340 (GstCapsFeatures at 0x7f06dc013da0)>
Custom Lib: Cuda Stream Synchronization failed
Cuda failure: status = 700 in cuResData at line 348
cuGraphicsMapResources failed with error (700) gst_eglglessink_cuda_buffer_copy
Cuda failure: status = 700 in cuResData at line 348
CustomTensorPreparation failed
free (): double free detected in tcache 2
Segment violation (generated core)

I was looking at the C++ example /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-preprocess-test/deepstream_preprocess_test.cpp, between lines 251 and 351, and there is an extra loop that iterates over the ROIs to get the final frames:

user_meta = (NvDsUserMeta *)(l_user_meta->data);
if (user_meta->base_meta.meta_type == NVDS_PREPROCESS_BATCH_META) {
  GstNvDsPreProcessBatchMeta *preprocess_batchmeta =
      (GstNvDsPreProcessBatchMeta *)(user_meta->user_meta_data);
  guint roi_cnt = 0;
  for (auto &roi_meta : preprocess_batchmeta->roi_vector) {
    NvDsMetaList *l_user = NULL;
    for (l_user = roi_meta.roi_user_meta_list; l_user != NULL;
         l_user = l_user->next) {
      /* ... per-ROI user meta handling ... */
    }
  }
}

but in Python I iterate over the metadata as shown in the multi-stream example:

batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        ...

Is it necessary to make a change in this part to fix the error I mentioned, or is it due to a different reason? I am somewhat confused by this; if you have an example in Python using nvdspreprocess, I would really appreciate it if you could share it.


Have you tried the C++ sample without changing anything? Does the C++ deepstream-preprocess-test sample work on your platform?

Can you share the nvinfer config file you used for your YOLO model?

Yes. The width and height should be the same as your model's dimensions. "width=608, height=608" is correct.
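The consistency rule confirmed above can be expressed as a small check. This is a hypothetical helper (not part of DeepStream) that verifies the nvdspreprocess settings agree with the model input size, given that network-input-shape is NCHW (batch, channels, height, width):

```python
def check_preprocess_config(network_input_shape, processing_width, processing_height):
    """Hypothetical helper: return True when the nvdspreprocess
    network-input-shape (NCHW) matches processing-width/height."""
    batch, channels, height, width = network_input_shape
    return width == processing_width and height == processing_height

# For a 608x608 YOLO model, all three values should line up:
print(check_preprocess_config([7, 3, 608, 608], 608, 608))   # True

# The 640x368 values quoted earlier would not match a 608x608 network:
print(check_preprocess_config([7, 3, 368, 640], 608, 608))   # False
```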


Yes, I had already tried the deepstream-preprocess-test (C++) example on my platform and it worked fine.

But what I hadn't tried was using my config_preprocess.txt and config_infer_primary_yoloV3.txt configuration files with your ./deepstream-preprocess-test example, and to my surprise, C++ threw the same error as Python.

So I looked closely at the differences in config_preprocess.txt. At one point I had changed the value of tensor-data-type to INT8 (option 3) instead of the default FP32 (option 0), since in my PGIE the network is configured with network-mode=1 (INT8) to gain speed, and I thought both should be the same. When I changed the value in config_preprocess.txt back to the default, tensor-data-type=0 (FP32), everything worked perfectly.
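The fix described above as a config fragment (a sketch; the option numbers here are the ones quoted in this thread, so check them against the nvdspreprocess documentation for your DeepStream version):

```
# config_preprocess.txt, [property] section:
# keep the preprocess output tensor in FP32 even when nvinfer
# runs the engine in INT8 (network-mode=1); the INT8 quantization
# is handled inside TensorRT, not in the input tensor format
tensor-data-type=0
```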

Thank you all so much for your help.