NVMM not allowed in capsfilter in deepstream_test_1.py

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type( questions, new requirements, bugs)
In deepstream_test_1.py, the GStreamer graph is nvstreammux -> nvinfer. I modified it to nvstreammux -> nvvideoconvert -> capsfilter -> nvinfer, but the NVMM constraint in the capsfilter causes a failure.

nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
if not nvvidconv:
    sys.stderr.write(" Unable to create nvvidconv \n")
# Crop the zone at left=448, top=273, size 243x172 (left:top:width:height)
nvvidconv.set_property('src-crop', '448:273:243:172')

nvcaps0 = Gst.ElementFactory.make('capsfilter', 'caps0')
if not nvcaps0:
    sys.stderr.write(" Unable to create caps0 \n")
# Constrain the converter output to 224x224 in NVMM (GPU) memory
nvcaps0.set_property('caps', Gst.Caps.from_string('video/x-raw(memory:NVMM), width=224, height=224'))

I am doing a zone crop plus a caps constraint. With the NVMM caps I get a "streaming stopped, reason not-negotiated (-4)" error. When I remove the video/x-raw(memory:NVMM) portion the app works fine, even though the output of nvstreammux appears to be NVMM.

With NVMM removed things work, but a question remains. The output of nvvidconv is 243x172, while the capsfilter constrains it to 224x224, and a capsfilter is not supposed to modify the image in any way. How does this constraint actually work? Is this discrepancy the root cause of NVMM not being allowed?

It is OK to add ‘src-crop’ for nvvideoconvert and “NVMM” for the capsfilter; the attached Python script works. deepstream_test_1.py.txt (10.7 KB)

You don’t need to remove “NVMM”. As to your pipeline, capsfilter is to define the output of nvvideoconvert, so nvvideoconvert will do the conversion.
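To make the crop-then-scale behaviour concrete, here is a plain-Python sketch (no GStreamer required; the numbers are taken from the snippet earlier in this thread) of what nvvideoconvert ends up computing when the downstream caps request 224x224:

```python
# src-crop is left:top:width:height; the downstream capsfilter sets the
# output size that nvvideoconvert must scale the cropped zone to.
crop = "448:273:243:172"
left, top, width, height = (int(v) for v in crop.split(":"))

out_w, out_h = 224, 224  # from the capsfilter caps

# nvvideoconvert crops the 243x172 zone, then scales it to 224x224,
# so the aspect ratio changes unless the caps happen to preserve it.
scale_x = out_w / width   # ~0.92
scale_y = out_h / height  # ~1.30
print(f"crop {width}x{height} -> scale to {out_w}x{out_h} "
      f"(x{scale_x:.2f}, x{scale_y:.2f})")
```

So there is no discrepancy: the capsfilter does not touch the buffers itself, it only forces nvvideoconvert to negotiate that output format.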

Thanks for the reply.

What I want is to do zone cropping and resizing before nvinfer, because my NN takes only the cropped image as input. To achieve this I added nvvideoconvert (for the zone crop) and the capsfilter (which specifies the resizing) just before nvinfer, and ran into the problem outlined in the first message of this thread.

I was under the assumption that as soon as nvstreammux batches frames across streams, the video data is copied into NVMM buffers, which lets nvinfer and any subsequent elements operate on the data faster since it is already in GPU memory.

It is good to know that the capsfilter defines the conversion to be done by nvvideoconvert.

Please check the attached deepstream_test_1.py.txt (nvidia.com). I’ve added nvvideoconvert with “src-crop” for you.

Thanks Fiona. This solved my problem.

Just for the record, I had a mistake in my code where I was linking the elements as streammux -> caps -> nvvideoconvert -> nvinfer. Instead I should have done streammux -> nvvideoconvert -> caps -> nvinfer.
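The corrected order can also be written out as a gst-launch style description. The sketch below (plain Python; element properties copied from this thread, the nvinfer config file name assumed from the stock deepstream_test_1 sample) just assembles and prints the segment that goes after the streammux:

```python
# Sketch: the fixed pipeline segment, with nvvideoconvert (crop) placed
# BEFORE the capsfilter that defines its output, then nvinfer.
segment = " ! ".join([
    "nvvideoconvert src-crop=448:273:243:172",
    'capsfilter caps="video/x-raw(memory:NVMM), width=224, height=224"',
    "nvinfer config-file-path=dstest1_pgie_config.txt",  # config name assumed
])
print(segment)
```

With the caps placed before the converter (as in my buggy version), nothing upstream can satisfy the 224x224 NVMM constraint, hence the not-negotiated error.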