• Hardware Platform (Jetson / GPU)
Jetson Xavier
• DeepStream Version
6.1.1
• JetPack Version (valid for Jetson only)
5.0.2
• TensorRT Version
8.4.1.5
• Issue Type (questions, new requirements, bugs)
Question
I have built a custom pipeline for use with a YOLO model. Some of it is included below:
... ! videoconvert ! video/x-raw,format=RGBA
    ! videoscale ! video/x-raw,width=640,height=640,pixel-aspect-ratio=(fraction)1/1
    ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! mux.sink_0
    nvstreammux batch-size=1 name=mux live-source=True width=640 height=640
    ! nvinfer config-file-path=infer.txt unique-id=1
    ! nvvideoconvert ! video/x-raw,format=RGBA,width=640,height=640
    ! customplugin name=custom width=640 height=640 ! video/x-raw,format=RGBA,width=728,height=544
    ! videoconvert name=pull_output ! video/x-raw,format=RGBA,width=728,height=544
    ! queue ! ...
This pipeline scales my rectangular (728x544) input to a letterboxed square (640x640), runs inference with my YOLO model, then feeds the result into my customplugin, which, among other things, resizes the frames back to their original dimensions. On entry to customplugin the frames should therefore be 640x640, and from its output onward they should be 728x544 for the rest of the pipeline.
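For illustration, the caps handling in customplugin is structured roughly like the sketch below, assuming a Python GstBase.BaseTransform; the class name, metadata strings, and hard-coded caps are simplified stand-ins, not my exact code:

```python
# Simplified sketch of customplugin's caps negotiation: a BaseTransform
# whose sink side is fixed at 640x640 and whose src side is 728x544.
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstBase", "1.0")
from gi.repository import GObject, Gst, GstBase

Gst.init(None)

SINK_CAPS = Gst.Caps.from_string("video/x-raw,format=RGBA,width=640,height=640")
SRC_CAPS = Gst.Caps.from_string("video/x-raw,format=RGBA,width=728,height=544")

class CustomPlugin(GstBase.BaseTransform):
    __gstmetadata__ = ("customplugin", "Filter/Video",
                       "Resize frames back to source dimensions", "me")
    __gsttemplates__ = (
        Gst.PadTemplate.new("sink", Gst.PadDirection.SINK,
                            Gst.PadPresence.ALWAYS, SINK_CAPS),
        Gst.PadTemplate.new("src", Gst.PadDirection.SRC,
                            Gst.PadPresence.ALWAYS, SRC_CAPS),
    )

    def do_transform_caps(self, direction, caps, filter_):
        # Map sink-side 640x640 caps to src-side 728x544 and vice versa,
        # so negotiation can settle on different sizes on each side.
        result = (SRC_CAPS if direction == Gst.PadDirection.SINK
                  else SINK_CAPS).copy()
        if filter_:
            result = result.intersect(filter_)
        return result

GObject.type_register(CustomPlugin)
Gst.Element.register(None, "customplugin", Gst.Rank.NONE, CustomPlugin)
```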
I am having trouble getting DeepStream to run this properly. With the setup above, the frames have already been scaled back to 728x544 by the time they reach the code I wrote inside the custom plugin. Switching the caps filter on the customplugin to 640x640 instead yields the following exception at the subsequent element:
gi.repository.GLib.Error: gst_parse_error: could not link pull_output to queue0, pull_output can't handle caps video/x-raw, format=(string)RGBA, width=(int)728, height=(int)544 (3)
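For context, the pipeline description is handed to Gst.parse_launch, which is where the exception above is raised. A stripped-down sketch of the launch code (the real application passes the full description shown earlier; here it is read from the command line for brevity):

```python
# Minimal launcher sketch: Gst.parse_launch raises GLib.Error
# (gst_parse_error) when two elements in the description cannot be linked.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

try:
    # sys.argv[1] holds the pipeline description string.
    pipeline = Gst.parse_launch(sys.argv[1])
except GLib.Error as err:
    # "could not link pull_output to queue0 ..." surfaces here.
    print(err.message)
    sys.exit(1)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```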
I am not sure why the videoconvert element would be unable to handle these caps.
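For what it's worth, here is a quick sketch of how I would check what videoconvert advertises on its pad templates, and whether the caps from the error message intersect with its src template:

```python
# Sketch: inspect videoconvert's pad templates and test whether the caps
# from the error message intersect with its src template.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

conv = Gst.ElementFactory.make("videoconvert", None)
for tmpl in conv.get_pad_template_list():
    print(tmpl.name_template, "->", tmpl.get_caps().to_string())

target = Gst.Caps.from_string("video/x-raw,format=RGBA,width=728,height=544")
src_caps = conv.get_pad_template("src").get_caps()
print("src template accepts 728x544 RGBA:",
      not src_caps.intersect(target).is_empty())
```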
Let me know what additional information is necessary to debug this. Thanks in advance.