Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only) 6.1
• TensorRT Version 10.3.0.26-1
• Issue Type (questions, new requirements, bugs) Question
I’m using pyds to write a DeepStream pipeline that does the following:
- Accepts video input from v4l2src (/dev/video4 in this case - generic webcam)
- Performs inference
- Annotates frames
- Sinks annotated frames to v4l2sink
This is running within a container on a Jetson (based on nvcr.io/nvidia/deepstream:7.1-triton-multiarch), but that hasn’t caused any issues as far as I’m aware. Network and device access are correctly configured, and all the DeepStream Python samples run just fine.
Pipeline Description
v4l2source
caps_v4l2src (optional)
vidconvsrc: recommended to permit more video formats through
nvvidconvsrc
caps_nvvidconvsrc: sets NVMM & NV12 format for pgie
streammux
pgie
nvvidconvosd
caps_nvvidconvsrc2: converts to RGBA for nvosd
nvosd
identity: drops the memory allocation so the buffer will be recreated with system memory (recommended here)
v4l2sink: sinks to the v4l2 device
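For reference, here is the same chain as a single parse-launch sketch. The streammux resolution and the pgie config path are placeholders; my actual values come from settings and config_abspath in the code further down.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# Sketch of the equivalent pipeline; 1280x720 and pgie_config.txt are placeholders
pipeline = Gst.parse_launch(
    "nvstreammux name=mux batch-size=1 width=1280 height=720 "
    "batched-push-timeout=4000000 ! "
    "nvinfer config-file-path=pgie_config.txt ! "
    "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! "
    "nvdsosd ! identity drop-allocation=true ! "
    "v4l2sink device=/dev/video7 sync=false "
    "v4l2src device=/dev/video4 ! video/x-raw,framerate=30/1 ! "
    "videoconvert ! nvvideoconvert ! "
    "video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0"
)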
Problem
I am able to get the pipeline working just fine with nv3dsink. I am also able to get a different pipeline working with v4l2sink to a v4l2loopback virtual device, as long as there are no DeepStream elements in the pipeline (only stock GStreamer), so I know the issue isn’t with my v4l2loopback config. However, when I combine DeepStream elements with v4l2sink, the following error occurs:
gst_pad_link_prepare: trying to link identity:src and v4l2-sink:sink
gst_pad_link_prepare: caps are incompatible
gst_pad_link_full: link between identity:src and v4l2-sink:sink failed: no common format
Since identity supports “ANY” caps, I assume it’s an upstream element causing the error. Given that this only occurs after the introduction of DeepStream elements, I suspected it might be a memory issue (NVMM), so I followed these instructions and added the identity element to drop the memory allocation from the buffer metadata before it reaches v4l2sink. This helped in one pipeline, but I haven’t been able to reproduce it with pgie & streammux included.
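To dig into the negotiation itself, one thing I can do is dump the pipeline graph, which annotates every link with the negotiated (or failed) caps. A minimal sketch, assuming GST_DEBUG_DUMP_DOT_DIR is exported before the process starts:

# Sketch: write a GraphViz .dot file of the pipeline, including per-link caps.
# GST_DEBUG_DUMP_DOT_DIR must be set in the environment before Gst.init().
Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "caps-failure")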
With memory compatibility theoretically solved, I took a look at the pixel format to see if that was the problem. I set up buffer probes at each element’s src pad, but the format seems to align with the supported caps of each element: pgie needs NV12, nvosd needs RGBA, and identity is fine with anything.
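The buffer_probe callback referenced in the code below isn’t anything fancy; a minimal sketch of what it does (print the pad’s negotiated format per buffer):

def buffer_probe(pad, info, name):
    # Report the negotiated pixel format on this pad.
    caps = pad.get_current_caps()
    if caps:
        structure = caps.get_structure(0)
        print(f"{name}: {structure.get_string('format')}")
    return Gst.PadProbeReturn.OK

With that probe attached, the per-element output is: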
source: YUY2
caps_v4l2src: YUY2
vidconvsrc: YUY2
nvvidconvsrc: NV12
caps_nvvidconvsrc: NV12
pgie: NV12
nvvidconvosd: RGBA
caps_nvvidconvsrc2: RGBA
streammux: NV12
nvosd: RGBA
identity: RGBA
(v4l2-sink missing because the link failed)
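A follow-up check that might narrow this down (a sketch): query the v4l2sink pad for the caps it is actually willing to accept, and compare that against the RGBA coming out of identity:

# Sketch: ask v4l2sink which caps it will accept on its sink pad
sink_pad = sink.get_static_pad("sink")
print(sink_pad.query_caps(None).to_string())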
Any idea what may be causing the pipeline to fail, or which steps I can take to troubleshoot further?
Code
import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

# Standard GStreamer initialization
Gst.init(None)
logger.info("Creating Pipeline \n ")
pipeline = Gst.Pipeline()
# Source element for reading from the v4l2 camera
logger.info("Creating Source \n ")
source = Gst.ElementFactory.make("v4l2src", "source")
source.set_property('device', settings.device)
caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))
logger.info("Creating Video Converter \n")
# videoconvert to make sure a superset of raw formats is supported
vidconvsrc = Gst.ElementFactory.make("videoconvert", "vidconvsrc")
# nvvideoconvert to convert incoming raw buffers to NVMM Mem (NvBufSurface API)
nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "nvvidconvsrc")
caps_nvvidconvsrc = Gst.ElementFactory.make("capsfilter", "caps_nvvidconvsrc")
caps_nvvidconvsrc.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM), format=NV12"))
# Create nvstreammux instance to form batches from one or more sources.
streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
streammux.set_property('width', settings.save_res_width)
streammux.set_property('height', settings.save_res_height)
streammux.set_property('batch-size', 1)
streammux.set_property('batched-push-timeout', 4000000)
# Use nvinfer to run inference on the camera's output;
# inference behaviour is set through the config file
pgie = Gst.ElementFactory.make("nvinfer", "pgie")
pgie.set_property('config-file-path', config_abspath)
# Use converter to convert from NV12 to RGBA as required by nvosd
nvvidconvosd = Gst.ElementFactory.make("nvvideoconvert", "nvvidconvosd")
caps_nvvidconvsrc2 = Gst.ElementFactory.make("capsfilter", "caps_nvvidconvsrc2")
caps_nvvidconvsrc2.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
# Create OSD to draw on the converted RGBA buffer
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
identity = Gst.ElementFactory.make("identity", "identity")
identity.set_property("drop-allocation", 1)
# Sink to virtual device
sink = Gst.ElementFactory.make("v4l2sink", "v4l2-sink")
sink.set_property("device", "/dev/video7")
# Set sync = false to avoid late frame drops at the display-sink
sink.set_property('sync', False)
logger.info("Adding elements to Pipeline \n")
pipeline.add(source)
pipeline.add(caps_v4l2src)
pipeline.add(vidconvsrc)
pipeline.add(nvvidconvsrc)
pipeline.add(caps_nvvidconvsrc)
pipeline.add(streammux)
pipeline.add(pgie)
pipeline.add(nvvidconvosd)
pipeline.add(caps_nvvidconvsrc2)
pipeline.add(nvosd)
pipeline.add(identity)
pipeline.add(sink)
logger.info(f"Playing cam {settings.device} \n")
# Printing pixel format of each element
source.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "source")
caps_v4l2src.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "caps_v4l2src")
vidconvsrc.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "vidconvsrc")
nvvidconvsrc.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "nvvidconvsrc")
caps_nvvidconvsrc.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "caps_nvvidconvsrc")
streammux.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "streammux")
pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "pgie")
nvvidconvosd.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "nvvidconvosd")
caps_nvvidconvsrc2.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "caps_nvvidconvsrc2")
nvosd.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "nvosd")
identity.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "identity")
sink.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, buffer_probe, "sink")
# Linking
logger.info("Linking elements in the Pipeline \n")
source.link(caps_v4l2src)
caps_v4l2src.link(vidconvsrc)
vidconvsrc.link(nvvidconvsrc)
nvvidconvsrc.link(caps_nvvidconvsrc)
sinkpad = streammux.get_request_pad("sink_0")
if not sinkpad:
sys.stderr.write(" Unable to get the sink pad of streammux \n")
srcpad = caps_nvvidconvsrc.get_static_pad("src")
if not srcpad:
sys.stderr.write(" Unable to get source pad of caps_nvvidconvsrc \n")
srcpad.link(sinkpad)
streammux.link(pgie)
pgie.link(nvvidconvosd)
nvvidconvosd.link(caps_nvvidconvsrc2)
caps_nvvidconvsrc2.link(nvosd)
nvosd.link(identity)
identity.link(sink)
# create an event loop and feed gstreamer bus messages to it
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

# start playback and clean up on exit
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
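bus_call isn’t shown here; it is a minimal sketch along the lines of the standard deepstream-python-apps helper:

def bus_call(bus, message, loop):
    # Quit the main loop on end-of-stream or on an error message.
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write(f"Error: {err}: {debug}\n")
        loop.quit()
    return True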