Deepstream Pipeline with NVOF and NVOFVISUAL Elements on Jetson Orin Nano

Hello there!

I am trying to get a DeepStream pipeline for optical flow calculation running. The idea is to receive a video stream from a USB camera, perform the optical flow calculation, and send the image produced by nvofvisual out as an RTP stream. The important parts of the Python code look like this:

        # Standard GStreamer initialization
        Gst.init(None)
        # Create GStreamer pipeline
        self.pipeline = Gst.Pipeline()

        usb_cam_source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")

        caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src_caps")
        caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=30/1"))
        
        # Video Converter
        vidconvsrc = Gst.ElementFactory.make("videoconvert", "convertor_src1")
        nvvidconvsrc = Gst.ElementFactory.make("nvvideoconvert", "convertor_src2")

        # Set the caps for nvvidconvsrc
        caps_vidconvsrc = Gst.ElementFactory.make("capsfilter", "nvmm_caps")
        caps_vidconvsrc.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=(string)NV12")) 
        
        caps_vidconvsink = Gst.ElementFactory.make("capsfilter", "nvmm_caps2")
        caps_vidconvsink.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=(string)NV12")) 

        # Create nvstreammux instance to form batches from one or more sources.
        streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
        streammux.set_property('width', 1216)
        streammux.set_property('height', 684)
        streammux.set_property('batch-size', 1)
        streammux.set_property('batched-push-timeout', 4000000)
                
        nvof = Gst.ElementFactory.make("nvof", "nvopticalflow")
        nvof.set_property('preset-level', 0)
        nvofvisual = Gst.ElementFactory.make("nvofvisual", "nvopticalflowvisual")

        # RTP Stream Out
        # Video Converter
        nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
        
        # Queue
        encoder_queue = Gst.ElementFactory.make("queue", "encoder_queue")
        encoder_queue.set_property("leaky", 2) # 2 corresponds to 'downstream' leaky queue behavior
        encoder_queue.set_property('max-size-buffers', 0)
        encoder_queue.set_property('max-size-bytes', 0)
        encoder_queue.set_property('max-size-time', 0)
        # x264 Encoder
        x264enc = Gst.ElementFactory.make('x264enc', 'encoder')
        x264enc.set_property("bitrate", 3000)
        x264enc.set_property("tune", "zerolatency")
        # Create capsfilter element
        caps_x264enc = Gst.ElementFactory.make("capsfilter", "caps_x264enc")
        caps = Gst.Caps.from_string("video/x-h264,profile=main")
        caps_x264enc.set_property("caps", caps)
        #H264 Parser
        h264parse = Gst.ElementFactory.make("h264parse", "parser")
        # RTP Payloader
        rtppay = Gst.ElementFactory.make('rtph264pay', 'rtppay')
        # UDP Sink
        udpsink = Gst.ElementFactory.make('udpsink', 'udpsink')
        udpsink.set_property('host', 'IP_ADDRESS')  # IP address of the destination machine
        udpsink.set_property('port', output_port)  # Any port number as per your choice
        udpsink.set_property("sync", False)
        udpsink.set_property("async", False)
        
        # Add elements to the pipeline
        self.get_logger().info('Adding elements to Pipeline')
        
        #USB camera source
        self.pipeline.add(usb_cam_source)
        self.pipeline.add(caps_v4l2src)
        
        self.pipeline.add(vidconvsrc)
        self.pipeline.add(nvvidconvsrc)
        self.pipeline.add(caps_vidconvsrc)
        self.pipeline.add(streammux)
        
        self.pipeline.add(nvof)
        self.pipeline.add(nvofvisual)

        self.pipeline.add(nvvidconv2)
        self.pipeline.add(encoder_queue)
        self.pipeline.add(x264enc)
        self.pipeline.add(caps_x264enc)
        self.pipeline.add(h264parse)
        self.pipeline.add(rtppay)
        self.pipeline.add(udpsink)

        # Link the elements together
        #USB camera source
        usb_cam_source.link(caps_v4l2src)
        caps_v4l2src.link(vidconvsrc)
        
        vidconvsrc.link(nvvidconvsrc)
        nvvidconvsrc.link(caps_vidconvsrc)
        
        sinkpad = streammux.get_request_pad("sink_0")
        srcpad = caps_vidconvsrc.get_static_pad("src")
        
        srcpad.link(sinkpad)
        
        streammux.link(nvof)
        nvof.link(nvofvisual)
        nvofvisual.link(nvvidconv2)
        nvvidconv2.link(encoder_queue)

        encoder_queue.link(x264enc)
        x264enc.link(caps_x264enc)
        caps_x264enc.link(h264parse)
        h264parse.link(rtppay)
        rtppay.link(udpsink)

        # Create an event loop and feed gstreamer bus messages to it
        self.loop = GLib.MainLoop()
        bus = self.pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message", bus_call, self.loop)
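The bus_call handler used above is not shown; a minimal sketch of such a handler, following the usual pattern from the deepstream_python_apps samples (an assumption, not the exact code used here), would be:

def bus_call(bus, message, loop):
    # Stop the main loop on end-of-stream or on an error message
    t = message.type
    if t == Gst.MessageType.EOS:
        print("End-of-stream")
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print(f"Error: {err}: {debug}")
        loop.quit()
    return True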

When I run the pipeline, I get the following error:

Error: gst-stream-error-quark: Can not initialize x264 encoder. (8): gstx264enc.c(1898): gst_x264_enc_init_encoder (): /GstPipeline:pipeline0/GstX264Enc:encoder

When I run the pipeline without the nvofvisual element, i.e. doing

        streammux.link(nvof)
        nvof.link(nvvidconv2)
        nvvidconv2.link(encoder_queue)

I do receive the camera video via the RTP stream, but (obviously) without the visualization of the optical flow.

Furthermore, using the nvinfer, nvtracker and nvosd elements in the same pipeline (instead of nvof and nvofvisual), like this:

        streammux.link(pgie)
        pgie.link(tracker)
        tracker.link(nvvidconv)
        nvvidconv.link(nvosd)
        nvosd.link(nvvidconv2)
        nvvidconv2.link(encoder_queue)

works as expected and I do receive the video with bounding boxes via the RTP stream.

I do not understand why I get this error. From the DeepStream documentation, it seems that the output types of nvosd and nvofvisual (RGBA output buffer) are the same, so I should be able to exchange the pgie-related and optical-flow-related elements in the pipeline.

I am happy to receive any input on what I could try to get this pipeline working!

Here are some specs of the system I use:
• Hardware Platform (Jetson / GPU): Jetson Orin Nano, 8GB
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.5.2.2

Hi,

Can you try the NVIDIA encoder, nvv4l2h264enc?

If you use the NVIDIA encoder, you can also drop this converter: nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
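On boards that do ship the hardware encoder, the suggested swap might look roughly like this in the Python code above (a sketch; note that nvv4l2h264enc takes its bitrate in bits per second, unlike x264enc, which takes kbit/s):

# hypothetical swap: NVIDIA hardware H.264 encoder in place of x264enc
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
encoder.set_property("bitrate", 3000000)  # bits per second

# nvvidconv2 and the x264-specific capsfilter are dropped
encoder_queue.link(encoder)
encoder.link(h264parse)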

Hi @miguel.taylor,
thanks for your quick reply!

  1. Is nvv4l2h264enc a hardware encoder? As far as I know, the Jetson Orin Nano does not support hardware encoding.

  2. I removed nvvidconv2. I get a new error message (which I had in fact seen earlier while experimenting), but I have since switched to more verbose logging, and the log messages before the exception might be interesting. It seems to me that some frames were processed, but the pipeline then stops because of the lost frames?

0:00:00.436149314 159894     0x1bd0cea0 DEBUG                v4l2src gstv4l2src.c:554:gst_v4l2src_negotiate:<usb-cam-source> fixated to: video/x-raw, framerate=(fraction)30/1, width=(int)640, height=(int)480, format=(string)YUY2, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
gst_ds_optical_flow_set_caps: Creating OpticalFlow Context for Source = 0
0:00:01.235833678 159894     0x1bd0cea0 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source> ts: 5:52:54.514739000 now 5:52:54.706864512 delay 0:00:00.192125512
0:00:01.235876495 159894     0x1bd0cea0 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source> sync to 0:00:00.033333333 out ts 0:00:00.640401493
0:00:01.753197992 159894     0x1bd0cea0 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source> ts: 5:52:54.706748000 now 5:52:55.224229913 delay 0:00:00.517481913
0:00:01.753241577 159894     0x1bd0cea0 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source> sync to 0:00:00.066666666 out ts 0:00:00.832410525
0:00:01.758124756 159894     0x1bd0cea0 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source> ts: 5:52:54.738759000 now 5:52:55.229167973 delay 0:00:00.490408973
0:00:01.758158452 159894     0x1bd0cea0 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source> sync to 0:00:00.099999999 out ts 0:00:00.864422229
0:00:01.761880006 159894     0x1bd0cea0 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source> ts: 5:52:54.770761000 now 5:52:55.232927832 delay 0:00:00.462166832
0:00:01.761905831 159894     0x1bd0cea0 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source> sync to 0:00:00.133333332 out ts 0:00:00.896424741
0:00:01.795777614 159894     0x1bd0cea0 DEBUG                v4l2src gstv4l2src.c:923:gst_v4l2src_create:<usb-cam-source> ts: 5:52:55.226759000 now 5:52:55.266814911 delay 0:00:00.040055911
0:00:01.795818159 159894     0x1bd0cea0 INFO                 v4l2src gstv4l2src.c:960:gst_v4l2src_create:<usb-cam-source> sync to 0:00:00.166666665 out ts 0:00:01.352421717
0:00:01.795835855 159894     0x1bd0cea0 WARN                 v4l2src gstv4l2src.c:978:gst_v4l2src_create:<usb-cam-source> lost frames detected: count = 11 - ts: 0:00:01.352421717
nvstreammux: Successfully handled EOS for source_id=0
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:usb-cam-source:
streaming stopped, reason not-linked (-1)

Sorry, I was thinking of the Jetson Nano, not the Jetson Orin Nano. You are right, the Orin Nano doesn’t support any hardware encoder.

Can you try the pipelines on this wiki to see if you get the same error?

Another option is to try avenc_h264_omx.

I tried the avenc_h264_omx encoder but I am seeing the same error as before:

Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:usb-cam-source:
streaming stopped, reason not-linked (-1)

Could you please explain what I should look out for when I test the pipelines from the wiki? To me, this seems to be an issue with this specific pipeline. As I mentioned, I can successfully run a very similar pipeline by exchanging the optical flow elements for the inference elements, for example.

Can you add a capsfilter before x264enc?

Definitely! Do I need to set some specific properties for this caps element?

I tried these caps

caps_ofvis = Gst.ElementFactory.make("capsfilter", "caps_ofvis")
caps_ofvis.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=(string)NV12")) 

in this pipeline

usb_cam_source.link(caps_v4l2src)
caps_v4l2src.link(vidconvsrc)
  
vidconvsrc.link(nvvidconvsrc)
nvvidconvsrc.link(caps_vidconvsrc)
  
sinkpad = streammux.get_request_pad("sink_0")
srcpad = caps_vidconvsrc.get_static_pad("src")
  
srcpad.link(sinkpad)
  
streammux.link(nvof)
nvof.link(nvofvisual)
nvofvisual.link(encoder_queue)
  
encoder_queue.link(caps_ofvis)
caps_ofvis.link(x264enc)
x264enc.link(caps_x264enc)
caps_x264enc.link(h264parse)
h264parse.link(rtppay)
rtppay.link(udpsink)

But the error I get is still the same.

@spin

You can use “gst-inspect-1.0 x264enc” to see which caps the element accepts.
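The same check can be done from Python; a small sketch that prints the sink pad template caps of x264enc (which are plain video/x-raw, without memory:NVMM):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# Query which caps the encoder's sink pad can accept
enc = Gst.ElementFactory.make("x264enc", None)
print(enc.get_static_pad("sink").query_caps(None).to_string())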

Based on the output of gst-inspect-1.0 x264enc, these caps should be okay:

caps_ofvis.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=(string)NV12")) 

Are you sure there is “video/x-raw(memory:NVMM)” in x264enc (gstreamer.freedesktop.org)?

x264enc is not a DeepStream plugin.

Removing the memory:NVMM part of your caps will probably solve your issue. You can even drop the format field and let caps negotiation pick the best format:

... ! nvvideoconvert ! "video/x-raw" ! x264enc ! ...

I’ve tested that the following pipeline works on Jetson:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4  ! qtdemux name=dux dux.video_0 ! h265parse ! nvv4l2decoder !  mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvof ! queue ! nvofvisual ! queue ! nvstreamdemux name=demux demux.src_0 ! queue ! nvvideoconvert ! x264enc ! mu.video_0 qtmux name=mu ! filesink location=new0.mp4 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! mux.sink_1 demux.src_1 ! queue ! nvvideoconvert ! x264enc ! mu2.video_0 qtmux name=mu2 ! filesink location=new1.mp4

@miguel.taylor
I tested
caps_ofvis.set_property("caps", Gst.Caps.from_string("video/x-raw"))
and also removing the capsfilter entirely. Unfortunately, I still get

Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:usb-cam-source:
streaming stopped, reason not-linked (-1)

@Fiona.Chen
The pipeline you provided works on my device, as well. However, I would like to send the output as an RTP stream.

Actually, I modified the pipeline @Fiona.Chen provided and was able to send the result as an RTP stream:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4  ! qtdemux name=dux dux.video_0 ! h265parse ! nvv4l2decoder !  mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvof ! queue ! nvofvisual ! queue ! nvstreamdemux name=demux demux.src_0 ! queue ! nvvideoconvert ! x264enc ! rtph264pay config-interval=10 pt=96 ! udpsink host=<host_ip> port=<port>
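To view that stream on the destination machine, a receiver along these lines should work (a sketch; the caps must match the payloader settings, here pt=96):

gst-launch-1.0 udpsrc port=<port> caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink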

However, I am still not able to run the pipeline when I replace the filesrc with a v4l2src to use a USB webcam as the source.

@spin

I’ve tried with our USB camera; the following pipeline works well:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=YUY2,width=1280,height=720,framerate=10/1' ! nvvideoconvert ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 live-source=1 batched-push-timeout=100000 ! nvof ! queue ! nvofvisual ! queue ! nvstreamdemux name=demux demux.src_0 ! queue ! nvvideoconvert ! x264enc ! h264parse ! rtph264pay ! udpsink
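Translated back into the Python pipeline from the original post, the relevant differences would be roughly the following (a sketch; the key changes are the live-source flag on the muxer and an nvstreamdemux between nvofvisual and the converter):

# treat the V4L2 input as a live source
streammux.set_property('live-source', 1)

# demux the batch back into a single stream before converting/encoding
streamdemux = Gst.ElementFactory.make("nvstreamdemux", "stream-demuxer")
self.pipeline.add(streamdemux)

nvofvisual.link(streamdemux)
# nvstreamdemux exposes request pads named src_%u, one per source
demux_src = streamdemux.get_request_pad("src_0")
demux_src.link(nvvidconv2.get_static_pad("sink"))
nvvidconv2.link(encoder_queue)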

Hi @Fiona.Chen, the pipeline you provided finally made it work for me, as well. Thanks a lot!

My only complaint would be that, with a stationary camera (non-moving scene), the optical flow visualization is very “noisy”. However, it was similar with the sample_1080p_h265.mp4 video file, so I don’t think it is purely an issue with the camera.

Do you have any ideas on how to improve this?

Currently there are no tuning APIs for nvof (Gst-nvof — DeepStream 6.4 documentation); maybe you can try setting different “preset-level” values on nvof to check whether that helps.
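For example (illustrative only; the valid preset-level values and their meaning are listed by gst-inspect-1.0 nvof):

# try a different optical-flow preset; the code above used 0
nvof.set_property('preset-level', 1)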
