Video Processing stops with Software Encoder

Please provide complete information as applicable to your setup.

• Hardware Platform: dGPU (NVIDIA GeForce GTX 1650)
• DeepStream Version: 6.4
• NVIDIA GPU Driver Version: 525.147.05 / CUDA Version: 12.0
• Issue Type: question

I am building a pipeline that will consume up to 16 video streams, dump them into HLS files, and send the detections to MQTT. I am now trying to build a test setup with one video. I know that not all of the video streams can be encoded in hardware, as the number of encoder sessions on the GPU is limited. Therefore, I am trying to build a working pipeline with both a hardware encoder (nvv4l2h264enc) and a software encoder (x264enc).

I was following the examples from GitHub, but now I am stuck because the software encoder pauses the video processing.

So the working pipeline with a hardware encoder looks as follows:

Pipeline graph (attached): graph_w_hw_encoder.pdf (24.0 KB)

and the code:


def main(args):

    # Standard GStreamer initialization
    Gst.init(None)

    ##############################################################################################
    ### Start parsing and check config file
    ##############################################################################################

    # Parse config file
    config_file = args.config
    config = yaml.safe_load(config_file.open("r", encoding="utf-8"))
    number_sources = len(config["streams"])
    print(f"Number of sources connected {number_sources}")

    stream_names = [stream["name"] for stream in config["streams"]]
    print('\n'.join(f"{i}: {name}" for i, name in enumerate(stream_names)))

    ##############################################################################################
    ### Pipeline
    ##############################################################################################

    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")


    ##############################################################################################
    ### Elements
    ##############################################################################################
        
    # Create File Source
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    
    print('Input uri', config["streams"][0]['camera-stream-url'])

    # Take just one videostream as 0
    source.set_property('location', config["streams"][0]['camera-stream-url'])
    
    ##############################################################################################

    # Since the data format in the input file is elementary h264 stream, we need a h264parser
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    ##############################################################################################
        
    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    ##############################################################################################

    # Although the batch size is 1, nvstreammux must be present: the decoder cannot be linked to pgie directly!
    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    streammux.set_property('batch-size', 1)
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    # streammux.set_property('live-source', 1)

    ##############################################################################################

    nvstreamdemux = Gst.ElementFactory.make("nvstreamdemux", "nvstreamdemux")
    if not nvstreamdemux:
        sys.stderr.write(" Unable to create NvStreamDemux \n")

    ##############################################################################################

    # Use nvinfer to run inferencing on decoder's output, behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    pgie.set_property('config-file-path', "./deepstream/models/dstest1_pgie_config.txt")

    ##############################################################################################

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    preosd_nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "preosd_nvvidconv")
    if not preosd_nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    postosd_nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "postosd_nvvidconv")
    if not postosd_nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    ##############################################################################################
        
    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    ##############################################################################################

    proto_lib = '/opt/nvidia/deepstream/deepstream/lib/libnvds_mqtt_proto.so'
    conn_str = "localhost;1883;wrs"
    cfg_file = '/opt/nvidia/deepstream/deepstream/samples/WRS_AI_Application/deepstream/cfg_mqtt.txt'

    msgbroker = Gst.ElementFactory.make("nvmsgbroker", "nvmsg-broker")
    if not msgbroker:
        sys.stderr.write("Unable to create msgbroker \n")

    msgbroker.set_property('proto-lib', proto_lib)
    msgbroker.set_property('conn-str', conn_str)
    msgbroker.set_property('config', cfg_file)
    # msgbroker.set_property('sync', False)

    ##############################################################################################

    # print('Output uri', config["streams"][0]['user-stream-directory'])

    stream_directory = pathlib.Path(config["streams"][0]["user-stream-directory"])
    stream_directory.mkdir(exist_ok=True)

    # Full Resolution Detection Stream
    output_directory = stream_directory / "detection"
    output_directory.mkdir(exist_ok=True)

    ##############################################################################################

    framerate = 25
    keyframe_secs = 4
    bitrate = 4000

    ####

    hw_encoder = Gst.ElementFactory.make("nvv4l2h264enc", "hw_encoder")
    if not hw_encoder:
        sys.stderr.write(" Unable to create hw_encoder \n")

    hw_encoder.set_property('iframeinterval', keyframe_secs)
    hw_encoder.set_property('bitrate', bitrate*1000)

    ####

    x264enc = Gst.ElementFactory.make("x264enc", "x264enc")
    if not x264enc:
        sys.stderr.write(" Unable to create x264enc \n")

    x264enc.set_property('key-int-max', framerate*keyframe_secs)
    x264enc.set_property('bitrate', bitrate)

    ##############################################################################################

    h264parse = Gst.ElementFactory.make("h264parse")
    if not h264parse:
        sys.stderr.write("Unable to create h264parse \n")

    # If this is not set, the stream will not be saved in playable chunks!
    h264parse.set_property("config-interval", -1)  # -1: insert SPS/PPS before every IDR frame

    ##############################################################################################

    caps_string_raw = "video/x-raw, format=I420, width=1280, height=720"
    caps_raw = Gst.caps_from_string(caps_string_raw)

    capsfilter_raw = Gst.ElementFactory.make("capsfilter", "capsfilter_raw")

    if not capsfilter_raw:
        sys.stderr.write("Unable to create capsfilter \n") 
    
    capsfilter_raw.set_property("caps", caps_raw)

    ##############################################################################################

    max_recording_time = 2000

    # https://gstreamer.freedesktop.org/documentation/hls/hlssink2.html?gi-language=c
    sink = Gst.ElementFactory.make("hlssink2", "hlssink2")
    if not sink:
        sys.stderr.write("Unable to create hlssink2 sink \n")

    sink.set_property('send_keyframe_requests', False)
    sink.set_property('target_duration', keyframe_secs)
    sink.set_property('playlist_length', max_recording_time // keyframe_secs)
    sink.set_property('max_files', max_recording_time // keyframe_secs)
    sink.set_property('location', f"{output_directory}/%05d.ts")
    sink.set_property('playlist_location', f"{output_directory}/playlist.m3u8")
    

    ##############################################################################################

    msgconv = Gst.ElementFactory.make("nvmsgconv", "nvmsg-converter")
    if not msgconv:
        sys.stderr.write(" Unable to create msgconv \n")
    
    msgconv.set_property('config', '/opt/nvidia/deepstream/deepstream/samples/WRS_AI_Application/deepstream/cfg_msgconv.txt')
    msgconv.set_property('payload-type', 0)

    tee = Gst.ElementFactory.make("tee", "nvsink-tee")
    if not tee:
        sys.stderr.write("Unable to create tee \n")

    queue1 = Gst.ElementFactory.make("queue", "nvtee-que1")
    if not queue1:
        sys.stderr.write("Unable to create queue1 \n")

    queue2 = Gst.ElementFactory.make("queue", "nvtee-que2")
    if not queue2:
        sys.stderr.write("Unable to create queue2 \n")

 
    ##############################################################################################
    ### Add Elements to Pipeline and link them
    ##############################################################################################
    
    print("Adding elements to Pipeline \n")

    # Add all elements to pipeline
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)

    pipeline.add(preosd_nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(postosd_nvvidconv)
    
    pipeline.add(tee)
    pipeline.add(queue1)
    pipeline.add(queue2)

    pipeline.add(msgconv)
    pipeline.add(msgbroker)

    pipeline.add(hw_encoder)
    pipeline.add(h264parse)

    # pipeline.add(nvstreamdemux)
    # pipeline.add(capsfilter_nvmm)
    pipeline.add(capsfilter_raw)
    pipeline.add(x264enc)

    pipeline.add(sink)

    # Link them all together
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to get source pad of decoder \n")

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write("Unable to get the sink pad of streammux \n")

    srcpad.link(sinkpad)
    streammux.link(pgie)
    
    pgie.link(preosd_nvvidconv)
    preosd_nvvidconv.link(nvosd)

    nvosd.link(tee)

    queue1.link(msgconv)
    msgconv.link(msgbroker)

    # HARDWARE ENCODER  ####
    queue2.link(hw_encoder)
    hw_encoder.link(h264parse)
    h264parse.link(sink)
    ########################

    # SOFTWARE ENCODER  ####
    # queue2.link(postosd_nvvidconv)
    # postosd_nvvidconv.link(capsfilter_raw)
    # capsfilter_raw.link(x264enc)
    # x264enc.link(h264parse)
    # h264parse.link(sink)
    ################
    
    queue1_sink_pad = queue1.get_static_pad("sink")
    queue2_sink_pad = queue2.get_static_pad("sink")

    tee_msg_pad = tee.get_request_pad('src_0')
    tee_render_pad = tee.get_request_pad("src_1")

    if not tee_msg_pad or not tee_render_pad:
        sys.stderr.write("Unable to get request pads\n")

    tee_msg_pad.link(queue1_sink_pad)
    tee_render_pad.link(queue2_sink_pad)


    # msgbroker.link(sink)


    ##############################################################################################
    ### Initiate Pipeline
    ##############################################################################################

    # create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)


    # Let's add a probe to be informed of the generated metadata. We add the probe
    # to the sink pad of the osd element, since by that time the buffer will have
    # received all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    ##############################################################################################
    ### Start Pipeline
    ##############################################################################################

    # List the sources
    print("Now playing...")
    # for i, source in enumerate(number_sources):
    #     print(i, ": ", source)

    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)

    try:

        if dot_dir := os.environ.get("GST_DEBUG_DUMP_DOT_DIR", None):
            dot_file = pathlib.Path(dot_dir) / "pipeline-graph.dot"
            print(f"Save pipeline graph to {dot_file}")
            Gst.debug_bin_to_dot_file(
                pipeline, Gst.DebugGraphDetails.NON_DEFAULT_PARAMS, dot_file.stem
            )
        loop.run()

    except KeyboardInterrupt:
        pass
    # cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)


And this is working. However, if I comment out the "Hardware Encoder" part and uncomment the "Software Encoder" part, the pipeline stops after 3 or 4 frames with no error message.

At GST_DEBUG level 5, the only line that makes sense to me is the last one:

0:00:06.779105920   140 0x55b1a87029e0 DEBUG         queue_dataflow gstqueue.c:1520:gst_queue_loop:<nvtee-que2> queue is empty

Can you maybe help me understand why the video processing pauses and stops working?

Many thanks in advance!

  1. DS 6.4 needs CUDA 12.2 and driver R535.104.12. If the versions do not match, there may be compatibility issues. See dGPU model Platform and OS Compatibility.
  2. Could you modify your source code so that plugins are not added to the pipeline if they are not used?

Ok, I think I might need to add some information…

I am running directly in the DeepStream 6.4 docker container.

I checked via "ls /usr/local" and I have CUDA 12.2:

bin  cuda  cuda-12  cuda-12.2  dcgm  etc  games  include  lib  man  mpi  sbin  share  src  ucx

nvidia-smi still shows this:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
| N/A   54C    P0    15W /  50W |    134MiB /  4096MiB |     32%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

but if I run nvcc --version, I get:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jul_11_02:20:44_PDT_2023
Cuda compilation tools, release 12.2, V12.2.128
Build cuda_12.2.r12.2/compiler.33053471_0

So this should be ok, right?

And another question, why is this important when I want to use a software encoder?

No. The driver inside the docker container depends on your host.

It is not just the encoder: every NVIDIA GStreamer plugin uses CUDA acceleration.

I updated CUDA, but it did not help…

The process gets stuck with the same message as described above…

I have now:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07             Driver Version: 535.161.07   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1650        On  | 00000000:01:00.0 Off |                  N/A |
| N/A   46C    P0              14W /  50W |   2055MiB /  4096MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2527      G   /usr/lib/xorg/Xorg                            4MiB |
|    0   N/A  N/A      9730      C   python3                                     682MiB |
|    0   N/A  N/A     10176      C   python3                                     682MiB |
|    0   N/A  N/A     10616      C   python3                                     682MiB |
+---------------------------------------------------------------------------------------+

I also modified the code which looks as follows:


def main(args):

    # Standard GStreamer initialization
    Gst.init(None)

    ##############################################################################################
    ### Start parsing and check config file
    ##############################################################################################

    # Parse config file
    config_file = args.config
    config = yaml.safe_load(config_file.open("r", encoding="utf-8"))
    number_sources = len(config["streams"])
    print(f"Number of sources connected {number_sources}")

    stream_names = [stream["name"] for stream in config["streams"]]
    print('\n'.join(f"{i}: {name}" for i, name in enumerate(stream_names)))

    ##############################################################################################
    ### Pipeline
    ##############################################################################################

    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")


    ##############################################################################################
    ### Elements
    ##############################################################################################
        
    # Create File Source
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    
    print('Input uri', config["streams"][0]['camera-stream-url'])

    # Take just one videostream as 0
    source.set_property('location', config["streams"][0]['camera-stream-url'])
    
    ##############################################################################################

    # Since the data format in the input file is elementary h264 stream, we need a h264parser
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    ##############################################################################################
        
    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    ##############################################################################################

    # Although the batch size is 1, nvstreammux must be present: the decoder cannot be linked to pgie directly!
    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    streammux.set_property('batch-size', 1)
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    # streammux.set_property('live-source', 1)

    ##############################################################################################

    nvstreamdemux = Gst.ElementFactory.make("nvstreamdemux", "nvstreamdemux")
    if not nvstreamdemux:
        sys.stderr.write(" Unable to create NvStreamDemux \n")

    ##############################################################################################

    # Use nvinfer to run inferencing on decoder's output, behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    pgie.set_property('config-file-path', "./deepstream/models/dstest1_pgie_config.txt")

    ##############################################################################################

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    preosd_nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "preosd_nvvidconv")
    if not preosd_nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    postosd_nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "postosd_nvvidconv")
    if not postosd_nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    ##############################################################################################
        
    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    ##############################################################################################

    proto_lib = '/opt/nvidia/deepstream/deepstream/lib/libnvds_mqtt_proto.so'
    conn_str = "localhost;1883;wrs"
    cfg_file = '/opt/nvidia/deepstream/deepstream/samples/WRS_AI_Application/deepstream/cfg_mqtt.txt'

    msgbroker = Gst.ElementFactory.make("nvmsgbroker", "nvmsg-broker")
    if not msgbroker:
        sys.stderr.write("Unable to create msgbroker \n")

    msgbroker.set_property('proto-lib', proto_lib)
    msgbroker.set_property('conn-str', conn_str)
    msgbroker.set_property('config', cfg_file)
    # msgbroker.set_property('sync', False)

    ##############################################################################################

    stream_directory = pathlib.Path(config["streams"][0]["user-stream-directory"])
    stream_directory.mkdir(exist_ok=True)

    # Full Resolution Detection Stream
    output_directory = stream_directory / "detection"
    output_directory.mkdir(exist_ok=True)

    ##############################################################################################

    framerate = 25
    keyframe_secs = 4
    bitrate = 4000

    ####

    x264enc = Gst.ElementFactory.make("x264enc", "x264enc")
    if not x264enc:
        sys.stderr.write(" Unable to create x264enc \n")

    x264enc.set_property('key-int-max', framerate*keyframe_secs)
    x264enc.set_property('bitrate', bitrate)

    ##############################################################################################

    h264parse = Gst.ElementFactory.make("h264parse")
    if not h264parse:
        sys.stderr.write("Unable to create h264parse \n")

    # If this is not set, the stream will not be saved in playable chunks!
    h264parse.set_property("config-interval", -1)  # -1: insert SPS/PPS before every IDR frame

    ##############################################################################################

    caps_string_raw = "video/x-raw, format=I420, width=1280, height=720"
    caps_raw = Gst.caps_from_string(caps_string_raw)

    capsfilter_raw = Gst.ElementFactory.make("capsfilter", "capsfilter_raw")

    if not capsfilter_raw:
        sys.stderr.write("Unable to create capsfilter \n") 
    
    capsfilter_raw.set_property("caps", caps_raw)

    ##############################################################################################

    max_recording_time = 2000

    # https://gstreamer.freedesktop.org/documentation/hls/hlssink2.html?gi-language=c
    sink = Gst.ElementFactory.make("hlssink2", "hlssink2")
    if not sink:
        sys.stderr.write("Unable to create hlssink2 sink \n")

    sink.set_property('send_keyframe_requests', False)
    sink.set_property('target_duration', keyframe_secs)
    sink.set_property('playlist_length', max_recording_time // keyframe_secs)
    sink.set_property('max_files', max_recording_time // keyframe_secs)
    sink.set_property('location', f"{output_directory}/%05d.ts")
    sink.set_property('playlist_location', f"{output_directory}/playlist.m3u8")

    ##############################################################################################

    msgconv = Gst.ElementFactory.make("nvmsgconv", "nvmsg-converter")
    if not msgconv:
        sys.stderr.write(" Unable to create msgconv \n")
    
    msgconv.set_property('config', '/opt/nvidia/deepstream/deepstream/samples/WRS_AI_Application/deepstream/cfg_msgconv.txt')
    msgconv.set_property('payload-type', 0)

    tee = Gst.ElementFactory.make("tee", "nvsink-tee")
    if not tee:
        sys.stderr.write("Unable to create tee \n")

    queue1 = Gst.ElementFactory.make("queue", "nvtee-que1")
    if not queue1:
        sys.stderr.write("Unable to create queue1 \n")

    queue2 = Gst.ElementFactory.make("queue", "nvtee-que2")
    if not queue2:
        sys.stderr.write("Unable to create queue2 \n")
 
    ##############################################################################################
    ### Add Elements to Pipeline and link them
    ##############################################################################################
    
    print("Adding elements to Pipeline \n")

    # Add all elements to pipeline
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)

    pipeline.add(preosd_nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(postosd_nvvidconv)
    
    pipeline.add(tee)
    pipeline.add(queue1)
    pipeline.add(queue2)

    pipeline.add(msgconv)
    pipeline.add(msgbroker)

    pipeline.add(h264parse)
    pipeline.add(capsfilter_raw)
    pipeline.add(x264enc)

    pipeline.add(sink)

    # Link them all together
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to get source pad of decoder \n")

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write("Unable to get the sink pad of streammux \n")

    srcpad.link(sinkpad)
    streammux.link(pgie)
    
    pgie.link(preosd_nvvidconv)
    preosd_nvvidconv.link(nvosd)

    nvosd.link(tee)

    queue1.link(msgconv)
    msgconv.link(msgbroker)

    # HARDWARE ENCODER  ####
    # queue2.link(hw_encoder)
    # hw_encoder.link(h264parse)
    # h264parse.link(sink)
    ########################

    # SOFTWARE ENCODER  ####
    queue2.link(postosd_nvvidconv)
    postosd_nvvidconv.link(capsfilter_raw)
    capsfilter_raw.link(x264enc)
    x264enc.link(h264parse)
    h264parse.link(sink)
    ################
    
    queue1_sink_pad = queue1.get_static_pad("sink")
    queue2_sink_pad = queue2.get_static_pad("sink")

    tee_msg_pad = tee.get_request_pad('src_0')
    tee_render_pad = tee.get_request_pad("src_1")

    if not tee_msg_pad or not tee_render_pad:
        sys.stderr.write("Unable to get request pads\n")

    tee_msg_pad.link(queue1_sink_pad)
    tee_render_pad.link(queue2_sink_pad)

    ##############################################################################################
    ### Initiate Pipeline
    ##############################################################################################

    # create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)


    # Let's add a probe to be informed of the generated metadata. We add the probe
    # to the sink pad of the osd element, since by that time the buffer will have
    # received all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    ##############################################################################################
    ### Start Pipeline
    ##############################################################################################

    # List the sources
    print("Now playing...")
    # for i, source in enumerate(number_sources):
    #     print(i, ": ", source)

    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)

    try:

        if dot_dir := os.environ.get("GST_DEBUG_DUMP_DOT_DIR", None):
            dot_file = pathlib.Path(dot_dir) / "pipeline-graph.dot"
            print(f"Save pipeline graph to {dot_file}")
            Gst.debug_bin_to_dot_file(
                pipeline, Gst.DebugGraphDetails.NON_DEFAULT_PARAMS, dot_file.stem
            )
        loop.run()

    except KeyboardInterrupt:
        pass
    # cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)


Can you debug this problem step by step? You can follow these two steps to figure out which plugin is causing the problem.

  1. Remove the msgconv and msgbroker branch
  2. Replace the pipeline plugins with fakesink, from back to front

I removed the msgbroker branch and indeed it worked.

I did not really understand what you meant by point 2, sorry… what does "back to front" mean?

I replaced the sink element with:

    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

but it did not work…

If I add the queue back for the msgbroker, at GST_DEBUG level 5 it still shows the same message:

0:00:06.779105920   140 0x55b1a87029e0 DEBUG         queue_dataflow gstqueue.c:1520:gst_queue_loop:<nvtee-que2> queue is empty

That means the msgconv and msgbroker branch is causing the problem. Could you try referring to our demo deepstream_test_4.py and running that first?
There may be a problem with your broker server.
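If the broker server is in doubt, one quick way to rule out basic connectivity (a hypothetical check, not something suggested in the thread) is to parse the DeepStream-style conn-str ("host;port;topic", e.g. "localhost;1883;wrs" above) and probe the TCP port:

```python
import socket

def parse_conn_str(conn_str: str):
    """Split a DeepStream-style 'host;port;topic' connection string."""
    host, port, topic = conn_str.split(";")
    return host, int(port), topic

def broker_reachable(conn_str: str, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections on the broker port."""
    host, port, _topic = parse_conn_str(conn_str)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(parse_conn_str("localhost;1883;wrs"))
```

This only verifies that a server is listening; it does not exercise the MQTT protocol itself.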

I doubt that, as this code works with the MQTT branch when using the hardware encoder… it only fails when I use the software encoder part.

If I structure the code like this:

    # HARDWARE ENCODER  ####
    queue2.link(hw_encoder)
    hw_encoder.link(h264parse)
    h264parse.link(sink)
    ########################

    # SOFTWARE ENCODER  ####
    # queue2.link(postosd_nvvidconv)
    # postosd_nvvidconv.link(capsfilter_raw)
    # capsfilter_raw.link(x264enc)
    # x264enc.link(h264parse)
    # h264parse.link(sink)
    ################

it works… How can that be?

And I was closely following the test4 example and set everything up the same way.

OK. Then you can use the 2nd method I suggested to narrow down which plugin is causing the problem.

1st round: change the sink to fakesink
2nd round: change the h264parse to fakesink
3rd round: change the x264enc to fakesink
......
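The "back to front" rounds above can be read as a bisection: each round replaces one more element, starting from the sink, with a fakesink until the stall disappears; the first element whose removal makes the pipeline run again is the culprit. A small illustrative sketch (the element names come from this thread's software-encoder branch; the helper function itself is hypothetical):

```python
def fakesink_rounds(branch):
    """Given the ordered element names of a branch (queue to sink),
    return one truncated variant per debugging round: each round cuts
    one more element from the back and terminates in a fakesink."""
    rounds = []
    for cut in range(len(branch) - 1, 0, -1):
        rounds.append(branch[:cut] + ["fakesink"])
    return rounds

branch = ["queue2", "postosd_nvvidconv", "capsfilter_raw",
          "x264enc", "h264parse", "hlssink2"]
for i, tail in enumerate(fakesink_rounds(branch), start=1):
    print(f"round {i}: " + " ! ".join(tail))
```

Each printed line is one candidate pipeline tail to try; rebuild the branch accordingly for each round and check whether the stall still occurs.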

Many thanks for the advice.

1st round: same error
2nd round: same error
3rd round: no error - it works.

What should I do next? :)

OK. This is clearly an x264enc compatibility issue. You can try configuring some x264enc parameters such as speed-preset, sps-id, and so on.

For anyone encountering the same issue: I resolved it by adding the following property:

x264enc.set_property('tune', 'zerolatency') 
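Why this likely helps (my reading, not stated in the thread): with default settings, x264enc buffers a lookahead window of frames and uses B-frames, so it emits nothing until that window fills; in a live, tee'd pipeline that initial buffering can stall the branch. tune=zerolatency disables the lookahead and B-frames so each input frame produces output immediately. A hedged summary of such a configuration (the property names are real x264enc properties; speed-preset is an extra assumption not used in the thread, and the numeric values mirror the code above):

```python
# Properties for the software-encoder branch; they would be applied with
# x264enc.set_property(name, value) as in the pipeline code above.
x264_props = {
    "tune": "zerolatency",        # the fix from this thread: no lookahead/B-frame buffering
    "speed-preset": "ultrafast",  # assumption: trade quality for CPU when encoding many streams
    "key-int-max": 25 * 4,        # framerate * keyframe_secs -> one keyframe per HLS segment
    "bitrate": 4000,              # kbit/s
}
for name, value in x264_props.items():
    print(f"{name}={value}")
```

key-int-max must stay aligned with hlssink2's target-duration, since hlssink2 can only cut segments at keyframes.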

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.