New NvStreammux shows "[ERROR push 317] push failed [-5]"

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : dGPU, Jetson
• DeepStream Version : 6.0, 6.1
• JetPack Version (valid for Jetson only) : 4.6 (DS6.0), 5.0.2 (DS6.1)
• TensorRT Version : 8.0.1 (DS6.0), 8.4.1 (DS6.1)
• NVIDIA GPU Driver Version (valid for GPU only) : 525.147.05
• Issue Type( questions, new requirements, bugs) : questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I want to use the new nvstreammux element.
But I sometimes get this message when I run the command below with the new nvstreammux on dGPU (DS6.0, DS6.1) and Jetson (DS6.0, DS6.1).
What does the message "[ERROR push 317] push failed [-5]" mean?

I run this command in the docker container.

Command (if you cannot reproduce it, run the pipeline repeatedly):

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_0 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_0 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_1 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_1 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_2 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_2 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_3 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_3 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_4 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_4 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_5 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_5 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_6 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_6 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_7 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_7 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_8 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_8 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_9 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_9 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_10 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_10 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_11 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_11 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_12 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_12 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_13 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_13 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_14 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_14 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=source_15 ! tee ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! mux.sink_15 \
nvstreammux name=mux batch-size=16 ! \
nvstreamdemux name=demux \
demux.src_0 ! fakesink async=false

DeepStream 6.0 output (dGPU)

max_fps_dur 8.33333e+06 min_fps_dur 2e+08
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
max_fps_dur 8.33333e+06 min_fps_dur 2e+08
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:26.270019690
Setting pipeline to PAUSED ...
[ERROR push 317] push failed [-5]
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
For the new streammux, the properties below are necessary:

nvstreammux sync-inputs=1 name=mux batch-size=16 ! \
nvstreamdemux name=demux \
demux.src_0 ! fakesink sync=true
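
If you build the same pipeline from application code instead of gst-launch, a minimal sketch of setting these properties looks like the following (the mux and sink element pointers are assumed to have been created already):

#include <gst/gst.h>

/* Sketch: the same properties set from application code.
 * Property names are the ones used in the gst-launch line above. */
static void configure_new_streammux(GstElement *mux, GstElement *sink)
{
    g_object_set(mux, "sync-inputs", TRUE, NULL);   /* nvstreammux sync-inputs=1 */
    g_object_set(sink, "sync", TRUE, NULL);         /* fakesink sync=true        */
}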

Thank you.
But I still have 4 questions.

  1. Does this "[ERROR push 317] push failed [-5]" message affect the processing of the pipeline?

  2. I want to understand what "[ERROR push 317] push failed [-5]" means. Could you explain the numbers 317 and -5?

  3. This may be covered by question 2: does the message appear because the pipeline is running asynchronously with the new nvstreammux?

  4. When I run deepstream-test5-app with the new nvstreammux and stop it with the "q" key on DS6.0 (dGPU, Jetson) and DS6.1 (dGPU, Jetson), I get the messages below.
    As with questions 1 and 2, do these messages affect the processing of the pipeline? And what do the numbers 334, 315, and -2 mean?
    Messages:

    • DS6.1(dGPU):[ERROR push 334] push failed [-2]
    • DS6.0(dGPU):[ERROR push 317] push failed [-2]
    • DS6.1(Jetson):[ERROR push 334] push failed [-2]
    • DS6.0(Jetson):[ERROR push 315] push failed [-2]

    I run this command in the docker container.

    I use this config file

    ################################################################################
    # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
    #
    # Permission is hereby granted, free of charge, to any person obtaining a
    # copy of this software and associated documentation files (the "Software"),
    # to deal in the Software without restriction, including without limitation
    # the rights to use, copy, modify, merge, publish, distribute, sublicense,
    # and/or sell copies of the Software, and to permit persons to whom the
    # Software is furnished to do so, subject to the following conditions:
    #
    # The above copyright notice and this permission notice shall be included in
    # all copies or substantial portions of the Software.
    #
    # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
    # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    # DEALINGS IN THE SOFTWARE.
    ################################################################################
    
    [application]
    enable-perf-measurement=1
    perf-measurement-interval-sec=5
    #gie-kitti-output-dir=streamscl
    
    [tiled-display]
    enable=1
    rows=2
    columns=2
    width=1280
    height=720
    gpu-id=0
    #(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
    #(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
    #(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
    #(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
    #(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
    nvbuf-memory-type=0
    
    
    [source0]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI
    type=3
    uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
    num-sources=2
    gpu-id=0
    nvbuf-memory-type=0
    
    [source1]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI
    type=3
    uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
    num-sources=2
    gpu-id=0
    nvbuf-memory-type=0
    
    [sink0]
    enable=1
    #Type - 1=FakeSink 2=EglSink 3=File
    type=1
    sync=1
    source-id=0
    gpu-id=0
    nvbuf-memory-type=0
    
    [sink1]
    enable=0
    #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
    type=6
    msg-conv-config=dstest5_msgconv_sample_config.txt
    #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
    #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
    #(256): PAYLOAD_RESERVED - Reserved type
    #(257): PAYLOAD_CUSTOM   - Custom schema payload
    msg-conv-payload-type=0
    msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
    #Provide your msg-broker-conn-str here
    msg-broker-conn-str=<host>;<port>;<topic>
    topic=<topic>
    #Optional:
    #msg-broker-config=../../deepstream-test4/cfg_kafka.txt
    
    [sink2]
    enable=0
    type=3
    #1=mp4 2=mkv
    container=1
    #1=h264 2=h265 3=mpeg4
    ## only SW mpeg4 is supported right now.
    codec=3
    sync=1
    bitrate=2000000
    output-file=out.mp4
    source-id=0
    
    # sink type = 6 by default creates msg converter + broker.
    # To use multiple brokers use this group for converter and use
    # sink type = 6 with disable-msgconv = 1
    [message-converter]
    enable=0
    msg-conv-config=dstest5_msgconv_sample_config.txt
    #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
    #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
    #(256): PAYLOAD_RESERVED - Reserved type
    #(257): PAYLOAD_CUSTOM   - Custom schema payload
    msg-conv-payload-type=0
    # Name of library having custom implementation.
    #msg-conv-msg2p-lib=<val>
    # Id of component in case only selected message to parse.
    #msg-conv-comp-id=<val>
    
    # Configure this group to enable cloud message consumer.
    [message-consumer0]
    enable=0
    proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
    conn-str=<host>;<port>
    config-file=<broker config file e.g. cfg_kafka.txt>
    subscribe-topic-list=<topic1>;<topic2>;<topicN>
    # Use this option if message has sensor name as id instead of index (0,1,2 etc.).
    #sensor-list-file=dstest5_msgconv_sample_config.txt
    
    [osd]
    enable=1
    gpu-id=0
    border-width=1
    text-size=15
    text-color=1;1;1;1;
    text-bg-color=0.3;0.3;0.3;1
    font=Arial
    show-clock=0
    clock-x-offset=800
    clock-y-offset=820
    clock-text-size=12
    clock-color=1;0;0;0
    nvbuf-memory-type=0
    
    [streammux]
    gpu-id=0
    ##Boolean property to inform muxer that sources are live
    live-source=0
    batch-size=4
    ##time out in usec, to wait after the first buffer is available
    ##to push the batch even if the complete batch is not formed
    batched-push-timeout=40000
    ## Set muxer output width and height
    width=1920
    height=1080
    ##Enable to maintain aspect ratio wrt source, and allow black borders, works
    ##along with width, height properties
    enable-padding=0
    nvbuf-memory-type=0
    ## If set to TRUE, system timestamp will be attached as ntp timestamp
    ## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
    # attach-sys-ts-as-ntp=1
    
    [primary-gie]
    enable=1
    gpu-id=0
    batch-size=4
    ## 0=FP32, 1=INT8, 2=FP16 mode
    bbox-border-color0=1;0;0;1
    bbox-border-color1=0;1;1;1
    bbox-border-color2=0;1;1;1
    bbox-border-color3=0;1;0;1
    nvbuf-memory-type=0
    interval=0
    gie-unique-id=1
    model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
    labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
    config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
    #infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/
    
    [tracker]
    enable=1
    # For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
    tracker-width=640
    tracker-height=384
    ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
    # ll-config-file required to set different tracker types
    # ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
    ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
    # ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
    # ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
    gpu-id=0
    enable-batch-process=1
    enable-past-frame=1
    display-tracking-id=1
    
    [tests]
    file-loop=0
    

    Command

    cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5/configs
    vi test5_config_file_src_infer.txt # Change config file
    USE_NEW_NVSTREAMMUX=yes deepstream-test5-app -c test5_config_file_src_infer.txt
    # Stop by q key
    

    Example log (dGPU, DS6.0)

    max_fps_dur 8.33333e+06 min_fps_dur 2e+08
    
    (deepstream-test5-app:2401): GLib-GObject-WARNING **: 08:01:19.969: g_object_set_is_valid_property: object class 'GstNvStreamMux' has no property named 'buffer-pool-size'
    gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
    gstnvtracker: Batch processing is ON
    gstnvtracker: Past frame output is ON
    [NvMultiObjectTracker] Initialized
    0:00:01.574101596  2401 0x56183514b330 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
    INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
    0   INPUT  kFLOAT input_1         3x368x640       
    1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
    2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         
    
    0:00:01.574165430  2401 0x56183514b330 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
    0:00:01.574937372  2401 0x56183514b330 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/configs/deepstream-app/config_infer_primary.txt sucessfully
    
    Runtime commands:
            h: Print this help
            q: Quit
    
            p: Pause
            r: Resume
    
    NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
        To go back to the tiled display, right-click anywhere on the window.
    
    ** INFO: <bus_callback:194>: Pipeline ready
    
    max_fps_dur 8.33333e+06 min_fps_dur 2e+08
    ** INFO: <bus_callback:180>: Pipeline running
    
    WARNING; playback mode used with URI [file:///opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
    WARNING; playback mode used with URI [file:///opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
    WARNING; playback mode used with URI [file:///opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
    WARNING; playback mode used with URI [file:///opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test5/configs/../../../../../samples/streams/sample_1080p_h264.mp4] not conforming to timestamp format; check README; using system-time
    q
    Quitting
    [NvMultiObjectTracker] De-initialized
    [ERROR push 317] push failed [-2]
    App run successful
    

gst-nvmultistream2 is open source. You can refer to the function bool GstBatchBufferWrapper::push(SourcePad *src_pad, unsigned long pts).

The message comes from the muxer's error logging (see https://gstreamer.freedesktop.org/documentation/gstreamer/gstinfo.html?gi-language=c#GST_ERROR_OBJECT).

The first number (317, 334, or 315, depending on the DeepStream version and platform) is the source code line number where the push failed.

The second number is the return value of the push, a GstFlowReturn (see https://gstreamer.freedesktop.org/documentation/gstreamer/gstpad.html?gi-language=c#enumerations): -5 is GST_FLOW_ERROR and -2 is GST_FLOW_FLUSHING.
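
For illustration only, the pattern looks roughly like the sketch below. This is not the actual DeepStream source; the LOG_PUSH_FAILED macro and push_batch function are hypothetical, but gst_pad_push() and the GstFlowReturn codes are standard GStreamer.

#include <gst/gst.h>

/* Hypothetical logging macro: prints the source line number (the "317"/"334")
 * and the GstFlowReturn value (the "[-5]"/"[-2]") of a failed push. */
#define LOG_PUSH_FAILED(ret) \
    g_print("[ERROR push %d] push failed [%d]\n", __LINE__, (int)(ret))

static gboolean push_batch(GstPad *src_pad, GstBuffer *batch)
{
    GstFlowReturn ret = gst_pad_push(src_pad, batch);  /* hand the batched buffer downstream */
    if (ret != GST_FLOW_OK) {
        /* Typical values seen in this thread:
         *   -2 = GST_FLOW_FLUSHING  (downstream is flushing, e.g. during shutdown)
         *   -5 = GST_FLOW_ERROR     (a fatal error occurred downstream)            */
        LOG_PUSH_FAILED(ret);
        return FALSE;
    }
    return TRUE;
}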

When I run the following command with DS6.1 using a file source, the FPS drops to 0 before all of the data has been inferred, and only the seconds that were actually inferred end up in the output video.
The nvstreammux sync-inputs property seems to work only for live sources; can it be used with file sources?
https://forums.developer.nvidia.com/t/deepstream-audio-with-sync-inputs/290725/3?u=y.majima
If it can be used, please let me know how to set it up correctly.

Command

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 -e uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! m.sink_0 nvstreammux name=m batch-size=1 sync-inputs=1 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! nvdslogger ! nvvideoconvert ! nvdsosd ! nvstreamdemux name=demux demux.src_0 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4
max_fps_dur 8.33333e+06 min_fps_dur 2e+08
Setting pipeline to PAUSED ...
0:00:02.480207329   754 0x5558e68ccd30 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:02.497471382   754 0x5558e68ccd30 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
0:00:02.503941560   754 0x5558e68ccd30 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Pipeline is PREROLLING ...
max_fps_dur 8.33333e+06 min_fps_dur 2e+08
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
**PERF : FPS_0 (0.00)
Got EOS from element "pipeline0".
Execution ended after 0:00:48.240668324
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Yes, of course.

Some tips

  1. sync-inputs is used to synchronize multiple inputs, so it is best to use multiple streams for testing.

  2. For local files, you need to increase the max-latency parameter, otherwise most frames will be discarded.

Try the following command line.

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 -e \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! nvvideoconvert ! m.sink_0 \
uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! nvvideoconvert ! m.sink_1 \
nvstreammux name=m batch-size=2 sync-inputs=1 max-latency=33000000 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! \
nvdslogger fps-measurement-interval-sec=1 ! nvvideoconvert ! nvdsosd ! \
nvstreamdemux name=demux \
demux.src_0 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4 sync=1 \
demux.src_1 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out1.mp4 sync=1

I have 2 questions.

  1. Do I have to use sync-inputs if I run a pipeline with multiple inputs?
  2. Does the "[ERROR push 317] push failed [-5]" message from the first comment affect the processing of the pipeline?
    I referred to the function bool GstBatchBufferWrapper::push(SourcePad *src_pad, unsigned long pts), but I still don't understand whether the message affects the processing of the pipeline.

Not required; this just forces synchronization of multiple inputs.

No impact. The problem mentioned in the first comment occurs simply because you have 16 inputs but only 1 output.
The pipeline needs to be like the following:

demux.src_0 ! fakesink 
demux.src_1 ! fakesink 
.......
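
For an application-level pipeline (for example deepstream-test5-app style code), the equivalent fix is to request one src pad per source from nvstreamdemux and give each its own sink. A minimal sketch, assuming the pipeline and demux elements already exist and pad names follow the nvstreamdemux src_%u template:

#include <gst/gst.h>

/* Sketch: attach a fakesink to every nvstreamdemux output so that no
 * demuxed stream is left without a downstream element. */
static void link_all_demux_outputs(GstElement *pipeline, GstElement *demux,
                                   guint num_sources)
{
    for (guint i = 0; i < num_sources; i++) {
        gchar *pad_name = g_strdup_printf("src_%u", i);
        GstPad *srcpad = gst_element_get_request_pad(demux, pad_name);

        GstElement *sink = gst_element_factory_make("fakesink", NULL);
        g_object_set(sink, "async", FALSE, NULL);
        gst_bin_add(GST_BIN(pipeline), sink);
        gst_element_sync_state_with_parent(sink);

        GstPad *sinkpad = gst_element_get_static_pad(sink, "sink");
        gst_pad_link(srcpad, sinkpad);  /* demux.src_i -> fakesink */

        gst_object_unref(sinkpad);
        gst_object_unref(srcpad);
        g_free(pad_name);
    }
}

In the gst-launch case the same thing is achieved simply by adding one demux.src_N ! fakesink branch per input, as shown above.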

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.