Output video file is corrupted when using the new nvstreammux

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : dGPU / Jetson
• DeepStream Version : 6.1.1
• JetPack Version (valid for Jetson only) : 5.0.2
• TensorRT Version : 8.4.1
• NVIDIA GPU Driver Version (valid for GPU only) : 525.147.05
• Issue Type( questions, new requirements, bugs) : questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I ran the deepstream-test5-app sample with the following configuration file, using the new nvstreammux. The input was RTSP, so I pressed the q key to terminate the application in the middle of the run.
When I checked the output, the video file was corrupted.

  1. Is this the intended behavior (specification) of the new nvstreammux?
  2. If it is a bug, how should I fix the source code?

I ran this command in the Docker container.

Command:

USE_NEW_NVSTREAMMUX=yes deepstream-test5-app -c test5_config_file_src_infer.txt

I used this config file:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0 # change to disable
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4 # change to RTSP Type
uri=rtsp://<IP>:554/test.mpeg4 # change to RTSP
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4 # change to RTSP Type
uri=rtsp://<IP>:554/test.mpeg4 # change to RTSP
num-sources=2
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=0 # change to disable
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0 # change to disable
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=1 # change to enable
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=1 # change to 1
sync=1
bitrate=2000000
output-file=out1.mp4 # change to file name
source-id=0

[sink3] # add sink group
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=1
sync=1
bitrate=2000000
output-file=out2.mp4
source-id=1

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
#sensor-list-file=dstest5_msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=1
# For NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=0

Please find, compare, and modify the following code in the /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5/deepstream_test5_app_main.c file:

        case KeyRelease:
        {
          KeyCode p, r, q;      /* XKeysymToKeycode() returns a KeyCode */
          guint i;
          p = XKeysymToKeycode (display, XK_P);
          r = XKeysymToKeycode (display, XK_R);
          q = XKeysymToKeycode (display, XK_Q);
          if (e.xkey.keycode == p) {
            for (i = 0; i < num_instances; i++)
              pause_pipeline (appCtx[i]);
            break;
          }
          if (e.xkey.keycode == r) {
            for (i = 0; i < num_instances; i++)
              resume_pipeline (appCtx[i]);
            break;
          }
          if (e.xkey.keycode == q) {
            quit = TRUE;
            for (i = 0; i < num_instances; i++)
              gst_element_send_event (appCtx[i]->pipeline.pipeline,
                  gst_event_new_eos ());
            g_main_loop_quit (main_loop);
          }
        }
          break;

I modified the source code as you suggested.
In addition, I added the same EOS handling to the terminal keyboard input path.
However, the output video file was still corrupted.

I have a question.
Is the process of sending EOS to the sink pad of nvstreamdemux in the destroy_pipeline function of deepstream-app.c unsupported when using the new nvstreammux?

With the old nvstreammux, a probe that receives EOS shows that sending EOS to the nvstreamdemux sink pad works: the EOS is forwarded to the downstream filesink.
With the new nvstreammux, however, the EOS is delivered only to the nvstreamdemux sink pad and never reaches the downstream filesink.

I also tried your source code changes plus removing the code that sends EOS to the sink pad of nvstreamdemux in the destroy_pipeline function.
With that, the output video file was not corrupted, as expected.
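That result is consistent with how MP4 writing works: if EOS never reaches qtmux/filesink, the muxer never writes the moov index box, and the file is unplayable even though the mdat payload is on disk. A small self-contained check for a top-level moov box, following the ISO BMFF box layout (plain C, no DeepStream/GStreamer code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Walk top-level ISO BMFF boxes in a byte buffer; return 1 if a 'moov'
 * box exists, else 0.  Sketch only: 64-bit (size == 1) and to-end-of-file
 * (size == 0) boxes are not handled. */
static int
has_moov (const uint8_t * buf, size_t len)
{
  size_t pos = 0;
  while (pos + 8 <= len) {
    uint64_t size = ((uint64_t) buf[pos] << 24) | (buf[pos + 1] << 16) |
        (buf[pos + 2] << 8) | buf[pos + 3];
    if (memcmp (buf + pos + 4, "moov", 4) == 0)
      return 1;
    if (size < 8)
      break;
    pos += size;
  }
  return 0;
}
```

A capture that was cut off without EOS would be expected to fail this check, while a file written after a clean EOS would pass; tools such as ffprobe typically report a missing moov atom for the former.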

It will work.

Glad to hear that.

Since applying only your source code modifications did not result in the expected behavior, is it okay to remove the process of sending EOS to the sink pad of nvstreamdemux in the destroy_pipeline function of deepstream-app.c?

If that process should not be removed, how should the modifications be made?

I’ve tested with the code I sent you; it works.

The destroy_pipeline() will not impact anything else. You don’t need to remove any code.

When I execute it, the output video file is corrupted.
I built an executable from the modified source code.
After starting the RTSP stream, I ran the following command with DS 6.1.1 and USE_NEW_NVSTREAMMUX set to ‘yes’.
In the configuration file, RTSP is the input and the video is written out as H.264 MP4.
Then, during execution, I pressed the ‘q’ key in the terminal.

USE_NEW_NVSTREAMMUX=yes ../deepstream-test5-app -c test5_config_file_src_infer.txt

Did you execute it using the same environment, execution commands, and configuration files as mine?


My environment, execution commands, and configuration files are as described in my initial comment.

  • Environment

  • Command
    I built an executable from the modified source code.
    After starting the RTSP stream, I ran the following command with USE_NEW_NVSTREAMMUX=yes.
    Then, during execution, I pressed the ‘q’ key in the terminal.

    USE_NEW_NVSTREAMMUX=yes ../deepstream-test5-app -c test5_config_file_src_infer.txt
    
  • Config file (test5_config_file_src_infer.txt)

    ...
    [tiled-display]
    enable=0 # change to disable
    
    [source0]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI
    type=4 # change to RTSP Type
    uri=rtsp://<IP>:554/test.mpeg4 # change to RTSP
    num-sources=2
    gpu-id=0
    nvbuf-memory-type=0
    
    [source1]
    enable=1
    #Type - 1=CameraV4L2 2=URI 3=MultiURI
    type=4 # change to RTSP Type
    uri=rtsp://<IP>:554/test.mpeg4 # change to RTSP
    num-sources=2
    gpu-id=0
    nvbuf-memory-type=0
    
    [sink0]
    enable=0 # change to disable
    #Type - 1=FakeSink 2=EglSink 3=File
    type=2
    
    [sink1]
    enable=0 # change to disable
    #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
    type=6
    
    [sink2]
    enable=1 # change to enable
    type=3
    #1=mp4 2=mkv
    container=1
    #1=h264 2=h265 3=mpeg4
    ## only SW mpeg4 is supported right now.
    codec=1 # change to 1
    sync=1
    bitrate=2000000
    output-file=out1.mp4 # change to file name
    source-id=0
    
    [sink3] # add sink group
    enable=1
    type=3
    #1=mp4 2=mkv
    container=1
    #1=h264 2=h265 3=mpeg4
    ## only SW mpeg4 is supported right now.
    codec=1
    sync=1
    bitrate=2000000
    output-file=out2.mp4
    source-id=1
    ...
    

The issue can be reproduced only with “USE_NEW_NVSTREAMMUX=yes” and the nvstreamdemux configuration enabled. With the nvmultistreamtiler configuration, the saved MP4 file is OK.
The issue seems to be related to the new nvstreamdemux.

We will investigate it.

Thank you for investigating.
How is the investigation going?

The team is working on the issue.

How is the investigation going?

Please wait for a future release.