Synchronize output of different pipelines

• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 7.0
• JetPack Version (valid for Jetson only) : 6.0
• TensorRT Version : 8.6.2.3
• Issue Type( questions, new requirements, bugs) : questions

I have the following GStreamer pipeline, written in Python using DeepStream (shown here as the equivalent gst-launch-1.0 command):

gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 \
  nvstreammux name=m width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 ! tee name=t \
  t. ! queue max-size-time=1000000000 ! nvv4l2h265enc iframeinterval=60 ! h265parse ! splitmuxsink location=/tmp/data/video_R%02d.h265 max-size-time=1000000000 \
  t. ! queue max-size-time=1000000000 ! videorate ! capsfilter caps="video/x-raw(memory:NVMM), framerate=1/1, format=NV12" ! nvvideoconvert ! nvv4l2h264enc iframeinterval=1 ! h264parse ! splitmuxsink location=/tmp/data/video_R%02d.h264 max-size-time=1000000000
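
Since the application is written in Python, the same pipeline can be built with Gst.parse_launch. A minimal sketch of the surrounding boilerplate, assuming the /tmp/data directory exists (the pipeline description is the one from the command above):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Same description as the gst-launch-1.0 command above
pipeline = Gst.parse_launch(
    "filesrc location=/opt/nvidia/deepstream/deepstream-7.0/samples/streams/sample_720p.h264 "
    "! h264parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m width=1920 height=1080 batch-size=1 batched-push-timeout=4000000 ! tee name=t "
    "t. ! queue max-size-time=1000000000 ! nvv4l2h265enc iframeinterval=60 ! h265parse "
    "! splitmuxsink location=/tmp/data/video_R%02d.h265 max-size-time=1000000000 "
    "t. ! queue max-size-time=1000000000 ! videorate "
    "! video/x-raw(memory:NVMM), framerate=1/1, format=NV12 ! nvvideoconvert "
    "! nvv4l2h264enc iframeinterval=1 ! h264parse "
    "! splitmuxsink location=/tmp/data/video_R%02d.h264 max-size-time=1000000000"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()

# Quit the main loop on end-of-stream or error
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message::eos', lambda *args: loop.quit())
bus.connect('message::error', lambda *args: loop.quit())

try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)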

  1. H264 and H265 files are saved correctly; however, H265 files are saved faster. This means that when I stop the video at a random time, there are consistently more H265 files saved than H264 files. I want to achieve simultaneous saving of H264 and H265 files at a consistent interval of every 1 second.

For example, after 35 seconds of video, there should be 35 H265 files and 35 H264 files, but currently, there are 35 H265 files and only 30 H264 files with the Gstreamer pipeline above.

  2. Additionally, I want to save .txt files simultaneously, each containing some text. However, when I connected the probe function to the src pad of the h265parse element in the H265 branch, the .txt files were saved too quickly and in excessive quantities, and they were not synchronized with the H265 files. Should I create another branch from the tee to save these .txt files, or is there a better way to connect the probe function? (See the sketch after the probe function below.)

The probe function is simple: it just writes the current time to a .txt file:

import time
from gi.repository import Gst

def extract_time(pad, info, user_data):
    # Write the current wall-clock time into a uniquely named .txt file
    current_time = time.time()

    with open(f'/tmp/data/{current_time}.txt', 'w') as f:
        f.write(f'{current_time}')

    return Gst.PadProbeReturn.OK
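
One alternative to a pad probe, if the goal is exactly one .txt per video fragment, is splitmuxsink's format-location signal, which fires once each time a new output file is opened. A minimal sketch, assuming a pipeline object as in the Python sketch above and a splitmuxsink named h265mux (the element name and paths are illustrative):

import time
from gi.repository import Gst

def on_format_location(splitmux, fragment_id):
    # Called once per fragment, when splitmuxsink opens a new file,
    # so exactly one .txt is written per video file.
    current_time = time.time()
    with open(f'/tmp/data/{current_time}.txt', 'w') as f:
        f.write(f'{current_time}')
    # Returning a path overrides the 'location' pattern for this fragment
    return f'/tmp/data/video_R{fragment_id:02d}.h265'

h265mux = pipeline.get_by_name('h265mux')  # requires name=h265mux on the element
h265mux.connect('format-location', on_format_location)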

To narrow down this issue: if there is only the H265 branch in the pipeline, will there be 35 H265 files after 35 seconds?

When I run:

timeout 35 gst-launch-1.0 ..

with the H265 pipeline, it actually renders 39 H265 files instead of the expected 35. It appears that files are not being generated every second as intended.

When I run the same command with the H264 pipeline, it renders 34 H264 files, which is one less than expected. This is likely because the timeout is set to 35 seconds, so the last file doesn’t have enough time to be saved.
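
For reference, the expected file count follows directly from max-size-time (1000000000 ns = 1 s):

max_size_time_ns = 1_000_000_000              # splitmuxsink property: 1 second per file
duration_s = 35                               # runtime enforced by `timeout 35`
print(duration_s / (max_size_time_ns / 1e9))  # -> 35.0 expected files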

How can I synchronize both pipelines to generate files exactly every 1 second at the same time?

Additionally, how can I save the .txt file mentioned earlier before saving the H264 and H265 files? Is there a way to specify the order of saving, or could I send some information in the probe function to achieve this?

You can use nvv4l2h265enc iframeinterval=1 to create the expected number of files.

When I set framerate=60/1 in the caps filter, shouldn't I also set iframeinterval to 60? With iframeinterval set to 1, the video quality seems to drop because every frame is an I-frame, right? What I want to achieve is to generate an I-frame every 60 frames.

I checked the documentation, which says:

iframeinterval - Sets encoding intra-frame occurrence frequency. - Unsigned Integer

But what exactly is the frequency? Is it the number of frames after which an I-frame will be generated?
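
If iframeinterval is read as the number of frames between consecutive I-frames, the period works out as follows (a quick sanity check, not an authoritative definition):

fps = 60              # framerate negotiated in the caps filter
iframeinterval = 60   # encoder property under discussion
print(iframeinterval / fps)  # -> 1.0: one I-frame per second, matching max-size-time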

I’ve noticed that when the framerate is set to 60/1 and the iframeinterval is set to 60, the encoder occasionally drops some frames.

For example, when I run the given GStreamer command, after producing a few 1-second videos, there is sometimes a video that is too short and contains only a few frames. This seems to be an encoder error. Should I specify any additional properties for the encoder to ensure it works correctly?

When I set iframeinterval to 1, it worked as expected; however, every frame was encoded as an I-frame. I want to configure the encoder so that only the first frame is an I-frame and the subsequent frames are P-frames. How can I achieve this?


New update


If you run this command with videotestsrc, you will notice that vid_00.h264, vid_01.h264, vid_02.h264, and vid_03.h264 are fine 1-second videos. However, vid_04.h264 is too short, and vid_00.h264 is also a little too long.

gst-launch-1.0 videotestsrc num-buffers=500 ! 'video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc iframeinterval=60 ! h264parse ! splitmuxsink location=vid_%02d.h264 max-size-time=1000000000

The result is as follows:

[Screenshot: directory listing of the generated vid_*.h264 files with their sizes]

You can clearly see that vid_04.h264 is about five times smaller than the other files, which are around 500 KB each. Additionally, the first file, vid_00.h264, is slightly larger than expected. Could this be an encoder issue? This situation, where one of the files is generated too small and too short, repeats periodically.

If you change num-buffers to, for example, 1300, you will notice that this situation repeats every 5 videos.
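
One plausible explanation, stated purely as an assumption: if the encoder's default idrinterval is 256, each IDR restarts the 60-frame I-frame cadence, and splitmuxsink packs whole GOPs into each fragment, then keyframes land at frames 0, 60, 120, 180, 240, 256, 316, ... and every fifth GOP is only 16 frames long (4 × 60 + 16 = 256). A toy model of that pattern:

# Toy model (assumptions: default idrinterval=256, an IDR restarts the
# I-frame cadence, and splitmuxsink can only cut on these keyframes).
fps, iframeinterval, idrinterval = 60, 60, 256
num_frames = 1300  # num-buffers from the experiment

keyframes, last_idr, k = [0], 0, 0
while k < num_frames:
    next_idr = last_idr + idrinterval
    k = min(k + iframeinterval, next_idr)  # whichever comes first
    if k == next_idr:
        last_idr = k                       # IDR restarts the cadence
    keyframes.append(k)

# GOP lengths in seconds equal fragment lengths, since one 1-second GOP
# already fills max-size-time
gops = [(b - a) / fps for a, b in zip(keyframes, keyframes[1:])]
print(gops)  # -> [1.0, 1.0, 1.0, 1.0, 0.266..., 1.0, ...] every 5th file short

This would also explain why aligning idrinterval with the GOP length removes the short files.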

You can use nvv4l2h265enc idrinterval=60; the file count is then the expected number.

Using idrinterval=60 can fix this issue.

So should idrinterval=60 be used together with iframeinterval=60, so that the pipeline looks like:

... ! nvv4l2h265enc idrinterval=60 iframeinterval=60 ! ...

or should I just use idrinterval=60 without iframeinterval?

I think that using idrinterval=60 alone does not fix the issue, since the video files still have different sizes.
Running the command with only idrinterval=60 generated 27 files with a timeout of 20 seconds.

I tested the following pipeline. One 48-second file is converted into 48 files, and there is no big difference in file size. log-0802.txt (283.8 KB)

gst-launch-1.0 -v filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1920 height=1080 batch-size=1 ! tee name=t t. ! nvv4l2h264enc idrinterval=30 ! h264parse ! splitmuxsink location=video_R%02d.h264 max-size-time=1000000000 

Thank you for the solution.
Your pipeline does indeed work, and the files are saved correctly; however, the framerate of sample_720p.h264 is 30/1.

When I apply your solution to my case, with a camera framerate of 60/1 and idrinterval set to 60, each video lasts 0.5 seconds instead of 1 second.

Update

After I set iframeinterval=60 and idrinterval=60, the videos are saved correctly and each lasts 1 second. Is this the correct solution for a camera/video with a framerate of 60/1?
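
In a Python application, the same two settings can be applied directly on the encoder element. A minimal sketch, assuming a 60 fps source and an encoder named enc (the element name is illustrative):

# Match both the GOP length and the IDR period to the source framerate,
# so every 1-second fragment starts on an IDR frame.
enc = pipeline.get_by_name('enc')
enc.set_property('iframeinterval', 60)
enc.set_property('idrinterval', 60)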

It depends on splitmuxsink's implementation. I-frames and IDR frames are different: as you know, an I-frame is intra-coded, while an IDR frame is a special kind of I-frame.

Ok, so I tested this pipeline:

gst-launch-1.0 -v filesrc location=60_fps_tester.h264 ! 'video/x-h264, width=1920, height=1080, framerate=60/1' ! h264parse ! nvv4l2decoder ! tee name=t t. ! nvv4l2h265enc iframeinterval=60 idrinterval=60 ! h265parse ! splitmuxsink location=data/video_R%02d.h265 max-size-time=1000000000

on a test 60 FPS video that I downloaded from YouTube and converted to H264: https://www.youtube.com/watch?v=Cyxixzi2dgQ

and this pipeline works correctly. Thanks!
