GStreamer pad offsets are only applied if the stream is re-encoded

Hi,

I’m facing a strange issue with GStreamer on Jetson Nano.

My application records a video stream with audio to an H264-encoded MKV file, and it also has to keep the few seconds of video captured before the user triggers recording. For this I have a constantly running pipeline in which I block the src pads of the queue elements (blocking_audio_queue and blocking_video_queue) placed before the mux.
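In application code the blocking part looks roughly like this (a minimal Python/PyGObject sketch, assuming the pipeline below is built with Gst.parse_launch() and using the queue names from it; the helper names are just illustrative):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def keep_blocked(pad, info, user_data):
    # Returning OK from a blocking probe keeps the pad blocked until the
    # probe is removed with pad.remove_probe().
    return Gst.PadProbeReturn.OK

def block_prerecord_queues(pipeline):
    # Block the src pads of the two queues so buffers pile up inside them
    # instead of reaching the muxer; returns the probe ids needed to unblock.
    probes = {}
    for name in ("blocking_video_queue", "blocking_audio_queue"):
        pad = pipeline.get_by_name(name).get_static_pad("src")
        probes[name] = (pad, pad.add_probe(
            Gst.PadProbeType.BLOCK_DOWNSTREAM, keep_blocked, None))
    return probes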

My pipeline:

matroskamux name=output_queue_mux\
    ! filesink name=output_queue_filesink location=/data/videos_staging/0_1683496385.mkv\
udpsrc multicast-group=0.0.0.0 auto-multicast=true port=56001 name=input_queue_video_udpsrc\
    ! application/x-rtp,media=video,encoding-name=H264,payload=96,clock-rate=90000,packetization-mode=1,profile-level-id=424015,sprop-parameter-sets="Z0JAKJWgHgCJ+VA=,aM48gA==",src=1728323247,timestamp-offset=2499875162,seqnum-offset=11758,a-framerate=30\
    ! rtpjitterbuffer name=input_queue_video_rtpjitterbuffer ! rtph264depay name=input_queue_video_rtpdepay\
    ! h264parse name=input_queue_video_parse ! queue name=blocking_video_queue\
    ! omxh264dec ! videoconvert ! omxh264enc bitrate=8000000 control-rate=2 insert-sps-pps=true\
    ! output_queue_mux.\
udpsrc multicast-group=0.0.0.0 auto-multicast=true port=51000 name=input_queue_audio_udpsrc\
    ! application/x-rtp,media=audio,clock-rate=44100,encoding-name=L24,encoding-params=1,channels=1,payload=96,ssrc=687131883,timestamp-offset=3784732336,seqnum-offset=8272\
    ! rtpL24depay name=input_queue_audio_rtpL24depay\
    ! audioconvert name=input_queue_audio_audioconvert\
    ! input_queue_audio_adder. 
audiotestsrc wave=silence name=input_queue_audio_audiotestsrc\
    ! audiomixer name=input_queue_audio_adder\
    ! audioresample name=input_queue_audio_audioresample ! voaacenc name=input_queue_audio_voaacenc\
    ! queue name=blocking_audio_queue ! output_queue_mux.

My source pipeline which generates the video stream:

nvarguscamerasrc sensor-id=0 sensor-mode=0 exposuretimerange=135000 2600000 gainrange=1 16 ispdigitalgainrange=1 5 name=dcd_nvarguscamerasrc\
    ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1\
    ! nvvidconv ! nvivafilter cuda-process=true customer-lib-name=libdcd_overlay_1080p.so ! video/x-raw(memory:NVMM), format=(string)NV12\
    ! nvvidconv ! omxh264enc bitrate=8000000 control-rate=2 insert-sps-pps=true\
    ! rtph264pay mtu=1400 ! udpsink auto-multicast=true clients=192.168.100.101:56000

My problem with the recording pipeline was that the length of the generated video equalled the absolute running time of the pipeline: if the pipeline had been running for an hour and the user started a one-minute recording at the end, the produced file was one hour long, with 59 minutes of emptiness (a frozen frame at the beginning) and only one minute of real data at the end.

To overcome this, I adjust the pad offsets on the audio and video pads feeding the muxer.
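The offset adjustment itself is along these lines (again a Python/PyGObject sketch using Gst.Pad.set_offset(); here I show it on the muxer's sink pads, with the offset computed from the pipeline's current running time, and the helper name is made up):

def rebase_mux_pads(pipeline):
    # Running time elapsed since the pipeline went to PLAYING.
    running_time = pipeline.get_clock().get_time() - pipeline.get_base_time()
    mux = pipeline.get_by_name("output_queue_mux")
    for pad in mux.sinkpads:
        # A negative offset shifts the incoming running time back towards 0,
        # so the muxed file starts near zero instead of at the pipeline's
        # absolute running time.
        pad.set_offset(-running_time)

I call this right before removing the blocking probes, i.e. at the moment the user starts the recording.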

And here comes the strange part: this offsetting trick only works if I re-encode the video, i.e. if I keep the extra “omxh264dec ! videoconvert ! omxh264enc bitrate=8000000 control-rate=2 insert-sps-pps=true” branch, which should be redundant (without it the video is still fine apart from the messed-up length) and which wastes resources.

So I’m guessing that omxh264enc is adding some secret sauce that I’m not aware of (I was not able to find anything meaningful in the GST INFO log and got lost in the DEBUG output).

My goal would be to remove this re-encoding step.

Do you have any idea what the difference is when the re-encoding is not there? Or is there an option elsewhere in the pipeline that would help?

Thank you!
Best,
Peter

I have thought about hlssink and changing the max-files value for “pre-recording”, but this method is not CLI friendly and I’m not sure what other issues you may run into.
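Something roughly like this is what I had in mind (an untested sketch only; element and property names, paths and values should be double-checked with gst-inspect-1.0):

... ! h264parse ! mpegtsmux ! hlssink max-files=10 target-duration=1\
    location=/data/videos_staging/segment%05d.ts playlist-location=/data/videos_staging/playlist.m3u8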

Hi, yeah, unfortunately I have strict requirements for the pre-recording (the number of seconds should be configurable, etc.), so periodic file writers won’t be flexible enough.

To elaborate on why I think the re-encoding is doing some magic in the background:

  • I’m blocking the queue named blocking_video_queue just before the omxh264dec element,
  • and I reset the timestamp offset right after the omxh264enc, on the output_queue_mux matroskamux.

Therefore the buffers pile up just before the re-encoding step, so when I unblock the queue they are the first buffers to reach and be processed by omxh264dec and omxh264enc; neither element has processed any buffer before that point in time.

This is why I suspect that during this re-encoding phase one of these elements (omxh264dec, videoconvert, omxh264enc) must be resetting some additional timestamp or internal sequence number that I’m not aware of. If I could find out which property I should reset, I could set it myself and get rid of this redundant “re-encoding” step.
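If it helps, this is how I plan to compare the two cases (with and without the re-encode): a buffer probe on the muxer's video sink pad that dumps the timestamps actually arriving there (Python/PyGObject sketch, helper name made up, values printed in nanoseconds):

def log_pad_timestamps(pad, tag):
    # Print PTS, running time and the current pad offset for every buffer
    # passing this pad, e.g. a sink pad of output_queue_mux.
    def on_buffer(pad, info, user_data):
        buf = info.get_buffer()
        running = Gst.CLOCK_TIME_NONE
        ev = pad.get_sticky_event(Gst.EventType.SEGMENT, 0)
        if ev is not None and buf.pts != Gst.CLOCK_TIME_NONE:
            segment = ev.parse_segment()
            running = segment.to_running_time(Gst.Format.TIME, buf.pts)
        print("%s pts=%d running=%d pad_offset=%d" %
              (tag, buf.pts, running, pad.get_offset()))
        return Gst.PadProbeReturn.OK
    return pad.add_probe(Gst.PadProbeType.BUFFER, on_buffer, None)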

Hi,
The omx plugins are deprecated; we suggest using the v4l2 plugins such as nvv4l2h264enc and nvv4l2decoder instead. These plugins are also open source, so for further debugging you can add debug prints for the timestamps and rebuild the plugins manually.

The source code is in
Jetson Linux R32.7.3 | NVIDIA Developer
Driver Package (BSP) Sources
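
For reference, the extra re-encode branch would map to the v4l2 plugins roughly like this (an untested sketch; please verify the exact property names with gst-inspect-1.0):

... ! queue name=blocking_video_queue\
    ! nvv4l2decoder ! nvvidconv ! nvv4l2h264enc bitrate=8000000 insert-sps-pps=true\
    ! output_queue_mux.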
