Hi,
I’m facing a strange issue with GStreamer on Jetson Nano.
My application records streaming video with audio to an H.264-encoded MKV file, while also keeping a few seconds of video from before the user triggers recording. For this I have a constantly running pipeline in which I block the src pads of the queue elements (blocking_audio_queue and blocking_video_queue) placed right before the muxer.
My recording pipeline:
matroskamux name=output_queue_mux\
! filesink name=output_queue_filesink location=/data/videos_staging/0_1683496385.mkv\
udpsrc multicast-group=0.0.0.0 auto-multicast=true port=56001 name=input_queue_video_udpsrc\
! application/x-rtp,media=video,encoding-name=H264,payload=96,clock-rate=90000,packetization-mode=1,profile-level-id=424015,sprop-parameter-sets="Z0JAKJWgHgCJ+VA=,aM48gA==",src=1728323247,timestamp-offset=2499875162,seqnum-offset=11758,a-framerate=30\
! rtpjitterbuffer name=input_queue_video_rtpjitterbuffer ! rtph264depay name=input_queue_video_rtpdepay\
! h264parse name=input_queue_video_parse ! queue name=blocking_video_queue\
! omxh264dec ! videoconvert ! omxh264enc bitrate=8000000 control-rate=2 insert-sps-pps=true\
! output_queue_mux.\
udpsrc multicast-group=0.0.0.0 auto-multicast=true port=51000 name=input_queue_audio_udpsrc\
! application/x-rtp,media=audio,clock-rate=44100,encoding-name=L24,encoding-params=1,channels=1,payload=96,ssrc=687131883,timestamp-offset=3784732336,seqnum-offset=8272\
! rtpL24depay name=input_queue_audio_rtpL24depay\
! audioconvert name=input_queue_audio_audioconvert\
! input_queue_audio_adder.
audiotestsrc wave=silence name=input_queue_audio_audiotestsrc\
! audiomixer name=input_queue_audio_adder\
! audioresample name=input_queue_audio_audioresample ! voaacenc name=input_queue_audio_voaacenc\
! queue name=blocking_audio_queue ! output_queue_mux.
My source pipeline which generates the video stream:
nvarguscamerasrc sensor-id=0 sensor-mode=0 exposuretimerange="135000 2600000" gainrange="1 16" ispdigitalgainrange="1 5" name=dcd_nvarguscamerasrc\
! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1\
! nvvidconv ! nvivafilter cuda-process=true customer-lib-name=libdcd_overlay_1080p.so ! video/x-raw(memory:NVMM), format=(string)NV12\
! nvvidconv ! omxh264enc bitrate=8000000 control-rate=2 insert-sps-pps=true\
! rtph264pay mtu=1400 ! udpsink auto-multicast=true clients=192.168.100.101:56000
My problem with the recording pipeline was that the generated video's length was the absolute running time of the pipeline: if the pipeline had been running for an hour and the user started a one-minute recording at the end, the produced file was one hour long, with 59 minutes of emptiness (a frozen frame at the beginning) and only one minute of real data at the end.
To overcome this, I adjust the offset on the audio and video src pads going into the muxer.
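The offset itself is just the negative of the pipeline's running time at the moment recording starts, so the first muxed buffer lands near t=0 (plain-arithmetic sketch; the nanosecond values are illustrative, and the result is what gets applied via gst_pad_set_offset()):

```python
def recording_offset_ns(clock_time_ns, base_time_ns):
    """Offset to apply on the pads so buffers start near zero."""
    # Running time = absolute clock time minus the pipeline's base time.
    running_time = clock_time_ns - base_time_ns
    # A negative offset shifts buffer running times back toward zero.
    return -running_time

# One hour into the stream, the offset cancels the elapsed hour:
hour_ns = 3_600 * 1_000_000_000
print(recording_offset_ns(clock_time_ns=hour_ns, base_time_ns=0))
```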
And here comes the strange part: this offsetting trick only works if I re-encode the video, i.e. add an extra "omxh264dec ! videoconvert ! omxh264enc bitrate=8000000 control-rate=2 insert-sps-pps=true" stage. This should be redundant (without it the video is still fine apart from the messed-up length), and it wastes resources.
So I'm guessing that omxh264enc adds some secret sauce I'm not aware of (I could not find anything meaningful in the GST INFO log and got lost in the DEBUG output).
My goal would be to remove this re-encoding step.
Do you have any idea what the difference is when the re-encoding step is not there? Or is there any option elsewhere in the pipeline that would help?
Thank you!
Best,
Peter