GStreamer RTSP stream save-while-preview error on Jetson Nano

Hi,

I am trying to encode a 1080p USB stream with audio, and save an RTSP stream encoded in H.264 at 1080p, while previewing both. I tried the command below with jetson_clocks on, but I get the warning below continuously and the encoded stream is heavily distorted.

WARNING: from element /GstPipeline:pipeline0/GstPulseSrc:pulsesrc0: Can’t record audio fast enough
Additional debug info:
gstaudiobasesrc.c(849): gst_audio_base_src_create (): /GstPipeline:pipeline0/GstPulseSrc:pulsesrc0:
Dropped 40572 samples. This is most likely because downstream can’t keep up and is consuming samples too slowly.

Below is the GStreamer command used:

gst-launch-1.0 -e v4l2src device=/dev/video1 ! tee name=t1 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! queue ! h264parse ! queue ! mux. pulsesrc device="alsa_input.usb-VXIS_Inc_ezcap_U3_capture-02.analog-stereo" ! queue ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! audioconvert ! voaacenc ! aacparse ! mp4mux name=mux ! filesink location=feed1rtsp1080.mp4 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! "video/x-raw(memory:NVMM),width=503,height=250,framerate=30/1,format=NV12" ! queue ! nvoverlaysink overlay-x=0 overlay-y=50 overlay-w=503 overlay-h=250 overlay=1 rtspsrc location=rtsp://172.16.20.232:554/stream/main ! tee name=t2 t2. ! queue ! rtph264depay ! queue ! h264parse ! mp4mux ! filesink location=feed2rtsp1080.mp4 t2. ! queue ! rtph264depay ! queue ! h264parse ! queue ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),format=NV12" ! queue ! nvoverlaysink overlay-x=504 overlay-y=50 overlay-w=504 overlay-h=250 overlay=2

I also tried the command below, which works fine but does not preview the RTSP stream:

gst-launch-1.0 -e v4l2src device=/dev/video1 ! tee name=t1 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! queue ! h264parse ! queue ! mux. pulsesrc device="alsa_input.usb-VXIS_Inc_ezcap_U3_capture-02.analog-stereo" ! queue ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! audioconvert ! voaacenc ! aacparse ! mp4mux name=mux ! filesink location=/mnt/00708148708144FE/jetson/feed1rtsp1080.mp4 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! "video/x-raw(memory:NVMM),width=503,height=250,framerate=30/1,format=NV12" ! queue ! nvoverlaysink overlay-x=0 overlay-y=50 overlay-w=503 overlay-h=250 overlay=1 rtspsrc location=rtsp://172.16.20.232:554/stream/main ! tee name=t2 t2. ! queue ! rtph264depay ! queue ! h264parse ! mp4mux ! filesink location=/mnt/00708148708144FE/jetson/feed2rtsp1080.mp4

I need to encode and save the 1080p USB stream with audio, and save the RTSP stream at 1080p, while previewing both.

Thank You

This might be related to your re-scaling. Be sure to roughly preserve the frame aspect ratio, and preferably use resolutions with width and height divisible by 4 (avoid odd values).
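
For example, scaling the 1080p source down to 480x270 keeps the 16:9 aspect ratio with even dimensions. A minimal sketch of such a preview branch (the exact size is just an illustration):

... ! nvvidconv ! 'video/x-raw(memory:NVMM), width=480, height=270, format=NV12' ! nvoverlaysink overlay-x=0 overlay-y=50 overlay-w=480 overlay-h=270 overlay=1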

I just tried with a videotestsrc input and a test-launch RTSP server serving nvarguscamera at 640x480 resolution @ 30 fps.
I also adapted the audio to alsasrc because I'm not familiar with pulseaudio.
I have to say that your initial pipeline has some potential to lead to the red screen of death... I had to reboot twice after trying it.
The following pipeline seems to work (audio not tested). You may try it as a starting point and check where it fails when adapting:

 gst-launch-1.0 -e videotestsrc is-live=true ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! tee name=t1 ! queue ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! queue ! h264parse ! queue ! mux.      alsasrc ! queue ! voaacenc ! mp4mux name=mux ! filesink location=feed1rtsp1080.mp4      t1. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM), width=480, height=270, framerate=30/1, format=NV12' ! queue ! nvoverlaysink overlay-x=0 overlay-y=50 overlay-w=480 overlay-h=270 overlay=1    rtspsrc location=rtsp://127.0.0.1:8554/test ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! tee name=t2 ! queue ! h264parse ! mp4mux ! filesink location=feed2rtsp480.mp4     t2. ! queue ! h264parse ! nvv4l2decoder ! nvoverlaysink overlay-x=480 overlay-y=50 overlay-w=640 overlay-h=480 overlay=2
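
For reference, the test server was the test-launch example from gst-rtsp-server, serving rtsp://127.0.0.1:8554/test. A sketch of the kind of command used (the encoder settings here are assumptions, not necessarily the exact ones from my run):

./test-launch "( nvarguscamerasrc ! video/x-raw(memory:NVMM), width=640, height=480, framerate=30/1 ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96 )"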

Thank You for your reply @Honey_Patouceul

I have tried the above with two 1080p USB streams on the Jetson Nano and it works fine without any errors. In this scenario GStreamer uses the Nano's hardware encoder for the video and uses the CPU only for saving the audio.

When one input is RTSP, it is already encoded, so the CPU is used only for saving it; the issue arises only when I try to show the preview of the RTSP camera.
The pipeline below also works fine:

gst-launch-1.0 -e v4l2src device=/dev/video1 ! tee name=t1 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! queue ! h264parse ! queue ! mux. pulsesrc device="alsa_input.usb-VXIS_Inc_ezcap_U3_capture-02.analog-stereo" ! queue ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! audioconvert ! voaacenc ! aacparse ! mp4mux name=mux ! filesink location=feed1rtsp1080.mp4 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! "video/x-raw(memory:NVMM),width=480,height=270,framerate=30/1,format=NV12" ! queue ! nvoverlaysink overlay-x=0 overlay-y=50 overlay-w=480 overlay-h=270 overlay=1 rtspsrc location=rtsp://172.16.20.232:554/stream/main ! rtph264depay ! tee name=t2 t2. ! h264parse ! mp4mux ! filesink location=feed2rtsp1080.mp4 t2. ! queue ! rtph264depay ! queue ! h264parse ! queue ! nvv4l2decoder ! fakesink

But as soon as the nvoverlaysink preview for the RTSP stream is introduced, the encoding does not happen and the warning appears:

WARNING: from element /GstPipeline:pipeline0/GstPulseSrc:pulsesrc0: Can’t record audio fast enough
Additional debug info:
gstaudiobasesrc.c(849): gst_audio_base_src_create (): /GstPipeline:pipeline0/GstPulseSrc:pulsesrc0:
Dropped 40572 samples. This is most likely because downstream can’t keep up and is consuming samples too slowly.

The issue may be the CPU usage for both RTSP saving and audio saving:

gst-launch-1.0 -e v4l2src device=/dev/video1 ! tee name=t1 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! queue ! h264parse ! queue ! mux. pulsesrc device="alsa_input.usb-VXIS_Inc_ezcap_U3_capture-02.analog-stereo" ! queue ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! audioconvert ! voaacenc ! aacparse ! mp4mux name=mux ! filesink location=feed1rtsp1080.mp4 t1. ! queue ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! queue ! "video/x-raw(memory:NVMM),width=480,height=270,framerate=30/1,format=NV12" ! queue ! nvoverlaysink overlay-x=0 overlay-y=50 overlay-w=480 overlay-h=270 overlay=1 rtspsrc location=rtsp://172.16.20.232:554/stream/main ! rtph264depay ! tee name=t2 t2. ! h264parse ! mp4mux ! filesink location=feed2rtsp1080.mp4 t2. ! queue ! rtph264depay ! queue ! h264parse ! queue ! nvv4l2decoder ! nvoverlaysink overlay-x=481 overlay-y=50 overlay-w=480 overlay-h=270 overlay=2
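
To confirm this, I can watch the per-core CPU load in another terminal while the pipeline runs (tegrastats ships with JetPack):

# prints RAM, per-core CPU load and GPU utilization, refreshed about once per second
sudo tegrastats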

You could try increasing the audio buffers so that they accommodate the high latency of the pipeline. The following works for me:

gst-launch-1.0 -e \
mp4mux name=mux ! filesink location=feed1local_av1080.mp4 \
pulsesrc buffer-time=5000000 ! voaacenc ! queue ! mux.audio_0    \
videotestsrc is-live=true ! video/x-raw, width=1920, height=1080, framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! tee name=t1    \
t1. ! queue ! nvv4l2h264enc maxperf-enable=1 bitrate=4000000 profile=4 ! h264parse ! queue ! mux.video_0    \
t1. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM), width=480, height=270, framerate=30/1, format=NV12' ! nvoverlaysink overlay-x=0 overlay-y=50 overlay-w=480 overlay-h=270 overlay=1    \
rtspsrc location=rtsp://127.0.0.1:8554/test ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! tee name=t2     \
t2. ! queue ! h264parse ! mp4mux ! filesink location=feed2rtsp1080.mp4     \
t2. ! queue ! h264parse ! nvv4l2decoder ! nvoverlaysink overlay-x=530 overlay-y=50 overlay-w=640 overlay-h=480 overlay=2
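
As a sketch only (not tested here), adapting the audio branch back to your USB capture device would mainly mean carrying the larger buffer over, e.g.:

pulsesrc device="alsa_input.usb-VXIS_Inc_ezcap_U3_capture-02.analog-stereo" buffer-time=5000000 ! audioconvert ! voaacenc ! queue ! mux.audio_0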

Thank You for the reply @Honey_Patouceul

I tried the above method and now there is no warning, but the encoded video is losing a lot of frames.

Not sure how you measure that.
In my case, both:

gst-launch-1.0 -v filesrc location= ./feed1local_av1080.mp4 ! qtdemux name=demux     demux.video_0 ! queue ! h264parse ! nvv4l2decoder ! fpsdisplaysink video-sink=nvoverlaysink text-overlay=0   demux.audio_0 ! queue ! aacparse ! avdec_aac ! pulsesink

# and
gst-launch-1.0 -v filesrc location= ./feed2rtsp1080.mp4 ! qtdemux name=demux     demux.video_0 ! queue ! h264parse ! nvv4l2decoder ! fpsdisplaysink video-sink=nvoverlaysink text-overlay=0

show no dropped frames. Of course, the RTSP stream may take about 2 seconds to become available. You may try to reduce this to 200-500 ms with the latency option of rtspsrc, depending on your camera connection.
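
For example (the 300 ms value is just an illustration to be tuned for your network):

rtspsrc location=rtsp://172.16.20.232:554/stream/main latency=300 ! application/x-rtp, media=video, encoding-name=H264 ! rtph264depay ! ...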

Not sure this is related, but for completeness also note that I'm running this from an NVMe rootfs.


Thank you for the help, @Honey_Patouceul. I will look into this.