We’ve been experimenting with DeepStream and GStreamer pipelines on our Jetson Xavier and Nano devices, and we are running into some technical issues/questions about how to properly leverage GPU-accelerated frame decoding/conversion. We’re attempting to decode an RTSP stream coming from an IP (PoE) camera.
These gst-launch pipelines definitely work with the IP camera on the Jetson hardware, although at least the first two rely on CPU decoding (avdec_h264). They have let us experiment with basic pipelines and validate that we can get data from the camera (see the note after this list for how we have been checking whether the hardware decoder is engaged):
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=300 ! rtph264depay ! avdec_h264 ! xvimagesink -e
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=300 ! rtph264depay ! avdec_h264 ! autovideosink
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! queue ! h264parse ! omxh264dec ! nvvidconv ! nv3dsink
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! queue ! h264parse ! omxh264dec ! nveglglessink -e
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! queue ! h264parse ! omxh264dec ! autovideosink
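For what it’s worth, this is how we have been checking whether the hardware decoder (NVDEC) is actually engaged while one of these pipelines runs; we believe tegrastats only reports an NVDEC entry while that engine is active, though we are not certain the output format is identical across JetPack versions:
- sudo tegrastats (look for an NVDEC entry appearing while the pipeline is playing)
- top (a software decoder such as avdec_h264 shows up as high CPU load on the gst-launch-1.0 process instead)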
Side note: this next pipeline, which may use hardware acceleration (we’re still a bit fuzzy on what omxh264dec actually does), seems to drop buffers every few frames when run with a lower latency value (300) compared to the 1000 used above. We’d love to better understand why this happens, if you can provide any insight (one variant we plan to try is sketched after the warning output below).
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=300 ! rtph264depay ! queue ! h264parse ! omxh264dec ! nveglglessink -e
WARNING: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0:
There may be a timestamping problem, or this computer is too slow.
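One variant we intend to try (our own guess, not something from the guide) is disabling clock synchronization at the sink, since the warning points at timestamping rather than decode speed; sync is a standard GstBaseSink property, so it should apply to nveglglessink as well:
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=300 ! rtph264depay ! queue ! h264parse ! omxh264dec ! nveglglessink sync=false -e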
Finally, these gst-launch pipelines, which should be leveraging NVIDIA hardware to do GPU-accelerated frame decoding/conversion (following the guidance in the Accelerated GStreamer User Guide), unfortunately do not work on our Jetson hardware, and we’re trying to understand what we may be doing incorrectly (our current guesses are sketched after this list):
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nv3dsink -e
- Results in: WARNING: erroneous pipeline: could not link rtph264depay0 to qtdemux0
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nv3dsink -e
- Results in: (gst-launch-1.0:16180): GStreamer-CRITICAL **: 17:08:19.337: gst_mini_object_unref: assertion 'mini_object != NULL' failed
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! nv3dsink
- Results in: (gst-launch-1.0:16180): GStreamer-CRITICAL **: 17:08:19.337: gst_mini_object_unref: assertion 'mini_object != NULL' failed
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! autovideosink
- Results in: (gst-launch-1.0:16180): GStreamer-CRITICAL **: 17:08:19.337: gst_mini_object_unref: assertion 'mini_object != NULL' failed
- gst-launch-1.0 rtspsrc location=rtsp://192.168.0.103:554/s1 latency=1000 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nveglglessink -e
- Results in: "WARNING: erroneous pipeline: could not link nvv4l2decoder0 to eglglessink0"
Is there anything obvious we’re doing wrong here, particularly with nvv4l2decoder?
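For completeness, this is how we have been inspecting the pad capabilities of the elements involved, to try to work out which of them can link directly:
- gst-inspect-1.0 nvv4l2decoder
- gst-inspect-1.0 nveglglessink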