Hardware platform: Jetson Xavier NX
I’m using the gst-launch-1.0 utility to construct a simple pipeline that behaves like the deepstream-app example.
This pipeline converts the stream from a v4l2 camera, runs it through an nvstreammux, and displays it:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080 ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvdsosd ! nvegltransform ! nveglglessink
For some reason, this pipeline is very slow. It runs at < 1 fps, and I repeatedly get this warning:
WARNING: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0:
There may be a timestamping problem, or this computer is too slow.
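For reference, one diagnostic variant I can run is the same pipeline with sync=false on the sink (sync is a standard GstBaseSink property), to check whether the warning is purely a clock/timestamping issue rather than raw throughput; this is a test, not a fix:

```shell
# Same pipeline, but with sync=false so the sink renders buffers as they
# arrive instead of dropping them as "too late" against the pipeline clock.
# If this restores a normal frame rate, the problem is latency/timestamping
# rather than processing speed.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,width=1920,height=1080 ! \
  videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvdsosd ! nvegltransform ! nveglglessink sync=false
```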
If I simply remove nvstreammux and run the resulting pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, width=1920, height=1080 ! videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! nvdsosd ! nvegltransform ! nveglglessink
It runs at ~10 fps, which is still very slow, but much better.
It seems like nvstreammux is causing most of the slowdown, but I’m guessing this is because I’m using it incorrectly.
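One thing I have not ruled out is nvstreammux's handling of live inputs: the plugin exposes a live-source property and a batched-push-timeout property (in microseconds) intended for camera-style sources. A sketch of the pipeline with those set, assuming the standard DeepStream nvstreammux property names:

```shell
# nvstreammux with live-source=1 so the camera is treated as a live input,
# and a 33 ms batched-push-timeout (one frame interval at 30 fps) so the
# mux pushes a batch even when it would otherwise keep waiting.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,width=1920,height=1080 ! \
  videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 \
    live-source=1 batched-push-timeout=33333 ! \
  nvdsosd ! nvegltransform ! nveglglessink
```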
The videoconvert ! nvvideoconvert part doesn't make sense to me, but I couldn't get the pipeline to work without it. I copied that sequence from the DeepStream Python example that uses a USB camera as input.
- What is the proper way to use a v4l2 camera stream with nvstreammux?
- Is the videoconvert ! nvvideoconvert sequence necessary? This feels wrong.
- Is there a difference between the nvvideoconvert and nvvidconv plugins? They print different results in gst-inspect-1.0 and act differently, but I don't know when I should use one or the other.
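For completeness, here is the videoconvert-free variant I intend to test, assuming nvvideoconvert can accept system-memory video/x-raw directly from v4l2src and copy it into NVMM while converting the format (I have not confirmed this works with my camera's output format):

```shell
# Drop the CPU-side videoconvert and let nvvideoconvert do both the
# format conversion and the system-memory -> NVMM copy itself.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  video/x-raw,width=1920,height=1080 ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvdsosd ! nvegltransform ! nveglglessink
```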