The optimal pipeline is to pass video/x-raw(memory:NVMM) buffers from head to tail of the pipeline. However, nvjpegdec is not designed for continuous decoding, so it is run as ‘nvjpegdec ! video/x-raw’.
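To make the contrast concrete, here is roughly what I mean (a sketch only; the device path and nvoverlaysink are just example choices, and the first pipeline is exactly the one the advice above rules out):

```
# Ideal: decode straight into NVMM so frames never leave device memory.
gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg ! nvjpegdec ! \
  'video/x-raw(memory:NVMM)' ! nvvidconv ! nvoverlaysink

# Prescribed workaround: decode into CPU memory, then copy back into NVMM
# via nvvidconv, paying for the extra copy on every single frame.
gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg ! nvjpegdec ! \
  'video/x-raw' ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvoverlaysink
```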
Is it fair to say that, without applying the patch, there is no way to get this optimal pipeline while also using hardware JPEG decoding with GStreamer on the Nano? Given that MJPEG is the norm for USB cameras at any meaningful resolution, that makes it very hard to call USB cameras practically usable on this platform at all.
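To back up the MJPEG point: listing the formats on a typical UVC camera shows the pattern. The output below is illustrative, not from any specific device, but it matches what most USB 2.0 cameras report, since raw YUYV at 1080p exceeds the bus bandwidth and MJPG is the only way to get full frame rate:

```
v4l2-ctl -d /dev/video0 --list-formats-ext
# Typical output (illustrative):
#   [0]: 'MJPG' (Motion-JPEG, compressed)
#        1920x1080 @ 30 fps, 1280x720 @ 30 fps, ...
#   [1]: 'YUYV' (YUYV 4:2:2)
#        1920x1080 @ 5 fps, 1280x720 @ 10 fps, ...
```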
As for the patch, I just went through the whole process with a fresh install of JetPack 4.3 on a Nano dev kit (A02), using the L4T 32.3.1 public sources and the oft-mentioned patch (nvjpegdec slower then jpegdec in gstreamer - #25 by DaneLLL). The patch broke the build, so I had to diagnose the failure and get the nvbuf_utils header and library included myself. When I then deployed as instructed, the pipeline that had worked previously failed with “Bus error (core dumped)”.
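For anyone else who hits the same build break, this is roughly what I had to do to pull in nvbuf_utils. Treat it as a sketch rather than verified instructions: the paths are from a stock JetPack 4.3 / L4T 32.3.1 install and may differ on your setup, and if your copy of the gst-jpeg source uses a different build system you will need the equivalent flags there:

```
# nvbuf_utils.h ships with the Tegra Multimedia API samples on 32.3.1,
# and libnvbuf_utils.so lives in the Tegra-specific lib directory.
export CFLAGS="-I/usr/src/tegra_multimedia_api/include $CFLAGS"
export LDFLAGS="-L/usr/lib/aarch64-linux-gnu/tegra -lnvbuf_utils $LDFLAGS"
./configure && make
```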
I’m not sure why, if this can be patched, NVIDIA cannot simply provide it as a working binary. I really shouldn’t have had to spend hours finding and building all of that just to get basic usability out of a USB camera, and STILL not end up with something that works. Short of providing a binary, can you at least make the code easy to find and provide working build instructions?