I’m working with an Orin Dev Kit running JetPack 5.0.2.
If I launch the l4t base container:
docker run --rm -it --runtime nvidia -v /tmp/argus_socket:/tmp/argus_socket nvcr.io/nvidia/l4t-base:r35.1.0
and run this simple pipeline:
gst-launch-1.0 nvarguscamerasrc silent=false num-buffers=50 ! nvvidconv compute-hw=GPU ! "video/x-raw" ! fakesink
then it stalls after ~10 frames. However, if I set the compute-hw argument to VIC, it runs without any problems. If I run the same pipeline on the host Orin, it works fine with either VIC or GPU.
How can I get this working inside a container? Maybe something needs to be added to the container setup?
Not sure why you want to use GPU for this case. It performs no transform; it just copies from NVMM into system memory. My understanding is that the GPU option is more intended for format conversion/resizing/rescaling/cropping… between buffers that are both in NVMM memory.
Playing with that from or to system memory, I’ve seen some cases not working, and some working after a few seconds of weird results; but after issuing a more correct pipeline, the previous pipelines then worked fine…
Also, it seems to me that outputting into I420 format may work better.
If this is docker-specific only, maybe you would need some additional binding to X for CUDA calls from docker.
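In case the X binding is what’s missing, a sketch of forwarding the X socket and display into the l4t-base container alongside the Argus socket (the X11 paths and the `xhost` step are assumptions on my side, not verified on JetPack 5.0.2):

```shell
# Allow local containers to talk to the X server (loosen for testing only)
xhost +local:root

# Same container as before, plus DISPLAY and the X11 socket bind-mounted in,
# so EGL/CUDA initialization that expects a display can succeed
docker run --rm -it --runtime nvidia \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /tmp/argus_socket:/tmp/argus_socket \
    nvcr.io/nvidia/l4t-base:r35.1.0
```

If the GPU path then works inside the container, that would confirm the stall was a display/EGL binding issue rather than a GStreamer one.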
@Honey_Patouceul I agree that in this case it doesn’t make much sense, but this is just the simplest reproducible test case I could come up with. In practice I am running a pipeline that hits the VIC hard, and offloading to the GPU makes a noticeable difference.
Does it work better if you use nvvidconv with VIC for copying to system memory, and only use the GPU for your heavy processing?
gst-launch-1.0 nvarguscamerasrc silent=false num-buffers=50 ! nvvidconv compute-hw=GPU ! "video/x-raw(memory:NVMM)" ! nvvidconv compute-hw=VIC ! "video/x-raw" ! fakesink -v
gst-launch-1.0 nvarguscamerasrc silent=false num-buffers=50 ! nvvidconv compute-hw=GPU ! "video/x-raw,format=I420" ! fakesink -v