Hello, I have a new Jetson AGX board on which I only installed JetPack, without CUDA or any computer-vision libraries. I saw that the new L4T JetPack Docker image is supposed to include all of those libraries, so I wonder: can I run my application (which uses CUDA, cuDNN, TensorRT, DeepStream) in the L4T JetPack Docker image without installing the libraries from the SDK on the host device? Right now, my Docker container reports that it cannot find the nvv4l2decoder element.
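For reference, a quick way to check whether the decoder plugin is visible is to query the GStreamer registry inside the container. This is a guarded sketch (it degrades to a message when the GStreamer tools are not installed):

```shell
# Check whether GStreamer can see the nvv4l2decoder element.
# Run this inside the container; a successful gst-inspect means the plugin loaded.
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    gst-inspect-1.0 nvv4l2decoder || echo "nvv4l2decoder not found in the plugin registry"
else
    echo "gst-inspect-1.0 is not installed in this environment"
fi
```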
Hi,
For JetPack 4 (r32), the container mounts the libraries from the Jetson host.
So you will need to install all the components on the host first.
Thanks.
Hi @AastaLLL, thanks for the fast reply. But I'm not using R32; I'm on R35, JetPack 5.
Basically, with R35 I thought all components were packaged in the JetPack Docker image. You can see that in the first image, which shows the jtop report for my system.
Here is the result when I start a GStreamer pipeline to read a local video: it reports that there is no nvv4l2decoder.
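For context, the failing pipeline was of this shape (the file path is a placeholder, and the demuxer/parser stages assume an H.264 MP4; adjust for your media):

```shell
# Decode a local H.264 MP4 with the Jetson hardware decoder, discarding output.
# Guarded so it only attempts the pipeline where gst-launch-1.0 exists.
if command -v gst-launch-1.0 >/dev/null 2>&1; then
    gst-launch-1.0 filesrc location=/path/to/video.mp4 \
        ! qtdemux ! h264parse ! nvv4l2decoder ! fakesink \
        || echo "pipeline failed (expected off-Jetson or without the plugin)"
else
    echo "gst-launch-1.0 is not installed in this environment"
fi
```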
@AastaLLL, I can still find the nvv4l2decoder lib in the container, and in the native environment.
My
/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv
looks like this, so I think the library is mounted into the Docker environment. But somehow it still reports that there is no nvv4l2decoder.
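For reference, the l4t.csv entries are simple `type, path` pairs telling the NVIDIA container runtime which host files to mount into the container. On Jetson, the nvv4l2decoder element is provided by libgstnvvideo4linux2.so, so that library must appear in the CSV. A runnable sketch with example paths (not copied from a real device; your paths may differ by release):

```shell
# Illustrative l4t.csv fragment; the paths are examples only.
cat > /tmp/l4t_sample.csv <<'EOF'
lib, /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so
lib, /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideo4linux2.so
dir, /usr/lib/aarch64-linux-gnu/tegra
EOF

# nvv4l2decoder lives in libgstnvvideo4linux2.so; confirm the CSV would mount it.
grep -q 'libgstnvvideo4linux2' /tmp/l4t_sample.csv && echo "decoder plugin is listed"
```

If your real l4t.csv lacks that line, the decoder plugin is never mounted into the container, which would explain the missing element.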
@AastaLLL any thoughts on this ?
Hi,
Sorry for the confusion; my earlier answer was for the Nano board, which can only use JetPack 4.
On JetPack 5, you can run the container without installing the components on the host.
But please launch it with --runtime nvidia
to allow the container access to the required system libraries.
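A minimal launch sketch (the image tag r35.1.0 is an example; pick the l4t-base tag matching your L4T release). It is guarded on /etc/nv_tegra_release so it only attempts the run on an actual Jetson host:

```shell
# Launch the L4T base image with the NVIDIA runtime (meaningful on a Jetson host only).
# --runtime nvidia mounts the host's CUDA/GStreamer libraries into the container.
if [ -e /etc/nv_tegra_release ]; then
    docker run -it --rm \
        --runtime nvidia \
        --network host \
        nvcr.io/nvidia/l4t-base:r35.1.0 \
        /bin/bash
else
    echo "not a Jetson host; skipping container launch"
fi
```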
Have you tried the DeepStream container to see if nvv4l2decoder works there?
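A one-shot sanity check would run gst-inspect inside the DeepStream container (the image tag is an example; use the deepstream-l4t tag matching your JetPack release). Again guarded so it only attempts the run on a Jetson host:

```shell
# Confirm the hardware decoder element loads inside the DeepStream container.
# The 6.1.1-samples tag is an example; match it to your JetPack/L4T version.
if [ -e /etc/nv_tegra_release ]; then
    docker run --rm --runtime nvidia \
        nvcr.io/nvidia/deepstream-l4t:6.1.1-samples \
        gst-inspect-1.0 nvv4l2decoder
else
    echo "not a Jetson host; skipping"
fi
```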
Thanks.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.