Updating TensorRT using an image

We use a DeepStream-l4t image to run our software in a container on a Jetson Nano. From the documentation: “CUDA, TensorRT and VisionWorks are ready to use within the Deepstream container as they are made available from the host by the NVIDIA Container Runtime”. So these libraries are not actually inside the container. However, when we ship our devices (Nano) with our software to clients, the clients will not be able to update the software on the device’s host; they will only update our images. Since the images do not contain CUDA and TensorRT, those components will never get updated. Am I right? Keeping TensorRT and CUDA up to date is very important for us. Should I build an image from scratch that includes these libraries, or is there another way?

Hi,

The l4t Docker containers mount these libraries from the host.
So to upgrade the libraries seen inside the container, you need to update the libraries on the Nano itself.
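To illustrate how this mounting works: on JetPack 4.x-era systems the NVIDIA Container Runtime decides what to expose to l4t containers from CSV files on the host, so the TensorRT libraries visible inside the container are bind-mounts of the host’s files, not copies baked into the image. A rough sketch (the CSV location is typical for JetPack 4.x, and the image tag is only an example; both may differ on your setup):

```
# On the Jetson Nano host: CSV files listing which host files the
# NVIDIA Container Runtime mounts into l4t containers
# (typical JetPack 4.x location; file names such as l4t.csv can vary).
ls /etc/nvidia-container-runtime/host-files-for-container.d/
grep -i nvinfer /etc/nvidia-container-runtime/host-files-for-container.d/*.csv | head

# Inside a container started with the NVIDIA runtime, the TensorRT
# libraries are therefore whatever version the host provides
# (image tag below is just an example):
docker run --rm --runtime nvidia nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples \
  ls -l /usr/lib/aarch64-linux-gnu/libnvinfer.so*
```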

Maybe our OTA update tool can help:
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fquick_start.html%23wwpID0E0LB0HA
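As a rough sketch of what that host-side update looks like: on JetPack 4.4 and later the Nano is already configured with NVIDIA’s Jetson apt repository, so L4T, CUDA, and TensorRT can be upgraded over the air with apt within the same JetPack release line. The file path and package names below are typical examples and depend on the JetPack version installed:

```
# On the Jetson Nano host (not inside the container).
# JetPack 4.4+ ships an apt source for NVIDIA's Jetson repository,
# typically /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
cat /etc/apt/sources.list.d/nvidia-l4t-apt-source.list

# Pull the latest packages published for this JetPack line and upgrade;
# containers then pick up the new libraries automatically, because
# they are mounted from the host.
sudo apt update
sudo apt dist-upgrade

# Check what the host now provides (package names are examples and
# differ between JetPack releases):
dpkg -l | grep -E 'nvinfer|tensorrt|cuda-toolkit'
```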

Thanks.