Very excited about the container runtime as it seems to solve some pretty tricky problems relating to image portability.
That being said, is it possible to install the container runtime on devices running L4T 28.2? I have a number of devices in production remotely that I can’t easily flash with the new SDKManager.
For some background, I have a container image built loosely following this Dockerfile: https://github.com/open-horizon/cogwerx-jetson-tx2/blob/master/tensorrt/Dockerfile.tensorrt3.0-CUDA9. To summarize, the image bakes in the libraries (CUDA, cuDNN, TensorRT) that shipped with L4T 28.2, which makes it non-portable. I learned this recently when I tried running it on a freshly flashed TX2 with L4T 32.2; that failure led me to the NVIDIA Container Runtime (NCR), which in turn led me here. If I can install NCR on the devices running 28.2, I should be able to build a single, portable image that relies on NCR and runs on either L4T version.
I’m looking for input on whether this is possible, and ideas for how to do it. My first thought was to use the NCR .deb files downloaded by SDK Manager, but I suspect that at minimum the CSV files that end up in /etc/nvidia-container-runtime/host-files-for-container.d/ will need tweaking for 28.2, since they list library paths as laid out on 32.2.
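In case it helps frame the CSV-tweaking idea: each line in those mount-spec CSVs is a `type, path` pair (e.g. `lib, /usr/lib/aarch64-linux-gnu/tegra/libcuda.so`) telling the runtime what to bind-mount from the host into the container. A rough sketch of a checker I had in mind, which walks the CSVs and reports entries whose paths don't exist on the host (so on a 28.2 device it would show exactly which entries need adjusting). This is just an illustrative helper, not anything shipped with NCR; the default CSV_DIR matches the 32.2 install location.

```shell
#!/bin/sh
# Hypothetical helper: report mount-spec CSV entries whose host paths are
# missing. On an L4T 28.2 host these are the entries that need tweaking.

check_csv_dir() {
  dir="$1"
  for csv in "$dir"/*.csv; do
    [ -e "$csv" ] || continue          # skip if the glob matched nothing
    # Each CSV line looks like: "lib, /usr/lib/aarch64-linux-gnu/tegra/libcuda.so"
    while IFS=, read -r type path; do
      path="${path# }"                 # trim the space after the comma
      [ -n "$path" ] || continue
      [ -e "$path" ] || echo "missing on host: $path ($csv)"
    done < "$csv"
  done
}

# Default to the directory the NCR debs populate on L4T 32.2.
check_csv_dir "${CSV_DIR:-/etc/nvidia-container-runtime/host-files-for-container.d}"
```

Running this on a freshly flashed 32.2 board should print nothing, while on 28.2 (with the 32.2 debs force-installed) it would enumerate the stale entries.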
Any input is greatly appreciated.