How to install TensorRT on top of an L4T base image

I have an ONNX file that I can run on my x86_64 laptop without problems.
I need to deploy this model on an NVIDIA Jetson TX2, but I only have 2.7 GB of storage on that device, so pulling the full TensorRT L4T container image is not possible (it is too big).

Is there a tutorial on how to create a (super) lightweight TensorRT installation on top of the L4T base image with a multi-stage build? Or is it possible to copy cross-compiled aarch64 TensorRT binaries into a container and use that container for inference on the TX2?
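
Something like the sketch below is what I have in mind. The image tags are just placeholders, and I am not sure the list of copied libraries is complete:

# hypothetical multi-stage sketch; image tags are placeholders
FROM nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime AS donor
FROM nvcr.io/nvidia/l4t-base:r32.7.1
# copy only the TensorRT runtime libraries out of the donor stage
COPY --from=donor /usr/lib/aarch64-linux-gnu/libnvinfer*.so* /usr/lib/aarch64-linux-gnu/
COPY --from=donor /usr/lib/aarch64-linux-gnu/libnvonnxparser*.so* /usr/lib/aarch64-linux-gnu/
# presumably the cuDNN and CUDA runtime libraries would have to be copied as well
RUN ldconfig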

Thanks in advance!

Hi,

Please note that TensorRT depends on the cuDNN and CUDA libraries.
You will need to install them as well.

You can try to install it with the OTA commands below.
This installs only the required libraries rather than all of the JetPack components.

$ sudo apt update
$ sudo apt install nvidia-tensorrt
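
If you need this inside a container rather than on the host, a minimal sketch on top of l4t-base could look like the following. The image tag and the OTA release strings (r32.7 / t186 for the TX2) are assumptions and must match the JetPack version on your device:

FROM nvcr.io/nvidia/l4t-base:r32.7.1
# add the Jetson OTA apt repository (release string must match the L4T version on the device)
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates gnupg wget && \
    wget -qO - https://repo.download.nvidia.com/jetson/jetson-ota-public.asc | apt-key add - && \
    echo "deb https://repo.download.nvidia.com/jetson/common r32.7 main" > /etc/apt/sources.list.d/nvidia-l4t.list && \
    echo "deb https://repo.download.nvidia.com/jetson/t186 r32.7 main" >> /etc/apt/sources.list.d/nvidia-l4t.list
# install TensorRT plus its cuDNN/CUDA dependencies, but not the rest of JetPack
RUN apt-get update && apt-get install -y --no-install-recommends nvidia-tensorrt && \
    rm -rf /var/lib/apt/lists/*

You can check the result afterwards with dpkg -l | grep nvinfer.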

Thanks.
