TensorRT not found in Docker Container after Installation of L4t-TensorRT

As the title says, I am trying to install l4t-tensorrt as follows in my Dockerfile:

FROM nvcr.io/nvidia/l4t-base:35.4.1
FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel
FROM nvcr.io/nvidia/l4t-cuda:11.4.19-devel

This is what my environment paths look like:
ENV CUDA_HOME="/usr/local/cuda-11.4"
ENV PATH="/usr/local/cuda-11.4/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:${LD_LIBRARY_PATH}"
ENV CPATH="/usr/local/cuda-11.4/include:${CPATH}"

However, when I run python3 inside the container and import tensorrt, I get ModuleNotFoundError.
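For reference, a quick way to check whether TensorRT actually made it into a running container (the container name is a placeholder, and this assumes TensorRT was installed through the usual nvinfer Debian packages that L4T uses):

$ docker exec -it <container-name> dpkg -l | grep nvinfer
$ docker exec -it <container-name> python3 -c "import tensorrt; print(tensorrt.__version__)"

If the first command prints nothing, the image simply doesn't contain TensorRT.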

I tried following this forum answer: Unable to use TensorRT inside the L4T-Tensorflow container

But I am still running into issues. Outside of the Docker container, TensorRT works fine.

Hi,

Python TensorRT is preinstalled in the nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel image.
Please try it again.

$ sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel

==========
== CUDA ==
==========

CUDA Version 11.4.19

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

root@tegra-ubuntu:/# python3
Python 3.8.10 (default, May 26 2023, 14:05:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt as trt
>>> trt.__version__
'8.5.2.2'
>>>
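The same check can also be done non-interactively in one line, using the same image and runtime flags:

$ sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel python3 -c "import tensorrt; print(tensorrt.__version__)"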

Thanks

Here is my Dockerfile (attached below), which I build using the command docker-compose build.

This is what the docker-compose.yml file consists of:
inferserver:
  build: ./inferserver
  ports:
    - "17000:7000"
  environment:
    - PWD=${PWD}
  volumes:
    - /home/nvidia/inferserver:/my
  runtime: nvidia
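(Side note: runtime: nvidia only takes effect if the NVIDIA container runtime is registered with Docker. On JetPack it normally is already, via an /etc/docker/daemon.json along these lines:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Adding "default-runtime": "nvidia" there as well makes the runtime available during docker-compose build, not just at run time.)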

Dockerfile.txt (3.7 KB)

I then call docker-compose up to start the container.

When I open a bash shell in the container, I run exactly the same commands mentioned above:
nvidia@ubuntu:~/$ docker exec -it 0b3 bash
root@0b30e81db5dd:/my/code# python3
Python 3.8.10 (default, Nov 22 2023, 10:22:35)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import tensorrt as trt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorrt'

When I try your command, it works as intended. I'm just confused why it would not work the way I'm doing it, if I'm using the commands properly.

Hi,

Could you change the following

FROM nvcr.io/nvidia/l4t-base:35.4.1
FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel
FROM nvcr.io/nvidia/l4t-cuda:11.4.19-devel

Into

FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel

And try it again?
The image might be built on top of l4t-cuda, which doesn't include the TensorRT library. When a Dockerfile has multiple FROM lines, only the last one determines the base of the final image (the earlier ones just open unused build stages), so your final image is based on l4t-cuda:11.4.19-devel.
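As a minimal sketch, the top of the Dockerfile could then look like this (the ENV lines are likely optional, since the devel image already ships CUDA 11.4 under /usr/local/cuda-11.4, but they are kept here to match your setup):

FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel

ENV CUDA_HOME="/usr/local/cuda-11.4"
ENV PATH="/usr/local/cuda-11.4/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda-11.4/lib64:${LD_LIBRARY_PATH}"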
Thanks.

Running into storage issues now unfortunately lol.

When I use nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel by itself as the image, it builds successfully.

But when I try to build it as part of a Dockerfile, I end up running out of storage.
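For anyone hitting the same storage problem: Docker's prune commands usually free up a lot of space, but note they are destructive (they delete stopped containers, unused images, and build cache):

$ docker system prune
$ docker builder prune
$ docker image prune -a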

As a follow-up, why is it that:

nvidia@ubuntu:~$ docker images
REPOSITORY                    TAG              IMAGE ID       CREATED        SIZE
nvcr.io/nvidia/l4t-tensorrt   r8.5.2.2-devel   7eb260df495e   8 months ago   9.56GB

It is shown as 9.56GB here, but listed as 4.7GB on the NGC page (NVIDIA L4T TensorRT | NVIDIA NGC). Why is that?

Hi,

We have confirmed that the image is really 9.x GiB.

$ sudo docker image ls
REPOSITORY                         TAG                    IMAGE ID       CREATED         SIZE
...
nvcr.io/nvidia/l4t-tensorrt        r8.5.2.2-devel         7eb260df495e   8 months ago    9.56GB
nvcr.io/nvidia/l4t-cuda            11.4.19-devel          cff899738cad   11 months ago   4.85GB
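A likely explanation for the difference, assuming nothing unusual about this particular image: the NGC page lists the compressed download size, while docker images reports the uncompressed size on disk. The uncompressed size in bytes can be read directly with:

$ docker image inspect nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel --format '{{.Size}}'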

Thanks.
