TensorRT Docker

Hi! I’m trying to use the TensorRT C++ API in Docker on a Jetson Nano. This is my Dockerfile:

FROM nvcr.io/nvidia/l4t-tensorrt:r8.2.1-runtime

ENV DEBIAN_FRONTEND=noninteractive

WORKDIR /app

COPY . .

RUN apt-get update && apt-get install -y \
    cmake \
    gcc clang \
    clang-tools \
    libopencv-dev \
    libmicrohttpd-dev

RUN mkdir build && cd build && cmake .. && make

And I get this error when I build my app:

In file included from /app/main.cpp:4:
/app/src/TRTInference.h:8:10: fatal error: 'NvInfer.h' file not found
#include "NvInfer.h"

For newer TensorRT versions, there is a development variant of the Docker container (e.g. r8.4.1.5-devel) that ships with the headers, whereas the runtime images don’t include them. Maybe you’ll have more luck starting with the l4t-ml container?
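Swapping the base image would look something like this (assuming a devel tag is actually published for your L4T release; on JetPack 4.x for the Nano it may not be):

FROM nvcr.io/nvidia/l4t-tensorrt:r8.4.1.5-devel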

Hi @vovinsa, on JetPack 4.x you can set your default Docker runtime to “nvidia”. Then, when you build your Dockerfile, the TensorRT development headers will be mounted into the build from your host device (assuming you have TensorRT installed on your Nano):

https://github.com/dusty-nv/jetson-containers#docker-default-runtime
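Concretely, the linked instructions boil down to adding "default-runtime": "nvidia" to /etc/docker/daemon.json and restarting the Docker daemon. A sketch of the resulting file (the "runtimes" entry should already exist if the NVIDIA container runtime is installed):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

After editing the file, run sudo systemctl restart docker (or reboot) before rebuilding your image.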

Then, when you run your container, start it with --runtime nvidia.
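For example (my-trt-app is just a placeholder for whatever you tag your image as):

sudo docker build -t my-trt-app .
sudo docker run -it --runtime nvidia my-trt-app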
