l4t-tensorrt:r8.0.1-runtime: fatal error: NvInfer.h: No such file or directory

Hi, I’m using this docker image: nvcr.io/nvidia/l4t-tensorrt:r8.0.1-runtime
I’m trying to build the TensorRT samples inside the container, but I ran into this problem:

1. Run the container
2. docker cp /usr/src/tensorrt/samples from the host into the container
3. cd samples and run make
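For reference, the reproduction steps above look roughly like this (the container name "trt" is just a placeholder; paths match my setup):

```shell
# Start the runtime container (named "trt" so docker cp can target it)
docker run -it --runtime nvidia --name trt \
    nvcr.io/nvidia/l4t-tensorrt:r8.0.1-runtime

# From another host shell: copy the samples from the host's JetPack install
docker cp /usr/src/tensorrt/samples trt:/tensorrt/samples

# Back inside the container: build
cd /tensorrt/samples && make
```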

This is the result

root@up2-desktop:/tensorrt/samples# make
make[1]: Entering directory '/tensorrt/samples/sampleAlgorithmSelector'
../Makefile.config:11: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
../Makefile.config:16: CUDNN_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
../Makefile.config:29: TRT_LIB_DIR is not specified, searching ../../lib, ../../lib, ../lib by default, use TRT_LIB_DIR=<trt_lib_directory> to change.
if [ ! -d ../../bin/dchobj/sampleAlgorithmSelector/sampleAlgorithmSelector ]; then mkdir -p ../../bin/dchobj/sampleAlgorithmSelector/sampleAlgorithmSelector; fi
if [ ! -d ../../bin/chobj/sampleAlgorithmSelector/sampleAlgorithmSelector/../common ]; then mkdir -p ../../bin/dchobj/sampleAlgorithmSelector/sampleAlgorithmSelector/../common; fi; :
g++ -MM -MF ../../bin/dchobj/sampleAlgorithmSelector/sampleAlgorithmSelector/sampleAlgorithmSelector.d -MP -MT ../../bin/dchobj/sampleAlgorithmSelector/sampleAlgorithmSelector/sampleAlgorithmSelector.o -Wall -Wno-deprecated-declarations -std=c++14  -I"../common" -I"/usr/local/cuda/include" -I"/usr/local/cuda/include" -I"../include" -I"../../include" -I"../../parsers/onnxOpenSource" -D_REENTRANT sampleAlgorithmSelector.cpp
In file included from sampleAlgorithmSelector.cpp:28:0:
../common/buffers.h:19:10: fatal error: NvInfer.h: No such file or directory
 #include "NvInfer.h"
          ^~~~~~~~~~~
compilation terminated.
../Makefile.config:338: recipe for target '../../bin/dchobj/sampleAlgorithmSelector/sampleAlgorithmSelector/sampleAlgorithmSelector.o' failed
make[1]: *** [../../bin/dchobj/sampleAlgorithmSelector/sampleAlgorithmSelector/sampleAlgorithmSelector.o] Error 1
make[1]: Leaving directory '/tensorrt/samples/sampleAlgorithmSelector'
Makefile:75: recipe for target 'all' failed
make: *** [all] Error 2

I read this page: libnvidia-container/mount_plugins.md at jetson · NVIDIA/libnvidia-container · GitHub
So I checked tensorrt.csv, and NvInfer.h is indeed listed in the CSV.
My question is: why doesn’t the container mount this header?
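A quick way to check from inside the container whether the mount plugin actually populated the header (a simple sketch, nothing image-specific):

```shell
# Report whether the TensorRT dev header is visible inside the container
if [ -f /usr/include/aarch64-linux-gnu/NvInfer.h ]; then
    echo "NvInfer.h: mounted"
else
    echo "NvInfer.h: missing"
fi
```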

# in the host
~ ls  /etc/nvidia-container-runtime/host-files-for-container.d/       
cuda.csv  cudnn.csv  l4t.csv  tensorrt.csv  visionworks.csv
➜  ~ cat  /etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv
lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvparsers.so.8.0.1
lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8.0.1
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so
sym, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
sym, /usr/lib/aarch64-linux-gnu/libnvparsers.so
sym, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so
lib, /usr/include/aarch64-linux-gnu/NvInfer.h   <-------this is it
lib, /usr/include/aarch64-linux-gnu/NvInferRuntime.h
lib, /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h
lib, /usr/include/aarch64-linux-gnu/NvInferVersion.h
lib, /usr/include/aarch64-linux-gnu/NvInferImpl.h
lib, /usr/include/aarch64-linux-gnu/NvInferLegacyDims.h
lib, /usr/include/aarch64-linux-gnu/NvUtils.h
lib, /usr/include/aarch64-linux-gnu/NvInferPlugin.h
lib, /usr/include/aarch64-linux-gnu/NvInferPluginUtils.h
lib, /usr/include/aarch64-linux-gnu/NvCaffeParser.h
lib, /usr/include/aarch64-linux-gnu/NvUffParser.h
lib, /usr/include/aarch64-linux-gnu/NvOnnxConfig.h
lib, /usr/include/aarch64-linux-gnu/NvOnnxParser.h
dir, /usr/lib/python3.6/dist-packages/tensorrt
dir, /usr/lib/python3.6/dist-packages/graphsurgeon
dir, /usr/lib/python3.6/dist-packages/uff
dir, /usr/lib/python3.6/dist-packages/onnx_graphsurgeon
dir, /usr/src/tensorrt

Also, I checked inside the container, and there are no Nv*.h headers in that folder:

root@up2-desktop:/tensorrt/samples# ls /usr/include/aarch64-linux-gnu/
a.out.h  asm  bits  c++  fpu_control.h  gnu  ieee754.h  sys

How can I get these libraries and headers mounted correctly in the container?
Many thanks

I also tried to build the cuDNN sample /usr/src/cudnn_samples_v8/mnistCUDNN/
and got this:

root@up2-desktop:/cudnn_samples_v8/mnistCUDNN# make 
/bin/sh: 1: file: not found
CUDA_VERSION is 10020
Linking agains cublasLt = true
CUDA VERSION: 10020
TARGET ARCH: aarch64
HOST_ARCH: aarch64
TARGET OS: linux
SMS: 35 50 53 60 61 62 70 72 75 
test.c:1:10: fatal error: FreeImage.h: No such file or directory
 #include "FreeImage.h"
          ^~~~~~~~~~~~~
compilation terminated.
>>> WARNING - FreeImage is not set up correctly. Please ensure FreeImage is set up correctly. <<<
make: Nothing to be done for 'all'.
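The FreeImage failure looks independent of the header-mount issue: the mnistCUDNN Makefile probes for FreeImage.h (and invokes the file utility), neither of which ships in the runtime image. Assuming apt access inside the container, installing these packages should satisfy that check (a sketch, not verified on this image):

```shell
# Install the tools the mnistCUDNN Makefile probes for
apt-get update
apt-get install -y libfreeimage-dev file
```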

Hi,

Thanks for reporting this.

The missing-header issue can also be reproduced in our environment.
We are checking this with our internal team. Will share more information with you later.

Thanks and sorry for all the inconvenience.


Thanks

Hi,

We discussed this issue internally.

To keep the image size down, we currently provide only a runtime TensorRT container.
This means it is intended for deploying prebuilt TensorRT binaries rather than for compiling them.

Thanks.
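Given that the runtime image ships without the dev headers, one possible workaround for building the samples is to bind-mount the headers from the host’s JetPack install when starting the container (a sketch, assuming the host has the TensorRT dev files installed; not an official recommendation):

```shell
# Mount the host's TensorRT headers read-only into the runtime container so
# the samples can find NvInfer.h; copy the samples in as before to build them.
docker run -it --runtime nvidia \
    -v /usr/include/aarch64-linux-gnu:/usr/include/aarch64-linux-gnu:ro \
    nvcr.io/nvidia/l4t-tensorrt:r8.0.1-runtime
```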