Missing Header in Docker Images deepstream-l4t:6.0-* and 6.0.1-*

• Jetson Nano
• DeepStream 6.0
• JetPack Version 4.6
• Missing header
• Write a Dockerfile based on deepstream-l4t:6.0 and build the lib nvdsinfer_custom_impl_Yolo
• nvdsinfer_custom_impl_Yolo and Docker image DeepStream 6.0

Hello,

I found an issue when building a Docker image with the 6.0-* tags (I don’t have the issue with the DeepStream 7.0 image, nor with DeepStream 6.0 installed locally on my Jetson Nano).

During the build, I get a missing-header error while compiling nvdsinfer_custom_impl_Yolo (GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models).

Here is the Dockerfile:

ARG DEEPSTREAM_TAG=6.0-samples

FROM --platform=linux/arm64/v8 nvcr.io/nvidia/deepstream-l4t:${DEEPSTREAM_TAG}

# Install dependencies and clean up in a single RUN command to reduce image layers
RUN apt-get update && apt-get install -y --no-install-recommends \
    software-properties-common \
    && add-apt-repository ppa:ubuntu-toolchain-r/test \
    && apt-get install -y --no-install-recommends \
    cmake \
    build-essential \
    g++-11 \
    libssl-dev \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-base \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    libgstreamer1.0-dev \
    libgstreamer-plugins-base1.0-dev \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

COPY ./vision/ .


ARG CUDA_VER=10.2

ENV CUDA_VER=${CUDA_VER}


# Build custom libs
RUN make -C  ./models/custom_lib/nvdsinfer_custom_impl_Yolo clean

RUN make -C  ./models/custom_lib/nvdsinfer_custom_impl_Yolo

Here is the error output:

 > [vision 12/14] RUN make -C  ./models/custom_lib/nvdsinfer_custom_impl_Yolo:
0.294 make: Entering directory '/app/models/custom_lib/nvdsinfer_custom_impl_Yolo'
0.294 g++ -c  -o utils.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-10.2/include utils.cpp
0.552 In file included from utils.cpp:26:0:
0.552 utils.h:36:10: fatal error: NvInfer.h: No such file or directory
0.552  #include "NvInfer.h"
0.552           ^~~~~~~~~~~
0.552 compilation terminated.
0.559 Makefile:81: recipe for target 'utils.o' failed
0.560 make: *** [utils.o] Error 1
0.560 make: Leaving directory '/app/models/custom_lib/nvdsinfer_custom_impl_Yolo'
------
failed to solve: process "/bin/sh -c make -C  ./models/custom_lib/nvdsinfer_custom_impl_Yolo" did not complete successfully: exit code: 2

For additional information, here is the Makefile of nvdsinfer_custom_impl_Yolo:

CUDA_VER?=
ifeq ($(CUDA_VER),)
	$(error "CUDA_VER is not set")
endif

OPENCV?=
ifeq ($(OPENCV),)
	OPENCV=0
endif

GRAPH?=
ifeq ($(GRAPH),)
	GRAPH=0
endif

CC:= g++
NVCC:=/usr/local/cuda-$(CUDA_VER)/bin/nvcc

CFLAGS:= -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations
CFLAGS+= -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-$(CUDA_VER)/include

ifeq ($(OPENCV), 1)
	COMMON+= -DOPENCV
	CFLAGS+= $(shell pkg-config --cflags opencv4 2> /dev/null || pkg-config --cflags opencv)
	LIBS+= $(shell pkg-config --libs opencv4 2> /dev/null || pkg-config --libs opencv)
endif

ifeq ($(GRAPH), 1)
	COMMON+= -DGRAPH
endif

CUFLAGS:= -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-$(CUDA_VER)/include

LIBS+= -lnvinfer_plugin -lnvinfer -lnvparsers -lnvonnxparser -L/usr/local/cuda-$(CUDA_VER)/lib64 -lcudart -lcublas -lstdc++fs
LFLAGS:= -shared -Wl,--start-group $(LIBS) -Wl,--end-group

INCS:= $(wildcard *.h)

SRCFILES:= $(filter-out calibrator.cpp, $(wildcard *.cpp))

ifeq ($(OPENCV), 1)
	SRCFILES+= calibrator.cpp
endif

SRCFILES+= $(wildcard layers/*.cpp)
SRCFILES+= $(wildcard *.cu)

TARGET_LIB:= libnvdsinfer_custom_impl_Yolo.so

TARGET_OBJS:= $(SRCFILES:.cpp=.o)
TARGET_OBJS:= $(TARGET_OBJS:.cu=.o)

all: $(TARGET_LIB)

%.o: %.cpp $(INCS) Makefile
	$(CC) -c $(COMMON) -o $@ $(CFLAGS) $<

%.o: %.cu $(INCS) Makefile
	$(NVCC) -c -o $@ --compiler-options '-fPIC' $(CUFLAGS) $<

$(TARGET_LIB) : $(TARGET_OBJS)
	$(CC) -o $@  $(TARGET_OBJS) $(LFLAGS)

clean:
	rm -rf $(TARGET_LIB)
	rm -rf $(TARGET_OBJS)
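
For reference, this Makefile aborts unless CUDA_VER is set (the ENV instruction in the Dockerfile above provides it during the image build); a manual invocation looks like this:

make -C ./models/custom_lib/nvdsinfer_custom_impl_Yolo CUDA_VER=10.2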

Thanks for any help!

This image is for deployment or demonstration purposes only, so it does not contain header files or toolkit components such as nvcc.

If you want to use it for development, please use nvcr.io/nvidia/deepstream-l4t:6.0-triton.
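
With the Dockerfile you shared, switching images is just a build-argument override, e.g.:

docker build --build-arg DEEPSTREAM_TAG=6.0-triton .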

Here is my output using 6.0.1-triton:

root@0ea75f83b893:/app/models/custom_lib/nvdsinfer_custom_impl_Yolo# make clean
rm -rf libnvdsinfer_custom_impl_Yolo.so
rm -rf utils.o yolo.o nvdsinfer_yolo_engine.o nvdsinitinputlayers_Yolo.o nvdsparsebbox_Yolo.o yoloPlugins.o layers/convolutional_layer.o layers/route_layer.o layers/shortcut_layer.o layers/slice_layer.o layers/pooling_layer.o layers/reorg_layer.o layers/batchnorm_layer.o layers/activation_layer.o layers/channels_layer.o layers/sam_layer.o layers/deconvolutional_layer.o layers/upsample_layer.o layers/implicit_layer.o yoloForward_v2.o yoloForward.o yoloForward_nc.o nvdsparsebbox_Yolo_cuda.o
root@0ea75f83b893:/app/models/custom_lib/nvdsinfer_custom_impl_Yolo# make
g++ -c  -o utils.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-10.2/include utils.cpp
In file included from utils.cpp:26:0:
utils.h:36:10: fatal error: NvInfer.h: No such file or directory
 #include "NvInfer.h"
          ^~~~~~~~~~~
compilation terminated.
Makefile:81: recipe for target 'utils.o' failed
make: *** [utils.o] Error 1
root@0ea75f83b893:/app/models/custom_lib/nvdsinfer_custom_impl_Yolo# find / -name "NvInfer.h"
root@0ea75f83b893:/app/models/custom_lib/nvdsinfer_custom_impl_Yolo# 

NvInfer.h does not exist

How do you start Docker? Can you share the command line?
Try the following command line.

docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:6.0-triton

Some libraries are shared between Docker and the host on the Jetson platform. NvInfer.h is a TensorRT header, so make sure TensorRT is installed properly on the host.
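
A quick way to check this on the host (a sketch; package names and paths assume a stock JetPack 4.6 install):

# List the installed TensorRT packages
dpkg -l | grep -i nvinfer
# The header the build is looking for should be present on the host
ls -l /usr/include/aarch64-linux-gnu/NvInfer.h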

TensorRT is correctly installed on the host, and the library builds fine there. Here is my Docker start configuration (a docker-compose service):

 vision:
    build:
      context: .
      dockerfile: ./vision/Dockerfile
      args:
          CUDA_VER: 10.2    
          DEEPSTREAM_TAG: 6.0-triton
    container_name: vision
    image: bedrock/vision
    runtime: nvidia
    network_mode: host
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - ./data:/data
      - ./vision/cfg:/app/cfg
    devices:
      - /dev/video0
    environment:
      - DISPLAY=${DISPLAY}
      - TZ=Europe/Berlin
      - CUDA_VER=10.2
    working_dir: /app
    command: ./build/vision ./cfg/pipeline_cfg/pipeline_main_jpeg_fps.yaml
    restart: always

Of course, I don’t use the startup command when debugging.

I have the feeling that the TensorRT headers are not installed in 6.0-triton; the header is not at /usr/include/aarch64-linux-gnu/NvInfer.h as it is on the host.
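
For illustration, the same path behaves differently on the two sides (assuming the container from the compose file above is running):

# On the Jetson host: the header is present
ls /usr/include/aarch64-linux-gnu/NvInfer.h
# Inside the container: no such file, matching the find output above
docker exec vision ls /usr/include/aarch64-linux-gnu/NvInfer.h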

I would like to build without hosting the build on a local Nano (via GitHub workflows, for example), as I already do with deepstream-l4t:7.0-*.

How can I install TensorRT in the Docker image? (I don’t really understand why an image meant for development doesn’t ship TensorRT.)

If you use the command above to test, does it work? If it does, please compare the difference with your configuration file.

Do you mean DS 6.0? The Jetson Nano can only support DS 6.0 at most.

This is unnecessary.

As I mentioned above, these libraries and headers on Jetson are shared between the host and docker.

Considering the size of docker images, these libraries are not installed in the docker image.

No, I meant DS 7.0: my build works on Jetson Orin and x86 with DeepStream 7.0 without any problem. I’m working on both Jetson Orin and Nano.

It does share resources, but only at runtime; I want to build the library during the Docker build. Additionally, the path you mounted is not the path where TensorRT libraries such as nvinfer are located.

Your command doesn’t make any difference; that’s expected, because the missing libraries and headers come from /usr/lib/aarch64-linux-gnu and /usr/include/aarch64-linux-gnu/ on the host.

Sorry for the long delay; it was due to a holiday.

TensorRT is only mapped to the container at runtime.
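
On JetPack, this mapping is driven by the CSV mount lists read by the NVIDIA container runtime; a sketch of how to inspect them on the host (the path assumes JetPack 4.x):

# Each CSV lists host files that get bind-mounted into containers started
# with --runtime nvidia; this happens only at run time, not during docker build
ls /etc/nvidia-container-runtime/host-files-for-container.d/
grep -il nvinfer /etc/nvidia-container-runtime/host-files-for-container.d/*.csv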

If you want NvInfer.h to be available when building the image, you can add a COPY instruction to the Dockerfile to copy NvInfer.h from the host into the image.
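
A sketch of that approach (untested; it assumes you first stage the TensorRT headers and libraries from the Jetson host into a tensorrt/ directory inside the build context, since COPY cannot reach outside the context):

# On the host, before building:
#   mkdir -p tensorrt/include tensorrt/lib
#   cp /usr/include/aarch64-linux-gnu/NvInfer*.h tensorrt/include/
#   cp -a /usr/lib/aarch64-linux-gnu/libnvinfer* tensorrt/lib/

# In the Dockerfile, before the RUN make steps:
COPY ./tensorrt/include/ /usr/include/aarch64-linux-gnu/
COPY ./tensorrt/lib/ /usr/lib/aarch64-linux-gnu/

Note that the Makefile also links -lnvparsers and -lnvonnxparser, so those libraries would need to be staged the same way, and CUDA may need the same treatment if /usr/local/cuda-10.2 is likewise mounted from the host at run time.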

The only solution I found without using a self-hosted runner was to push the *.so prebuilt on the Nano, so the image build can use it directly.
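
In practice that looks like this (a sketch; the library name comes from TARGET_LIB in the Makefile above):

# Build once on the Nano, where the host shares the TensorRT headers:
make -C ./models/custom_lib/nvdsinfer_custom_impl_Yolo CUDA_VER=10.2
# Commit the resulting libnvdsinfer_custom_impl_Yolo.so, then replace the
# two RUN make steps in the Dockerfile with a COPY of the prebuilt library:
COPY ./models/custom_lib/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so /app/models/custom_lib/nvdsinfer_custom_impl_Yolo/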