Hi, I am building a Docker image for Jetson with nvcr.io/nvidia/l4t-base:r32.2.1 as the base image.
I am not able to use the tensorrt package inside it because of a libnvdla_compiler.so error. I have written the following Dockerfile:
Dockerfile
FROM nvcr.io/nvidia/l4t-base:r32.2.1
# Combine update and install in one layer so a cached apt index is not reused
# Note: aziot-edge and defender-iot-micro-agent-edge come from the Microsoft
# package repository, which must be added before this step
RUN apt-get update && apt-get install -y git python3-pip cmake protobuf-compiler libprotoc-dev libopenblas-dev libopenblas-base gfortran libjpeg8-dev libjpeg-dev libxslt1-dev libfreetype6-dev ifmetric zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev python3-matplotlib tesseract-ocr libopenmpi-dev libomp-dev aziot-edge defender-iot-micro-agent-edge
RUN pip3 install -U protobuf
RUN pip3 install Cython
RUN pip3 install filterpy==1.4.5
RUN pip3 install azure.iot.device==2.11.0
RUN pip3 install imutils==0.5.4
RUN pip3 install numpy==1.19.4
RUN pip3 install onnx==1.9.0
# RUN export only affects its own layer; ENV persists across instructions
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
# The project files must exist inside the image; COPY them in
# (assuming the build context is the Traffic_count source tree)
COPY . /Traffic_count
# RUN cd does not persist to the next instruction; use WORKDIR instead
WORKDIR /Traffic_count/yolo
RUN bash ./install_pycuda.sh
WORKDIR /Traffic_count/plugins
RUN make
WORKDIR /Traffic_count/yolo
RUN python3 yolo_to_onnx.py -m yolov4-tiny-custom
RUN python3 onnx_to_tensorrt.py -m yolov4-tiny-custom
WORKDIR /Traffic_count
CMD [ "python3", "inout_edge_v2.py", "-m", "yolov4-tiny-custom" ]
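As a side note on the environment variables: a RUN export in a Dockerfile only affects that single layer, because each RUN starts a fresh shell; ENV is what persists across instructions. A quick illustration outside Docker, using two separate sh invocations to mimic two RUN steps (CUDA_HOME here is just an example variable):

```shell
# First "layer": export a variable inside its own shell
sh -c 'export CUDA_HOME=/usr/local/cuda'
# Second "layer": the variable from the previous shell is gone
sh -c 'echo "CUDA_HOME is: ${CUDA_HOME:-unset}"'
```

The second command prints "CUDA_HOME is: unset", which is why the PATH and LD_LIBRARY_PATH exports need to be ENV instructions instead.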
I have attached an image of the error. Please tell me how I can install TensorRT in this Dockerfile, or how I can export the CUDA path variables, since CUDA should already be installed in the base image.
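From what I have read, on JetPack 4.x the TensorRT libraries (including libnvdla_compiler.so) live on the Jetson host and are mounted into containers by the NVIDIA container runtime, which a plain docker build does not use by default. Is making nvidia the default Docker runtime on the host the right fix? This is a sketch of what I mean, assuming nvidia-container-runtime is already installed on the Jetson:

```shell
# On the Jetson host: make nvidia the default Docker runtime so the host's
# CUDA/TensorRT libraries are mounted during docker build as well as docker run
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
sudo systemctl restart docker
```

With that in place, steps like onnx_to_tensorrt.py should be able to import tensorrt during the build, if my understanding is correct.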