Hi, is there an official NVIDIA ARM Docker container with the same spec as the NVIDIA Jetson Ubuntu OS?
I mean with CUDA, cuDNN, GStreamer, Python, and OpenCV pre-compiled?
I am able to compile everything myself… but why?
Best regards
Hi @m.fischer, the l4t-base container has CUDA/cuDNN/TensorRT/GStreamer in it. The version of OpenCV from JetPack is in the latest l4t-ml container.
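For reference, a minimal sketch of pulling and running that image on a Jetson (assuming the r32.5.0 tag for JetPack 4.5, as used in the Dockerfile below):

# pull the l4t-ml image whose tag matches your L4T release (assumption: r32.5.0 for JetPack 4.5)
sudo docker pull nvcr.io/nvidia/l4t-ml:r32.5.0-py3
# run it with the NVIDIA runtime so the CUDA/cuDNN/TensorRT libraries are mounted in
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-ml:r32.5.0-py3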
Okay, interesting. Everything works natively like a charm.
With the l4t-base container I have to move OpenCV to compile darknet.
cublas is also missing.
So on JetPack 4.5 every path is set perfectly, but sadly not in the container itself.
Best regards, Martin
Hi @m.fischer, are you building your container from a Dockerfile? If so, set your default Docker daemon runtime to nvidia, and --runtime nvidia will get used during docker build operations (which will make the mounted CUDA/cuDNN files available during build-time):
https://github.com/dusty-nv/jetson-containers#docker-default-runtime
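As a minimal sketch of what that step involves (assuming the stock JetPack Docker install), add "default-runtime": "nvidia" to /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

Restart the Docker daemon (or reboot) afterwards so the setting is picked up.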
Yes, I am building directly on a Jetson Xavier with a fresh setup of JetPack 4.5.
This is my problem -> cublas for 10.1 is missing - #18 by phillip3m
But not natively on the Xavier!
FROM nvcr.io/nvidia/l4t-ml:r32.5.0-py3
#FROM nvcr.io/nvidia/l4t-base:r32.4.4
RUN apt-get update && apt-get install -y jq \
    wget \
    pkg-config \
    git
RUN ln -s /usr/include/opencv4/opencv2/ /usr/include/opencv2
ENV PATH=$PATH:/usr/local/cuda-10.2/bin
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.2/lib64
RUN cp /usr/lib/aarch64-linux-gnu/libcublas.so /usr/local/cuda-10.2/lib64/libcublas.so && \
    cp /usr/lib/aarch64-linux-gnu/libcublas.so.10 /usr/local/cuda-10.2/lib64/libcublas.so.10 && \
    cp /usr/lib/aarch64-linux-gnu/libcublasLt.so.10 /usr/local/cuda-10.2/lib64/libcublasLt.so.10
#Start Darknet Install
RUN git clone https://github.com/AlexeyAB/darknet /app
WORKDIR /app
COPY Makefile Makefile
RUN make
#Install all the required packages for the python script
RUN pip3 install --upgrade pip
COPY launch.sh launch.sh
RUN chmod 777 launch.sh
CMD ["./launch.sh"]
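As a usage note, a hedged example of building this image on the Xavier once the default runtime is nvidia (the darknet-xavier tag is just a placeholder):

# run from the directory that contains the Dockerfile, Makefile, and launch.sh
sudo docker build -t darknet-xavier .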
OK, to use the CUDA/cuDNN libraries/headers during docker build you probably need to set the default-runtime to nvidia, and reboot your system or restart your docker daemon.
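A minimal sketch of restarting the daemon and verifying the change, assuming systemd manages Docker as it does on stock JetPack:

# restart Docker so /etc/docker/daemon.json is re-read
sudo systemctl restart docker
# should now report "Default Runtime: nvidia"
sudo docker info | grep -i "default runtime"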