Thank you for getting these containers to build. I’ll give the ROS Noetic + DeepStream container a try with my code…
Would you also send/upload the Dockerfiles? I have some additional dependencies (the DeepStream Python bindings) that I need to install for my purposes. Specifically, I had to add the following to the dustynv/ros:humble-ros-base-deepstream-l4t-r35.1.0 container:
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src
git clone https://github.com/NVIDIA-AI-IOT/ros2_deepstream
apt-get update
apt install -y git python-dev python3 python3-pip python3.8-dev cmake g++ build-essential libglib2.0-dev libglib2.0-dev-bin python-gi-dev libtool m4 autoconf automake
apt install -y libcairo2-dev pkg-config python3-dev
rm /usr/bin/pip3
apt reinstall python3-pip
pip3 install --upgrade pip
pip install pycairo
cd "/opt/nvidia/deepstream/deepstream/sources/apps/"
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
cd "/opt/nvidia/deepstream/deepstream/sources/apps/deepstream_python_apps/"
git submodule update --init
apt-get install --reinstall ca-certificates
apt install -y python3-gi python3-dev python3-gst-1.0 python-gi-dev git python-dev python3 python3-pip python3.8-dev cmake g++ build-essential libglib2.0-dev libglib2.0-dev-bin libg…
cd "/opt/nvidia/deepstream/deepstream/sources/apps/deepstream_python_apps/3rdparty/gst-python/"
./autogen.sh
make && make install
cd "/opt/nvidia/deepstream/deepstream/sources/apps/deepstream_python_apps/bindings"
mkdir build
cd "/opt/nvidia/deepstream/deepstream/sources/apps/deepstream_python_apps/bindings/build"
NOTE: Download the appropriate pyds file from here:
In my case, it is: pyds-1.1.4-py3-none-linux_aarch64.whl
pip3 install pyds-1.1.4-py3-none-linux_aarch64.whl
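To confirm the wheel actually installed, I ran a quick import test (just my own sanity check, not part of the official steps):
python3 -c "import pyds; print(pyds.__file__)"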
cd "/opt/nvidia/deepstream/deepstream/sources/apps/deepstream_python_apps/"
mv apps/* ./
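At this point the bindings can be smoke-tested with one of the bundled sample apps, e.g. (the sample H.264 stream path is the stock one that ships with DeepStream; the app tries to render to a display, so it may need tweaking in a headless container):
cd deepstream-test1
python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264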
Then, to get the ros2_deepstream example to work:
Modify ~/ros2_ws/src/ros2_deepstream/single_stream_pkg/single_stream_pkg/single_stream_class.py, changing all references to "Classification2D" to "Classification", and adding sys.path.insert(0, '/opt/nvidia/deepstream/deepstream/sources/apps/deepstream_python_apps') before the imports from the "common" module.
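For reference, the rename can be scripted (this sed is just equivalent to the manual edit described above; the sys.path.insert line I still added by hand near the top of the file):
# replace every Classification2D reference with Classification
sed -i 's/Classification2D/Classification/g' ~/ros2_ws/src/ros2_deepstream/single_stream_pkg/single_stream_pkg/single_stream_class.py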
I ran into an additional snag when trying to run my own code in your dustynv/ros:noetic-ros-base-deepstream-l4t-r35.1.0 container…
I also need to install the custom bounding box parser from the DeepStream-Yolo repo (https://github.com/marcoslucianops/DeepStream-Yolo):
I’ve successfully installed and used this parser in another container (built by a coworker - the Dockerfile is attached), which uses nvcr.io/nvidia/deepstream-l4t:6.1.1-triton as its base.
However, when I try to install into dustynv/ros:noetic-ros-base-deepstream-l4t-r35.1.0 I get this error:
root@ubuntu:/opt/DeepStream-Yolo# CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
make: Entering directory '/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
g++ -c -o utils.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-11.4/include utils.cpp
In file included from utils.cpp:26:
utils.h:36:10: fatal error: NvInfer.h: No such file or directory
   36 | #include "NvInfer.h"
      |          ^~~~~~~~~~~
compilation terminated.
make: *** [Makefile:70: utils.o] Error 1
make: Leaving directory '/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
I believe I've seen this error before, but I'm not sure what the solution is. Any ideas?
Dockerfile (3.0 KB)
It looks like the deepstream-l4t container doesn't have the TensorRT development headers installed - can you install libnvinfer-dev in the container from apt?
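e.g. something like this inside the container (or as a RUN step in the Dockerfile):
apt-get update && apt-get install -y --no-install-recommends libnvinfer-dev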
BTW this is my modified Dockerfile.ros.noetic (with only that one -DSETUPTOOLS_DEB_LAYOUT=OFF change and the BASE_IMAGE change):
#
# this dockerfile roughly follows the 'Installing from source' from:
# http://wiki.ros.org/noetic/Installation/Source
#
ARG BASE_IMAGE=nvcr.io/nvidia/deepstream-l4t:6.1.1-samples
FROM ${BASE_IMAGE}
ARG ROS_PKG=ros_base
ENV ROS_DISTRO=noetic
ENV ROS_ROOT=/opt/ros/${ROS_DISTRO}
ENV ROS_PYTHON_VERSION=3
ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /workspace
#
# add the ROS deb repo to the apt sources list
#
RUN apt-get update && \
apt-get install -y --no-install-recommends \
git \
cmake \
build-essential \
curl \
wget \
gnupg2 \
lsb-release \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
RUN sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
RUN curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | apt-key add -
#
# install bootstrap dependencies
#
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libpython3-dev \
python3-rosdep \
python3-rosinstall-generator \
python3-vcstool \
build-essential && \
rosdep init && \
rosdep update && \
rm -rf /var/lib/apt/lists/*
#
# download/build the ROS source
#
RUN mkdir ros_catkin_ws && \
cd ros_catkin_ws && \
rosinstall_generator ${ROS_PKG} vision_msgs --rosdistro ${ROS_DISTRO} --deps --tar > ${ROS_DISTRO}-${ROS_PKG}.rosinstall && \
mkdir src && \
vcs import --input ${ROS_DISTRO}-${ROS_PKG}.rosinstall ./src && \
apt-get update && \
rosdep install --from-paths ./src --ignore-packages-from-source --rosdistro ${ROS_DISTRO} --skip-keys python3-pykdl -y && \
python3 ./src/catkin/bin/catkin_make_isolated --install --install-space ${ROS_ROOT} -DCMAKE_BUILD_TYPE=Release -DSETUPTOOLS_DEB_LAYOUT=OFF && \
rm -rf /var/lib/apt/lists/*
#
# setup entrypoint
#
COPY ./packages/ros_entrypoint.sh /ros_entrypoint.sh
RUN echo 'source /opt/ros/${ROS_DISTRO}/setup.bash' >> /root/.bashrc
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
WORKDIR /
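For reference, I build it from the jetson-containers checkout so that ./packages/ros_entrypoint.sh is in the build context (the image tag is just what I use locally):
docker build -t ros-noetic-deepstream -f Dockerfile.ros.noetic .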
OK, I did apt install libnvinfer-dev, which looked to run through fine.
However, now when I run CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo I get this error:
root@ubuntu:/opt/DeepStream-Yolo# CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
make: Entering directory '/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
g++ -c -o utils.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-11.4/include utils.cpp
In file included from /usr/include/aarch64-linux-gnu/NvInferLegacyDims.h:16,
                 from /usr/include/aarch64-linux-gnu/NvInfer.h:16,
                 from utils.h:36,
                 from utils.cpp:26:
/usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h:19:10: fatal error: cuda_runtime_api.h: No such file or directory
   19 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:70: utils.o] Error 1
make: Leaving directory '/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
I found your thread here, which mentions this issue:
With your help, I went through my source code carefully. I found that part of the TensorRT sample common file was used in my source code, which led me to mistakenly think that the container did not contain CUDA and TensorRT packages. I did the following:
docker run -it -v MY_WORKSPACE:/home dustynv/ros:galactic-ros-base-l4t-r34.1.1 /bin/bash
docker cp /usr/src/tensorrt/samples/common MY_CONTAINER_ID:/usr/src/tensorrt/samples/
When I compiled using colcon toolkit, several outputs like the foll…
But when I search for "cuda_runtime_api.h" it's not present in the container:
root@ubuntu:/usr/local# ls -la
total 48
drwxr-xr-x 1 root root 4096 Jul 22 18:27 .
drwxr-xr-x 1 root root 4096 Jul 20 11:03 ..
drwxr-xr-x 1 root root 4096 Jul 22 20:10 bin
lrwxrwxrwx 1 root root   22 Jul 22 18:27 cuda -> /etc/alternatives/cuda
lrwxrwxrwx 1 root root   25 Jul 22 18:27 cuda-11 -> /etc/alternatives/cuda-11
drwxr-xr-x 1 root root 4096 Jul 22 18:30 cuda-11.4
drwxr-xr-x 2 root root 4096 May 31 15:55 etc
drwxr-xr-x 2 root root 4096 May 31 15:55 games
drwxr-xr-x 1 root root 4096 Aug 23 01:54 include
drwxr-xr-x 1 root root 4096 Aug 23 01:54 lib
lrwxrwxrwx 1 root root    9 May 31 15:55 man -> share/man
drwxr-xr-x 2 root root 4096 May 31 16:12 sbin
drwxr-xr-x 1 root root 4096 Oct  3 16:13 share
drwxr-xr-x 2 root root 4096 May 31 15:55 src
root@ubuntu:/usr/local# find . -name cuda_runtime_api.h
root@ubuntu:/usr/local#
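For what it's worth, my understanding is that cuda_runtime_api.h normally comes from the cuda-cudart-dev package, so I also checked whether that was installed at all:
dpkg -l | grep cuda-cudart-dev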
So I tried adding the command from further up in the current thread, which installs the full CUDA Toolkit:
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    cuda-toolkit-11-4 \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean
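After rebuilding with that step, a quick check confirms the runtime header is now present (just a sanity check):
ls /usr/local/cuda-11.4/include/cuda_runtime_api.h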
This runs through fine, but now the CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo command causes this error:
Step 42/47 : RUN cd DeepStream-Yolo && CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
 ---> Running in c27e1c013039
make: Entering directory '/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
g++ -c -o utils.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-11.4/include utils.cpp
g++ -c -o nvdsinfer_yolo_engine.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-11.4/include nvdsinfer_yolo_engine.cpp
In file included from nvdsinfer_yolo_engine.cpp:26:
/opt/nvidia/deepstream/deepstream/sources/includes/nvdsinfer_custom_impl.h:126:10: fatal error: NvCaffeParser.h: No such file or directory
  126 | #include "NvCaffeParser.h"
      |          ^~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:70: nvdsinfer_yolo_engine.o] Error 1
make: Leaving directory '/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
The command '/bin/sh -c cd DeepStream-Yolo && CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo' returned a non-zero code: 2
Attempting to use nvcr.io/nvidia/deepstream-l4t:6.1.1-triton as the base image to solve these issues…
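Since the Dockerfile above already exposes BASE_IMAGE as a build arg, swapping the base should just be a matter of (tag name is mine):
docker build --build-arg BASE_IMAGE=nvcr.io/nvidia/deepstream-l4t:6.1.1-triton -t ros-noetic-deepstream-triton -f Dockerfile.ros.noetic .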
Cool, thanks for posting the Dockerfile.
Hi @coreyslick, try installing the libnvinfer-plugin-dev package as well.
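i.e. something like the following - and if NvCaffeParser.h specifically is still missing after that, I believe libnvparsers-dev is the TensorRT 8.x package that ships that header:
apt-get update && apt-get install -y libnvinfer-plugin-dev libnvparsers-dev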
Dockerfile.ros.noetic (4.3 KB)
It ended up working using nvcr.io/nvidia/deepstream-l4t:6.1.1-triton as the base image. The Dockerfile is attached.
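In case it helps anyone else: once the parser library builds, the nvinfer config just needs to point at it. The relevant config lines look roughly like this (paths per my container layout; the parse function name is the one DeepStream-Yolo documents):
custom-lib-path=/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseYolo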
I really appreciate all of your work getting these containers to build - thank you!
No problem at all @coreyslick , glad that you were able to get it working!