Building on the DeepStream image to include OpenCV, TensorFlow, the TF Object Detection API and tf_trt_models

Hi

I’m sure there’s an easy way to do this, but I’m flummoxed!

We need to build a Docker image for the Xavier AGX that includes the following:

  • DeepStream
  • OpenCV (optimised)
  • TensorFlow
  • TensorFlow Object Detection API
  • tf_trt_models

Following on from this:

I now have everything installed, but when I run a simple load-and-convert on an SSD detection model I get the following warning:

WARNING:tensorflow:TensorRT mismatch. Compiled against version 7.1.0, but loaded 7.1.3.

I’m not really sure which module or library it is referring to that is at version 7.1.x.
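
My best guess is that 7.1.0 is the TensorRT version the tensorflow-1.15.2+nv wheel was built against, while 7.1.3 is the libnvinfer that JetPack r32.4.3 loads at runtime, but I’d appreciate confirmation. As a quick sanity check I’ve been printing the system TensorRT version like this (this assumes the python3-libnvinfer bindings that ship with JetPack are available in the container):

import tensorrt
print(tensorrt.__version__)   # the TensorRT that JetPack provides, e.g. 7.1.3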

The code for this is:

from tf_trt_models.detection import download_detection_model, build_detection_graph
import tensorflow.contrib.tensorrt as trt

config_path, checkpoint_path = download_detection_model('ssd_inception_v2_coco')

frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path
)


trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50
)
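
If the conversion goes through, the intention is to serialize the optimised graph so it doesn’t have to be rebuilt on every run, along these lines (the filename is just an example):

with open('ssd_inception_v2_coco_trt_fp16.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())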

The Dockerfile is:

ARG JETPACK_VERSION="r32.4.3"

#FROM nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-base
FROM mdegans/deepstream:aarch64-samples

# OPENCV *****************************************************************

### build arguments ###
# change these here or with --build-arg FOO="BAR" at build time (e.g. "4.3.0")
ARG OPENCV_VERSION="master"
ARG OPENCV_DO_TEST="FALSE"
# note: 8 jobs will fail on Nano. Try 1 instead.
ARG OPENCV_BUILD_JOBS="6"
# required for apt-get -y to work properly:
ARG DEBIAN_FRONTEND=noninteractive

WORKDIR /usr/local/src/build_opencv

COPY build_opencv.sh .

RUN /bin/bash build_opencv.sh

ARG JETPACK_VERSION="r32.4.3"

# TF 1.15 *****************************************************************

#
# setup environment
#
ENV DEBIAN_FRONTEND=noninteractive
ARG HDF5_DIR="/usr/lib/aarch64-linux-gnu/hdf5/serial/"
ARG MAKEFLAGS=-j6

RUN printenv


#
# install prerequisites - https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html#prereqs
#
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        python3-pip \
        python3-dev \
        gfortran \
        build-essential \
        liblapack-dev \
        libblas-dev \
        libhdf5-serial-dev \
        hdf5-tools \
        libhdf5-dev \
        zlib1g-dev \
        zip \
        libjpeg8-dev \
    && rm -rf /var/lib/apt/lists/*


RUN pip3 install setuptools Cython wheel
RUN pip3 install numpy --verbose
RUN pip3 install h5py==2.10.0 --verbose
RUN pip3 install future==0.17.1 mock==3.0.5 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11 --verbose


#
# TensorFlow (for JetPack 4.4 DP)
#
#  TensorFlow 1.15 https://nvidia.box.com/shared/static/rummpy6q1km1wivomalpkwt2jy28mndf.whl (tensorflow-1.15.2+nv-cp36-cp36m-linux_aarch64.whl)
#
ARG TENSORFLOW_URL=https://nvidia.box.com/shared/static/rummpy6q1km1wivomalpkwt2jy28mndf.whl 
ARG TENSORFLOW_WHL=tensorflow-1.15.2+nv-cp36-cp36m-linux_aarch64.whl

RUN wget --quiet --show-progress --progress=bar:force:noscroll --no-check-certificate ${TENSORFLOW_URL} -O ${TENSORFLOW_WHL} && \
    pip3 install ${TENSORFLOW_WHL} --verbose && \
    rm ${TENSORFLOW_WHL}


# 
# PyCUDA
#
ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
RUN echo "$PATH" && echo "$LD_LIBRARY_PATH"

RUN pip3 install pycuda --verbose

ARG JETPACK_VERSION="r32.4.3"

# TFOD API  *****************************************************************

# Install protobuf compiler
RUN apt-get update  && apt-get install -y --no-install-recommends ca-certificates autoconf automake libtool curl make g++ unzip git python3-dev python-setuptools python3-pip
RUN wget --quiet --show-progress --progress=bar:force:noscroll --no-check-certificate https://github.com/protocolbuffers/protobuf/releases/download/v3.12.3/protobuf-all-3.12.3.zip
RUN unzip protobuf-all-3.12.3.zip 
RUN ls
WORKDIR protobuf-3.12.3
RUN ls
RUN ./configure
RUN make
RUN make check
RUN make install
RUN ldconfig

# Add new user to avoid running as root
RUN useradd -ms /bin/bash tensorflow
USER tensorflow
WORKDIR /home/tensorflow

RUN git clone https://github.com/tensorflow/models.git

# Copy this version of the model garden into the image
COPY --chown=tensorflow . /home/tensorflow/models

# Compile protobuf configs
RUN (cd /home/tensorflow/models/research/ && protoc object_detection/protos/*.proto --python_out=.)
WORKDIR /home/tensorflow/models/research/

RUN cp object_detection/packages/tf1/setup.py ./
ENV PATH="/home/tensorflow/.local/bin:${PATH}"

#RUN python -m pip3 install --user -U pip
#RUN python -m pip3 install --user .
RUN pip3 install --user -U pip
RUN pip3 install --user .

ENV TF_CPP_MIN_LOG_LEVEL 3


# TF_TRT_MODELS *****************************************************************

#RUN useradd -ms /bin/bash xavier
USER tensorflow
RUN echo $HOME

#RUN apt update && apt install -y git python3-dev python-setuptools ca-certificates
WORKDIR /home/tensorflow
RUN git clone --recursive https://github.com/NVIDIA-Jetson/tf_trt_models.git
WORKDIR /home/tensorflow/tf_trt_models

RUN ./install.sh python3
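
Once the image builds, I plan to sanity-check it with a quick import test along these lines (purely illustrative):

# smoke_test.py - run inside the built container
import cv2                   # the optimised OpenCV build
import tensorflow as tf      # the tensorflow-1.15.2+nv wheel
import object_detection      # TF Object Detection API
import tf_trt_models         # NVIDIA tf_trt_models helpers
print('OpenCV', cv2.__version__, '| TensorFlow', tf.__version__)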

Assuming I can get this to work, is there a more production-ready image/method for adding the NVIDIA apt sources to a base image, rather than using one of mdegans’?

Please help and many thanks in advance…

Could you share your setup info with us?

I’m not entirely sure what you mean by setup, so if you need more info please let me know.

Our use case: we currently deploy our object detection system to a Jetson Nano using models trained in TensorFlow and then converted to TensorRT on the Nano. We are looking to migrate this to DeepStream, so we would like a development Docker container that covers our current deployment as well as DeepStream.

We are currently using TF 1.15.x (the TensorFlow Object Detection API did not support TF 2.x until recently).

Thanks

I mean the DeepStream SDK setup.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Hi jimwormold,

Have you managed to get the build issue resolved?
Is there any result you can share?