Implementing DeepStream / TRT integration by Intel's scenario

Hi @_av,
I think if you can use Gst-nvinferserver / Triton, it can accept a .pb file directly.

Sample - /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test
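
For reference, Triton loads models from a model repository on disk. After running prepare_ds_trtis_model_repo.sh (used later in this thread), the SSD model this sample uses should sit in a layout roughly like this (sketched from the logs below, so treat the exact file names as an assumption):

  /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/
    ssd_inception_v2_coco_2018_01_28/
      config.pbtxt        # Triton model configuration
      1/                  # version directory
        model.graphdef    # the TensorFlow frozen-graph .pb, renamed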


Let's try to get it running.
Step 1. Running the container

 docker run -it --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /home/nvidia/gaze:/import nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples

I was able to locate the sample:

root@nx:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps# ./deepstream-infer-tensor-meta-test/

It seems the issue narrows down to building and running Gst-nvinferserver:

root@nx:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test# make
Makefile:25: *** "CUDA_VER is not set".  Stop.
root@nx:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test#

@mchi, would you be able to guide me through the process of starting Gst-nvinferserver?
Compilation steps found:

  sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
   libgstrtspserver-1.0-dev libx11-dev

Compilation Steps:
  $ cd apps/sample_apps/deepstream-infer-tensor-meta-test/
  # Export correct CUDA version (e.g. 10.2, 10.1)
  $ export CUDA_VER=10.2
  $ make
  $ ./deepstream-infer-tensor-meta-app -t <infer-type> <h264_elementary_stream>
    # <infer-type> is selected from "infer" or "inferserver"
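
If unsure which CUDA version is installed before exporting CUDA_VER, one way to check (paths assume the JetPack 4.x layout):

  cat /usr/local/cuda/version.txt   # e.g. "CUDA Version 10.2.89"
  ls /usr/local/ | grep cuda        # lists the installed cuda-* directories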

Attempt #2

 make
g++ -c -o deepstream_infer_tensor_meta_test.o -fPIC -std=c++11 -I ../../../includes -I /usr/local/cuda-10.2/include `pkg-config --cflags gstreamer-1.0 opencv4` -DPLATFORM_TEGRA deepstream_infer_tensor_meta_test.cpp
Package opencv4 was not found in the pkg-config search path.
Perhaps you should add the directory containing `opencv4.pc'
to the PKG_CONFIG_PATH environment variable
No package 'opencv4' found
/bin/sh: 1: g++: not found
Makefile:69: recipe for target 'deepstream_infer_tensor_meta_test.o' failed
make: *** [deepstream_infer_tensor_meta_test.o] Error 127

opencv4 wasn't mentioned in the README as a prerequisite?
On the device I have it at

/usr/include/opencv4/opencv2/video/legacy

shall I mount it or build it from scratch within the container?
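
Two separate problems show up in that make log: g++ is missing, and pkg-config cannot find opencv4.pc. A minimal sketch of addressing both inside the container before resorting to a full OpenCV rebuild (the .pc location is an assumption; check it with find first):

  # g++ comes with build-essential
  apt-get update && apt-get install -y build-essential pkg-config
  # if an opencv4.pc already exists in the image or a mounted volume, reuse it
  find / -name opencv4.pc 2>/dev/null
  export PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig:$PKG_CONFIG_PATH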
adding opencv4

sudo apt-get install -y \
        build-essential \
        cmake \
        git \
        libavcodec-dev \
        libavresample-dev \
        libavformat-dev \
        libdc1394-22-dev \
        libgstreamer1.0-dev \
        libgtk2.0-dev \
        libjpeg-dev \
        libpng-dev \
        libswscale-dev \
        libtbb-dev \
        libtbb2 \
        libtiff-dev \
        libv4l-dev \
        pkg-config \
        python-dev \
        python-numpy \
        python3-dev \
        python3-numpy

wget https://github.com/opencv/opencv/archive/4.4.0.zip
wget https://github.com/opencv/opencv_contrib/archive/4.4.0.tar.gz

unzip 4.4.0.zip
tar -xzf 4.4.0.tar.gz    # extracts opencv_contrib-4.4.0, referenced by the cmake line below
cd opencv-4.4.0
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_CUDA=ON -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON \
      -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 \
      -D CUDA_ARCH_BIN="7.2" -D CUDA_ARCH_PTX="" \
      -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON \
      -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=ON \
      -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF \
      -D BUILD_opencv_python3=yes \
      -D PYTHON3_LIBRARY=/usr/lib/python3.6/config-3.6m-aarch64-linux-gnu/libpython3.6m.so \
      -D BUILD_opencv_cudacodec=OFF \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.4.0/modules \
      -D OPENCV_GENERATE_PKGCONFIG=ON \
      ..

make -j6
make install
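
After the install, a quick sanity check that the sample's Makefile will now find everything:

  pkg-config --modversion opencv4                  # should print 4.4.0
  python3 -c "import cv2; print(cv2.__version__)"  # confirms the Python bindings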

Hi @_av,
Sorry! Could you use the Python sample deepstream-ssd-parser instead?

For Jetson, below are the steps to run it directly on the Jetson system instead of in Docker.
If the Jetson system was installed via SDKManager, OpenCV4 is there by default, so could you use the Jetson system directly instead of Docker?

Steps:
1. Install Python3
1.1. Install python3.6
sudo apt install python3.6
sudo apt install python3-pip
1.2. Switch to python3.6
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
$ sudo update-alternatives --config python
python --version
2. Prepare models (root permission)
# cd /opt/nvidia/deepstream/deepstream/samples/
# ./prepare_ds_trtis_model_repo.sh
3. Install python DS (refer to https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps)
# cd /opt/nvidia/deepstream/deepstream/lib
# python3 setup.py install
# cd /opt/nvidia/deepstream/deepstream/sources
# git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
4. Install prerequisite according to the README under /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser/
5. Prepare models
# cd /opt/nvidia/deepstream/deepstream/samples/
# ./prepare_ds_trtis_model_repo.sh
6. Run
# cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser/
# LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py ../../../../samples/streams/sample_720p.h264

@mchi,
Thank you for following up!
within the container I was able to run

./deepstream-infer-tensor-meta-app
With tracker
Usage: ./deepstream-infer-tensor-meta-app [-t infer-type]<elementary H264 file 1> ... <elementary H264 file n>
     -t infer-type: select form [infer, inferserver], infer by default

I shall try the scenario proposed by you above.
It doesn't seem to require running ./deepstream-infer-tensor-meta-app;
could you explain at which step we provide the .pb file as input, please?
Thank you very much!
Following the steps above:
2. Prepare models (root permission)

Generating Engine files for CaffeModels provided with the SDK
etc.
Model repository prepared successfully.
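
At this point the model repository should exist; a quick check (the exact contents are an assumption, but ssd_inception_v2_coco_2018_01_28 should be among them per the script output):

  ls /opt/nvidia/deepstream/deepstream/samples/trtis_model_repo/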
  3. Install python DS:
    Here I reach the first limitation due to the container use:
    there is no setup.py file, so I cannot run
python3 setup.py install
root@nx:/opt/nvidia/deepstream/deepstream/lib# ls
gst-plugins               libnvds_amqp_proto.so        libnvds_dewarper.so           libnvds_inferutils.so   libnvds_mot_klt.so        libnvds_opticalflow_jetson.so  libnvdsgst_smartrecord.so
libiothub_client.so       libnvds_azure_edge_proto.so  libnvds_dsanalytics.so        libnvds_kafka_proto.so  libnvds_msgconv.so        libnvds_osd.so                 libtrtserver.so
libiothub_client.so.1     libnvds_azure_proto.so       libnvds_infer.so              libnvds_logger.so       libnvds_msgconv.so.1.0.0  libnvds_utils.so               libvpi.so.0.0.2.1
libnvbufsurface.so        libnvds_batch_jpegenc.so     libnvds_infer_server.so       libnvds_meta.so         libnvds_nvdcf.so          libnvdsgst_helper.so           tensorflow
libnvbufsurftransform.so  libnvds_csvparser.so         libnvds_infercustomparser.so  libnvds_mot_iou.so      libnvds_nvtxhelper.so     libnvdsgst_meta.so

Why doesn't the DS5 docker container have the setup.py?
From this point I should figure out how to add DeepStream to the Jetson OS itself: the device was flashed with JetPack, but probably did not get DeepStream, due to the headless JetPack installation.
Installing DeepStream on the NX:

@nx:~$ sudo dpkg -i deepstream-5.0_5.0.0-1_arm64.deb
Selecting previously unselected package deepstream-5.0.
(Reading database ... 248093 files and directories currently installed.)
Preparing to unpack deepstream-5.0_5.0.0-1_arm64.deb ...
Unpacking deepstream-5.0 (5.0.0-1) ...
I just noticed that there is no setup.py in the system-wide DS5 installation either:
@nx:/opt/nvidia/deepstream/deepstream/lib$ ls
gst-plugins                   libnvds_infer.so
libiothub_client.so           libnvds_inferutils.so
libiothub_client.so.1         libnvds_kafka_proto.so
libnvbufsurface.so            libnvds_logger.so
libnvbufsurftransform.so      libnvds_meta.so
libnvds_amqp_proto.so         libnvds_mot_iou.so
libnvds_azure_edge_proto.so   libnvds_mot_klt.so
libnvds_azure_proto.so        libnvds_msgconv.so
libnvds_batch_jpegenc.so      libnvds_msgconv.so.1.0.0
libnvds_csvparser.so          libnvds_nvdcf.so
libnvds_dewarper.so           libnvds_nvtxhelper.so
libnvds_dsanalytics.so        libnvds_opticalflow_jetson.so
libnvdsgst_helper.so          libnvds_osd.so
libnvdsgst_meta.so            libnvds_utils.so
libnvdsgst_smartrecord.so     libtrtserver.so
libnvds_infercustomparser.so  libvpi.so.0.0.2.1
libnvds_infer_server.so       tensorflow

@mchi, where do I get the setup.py file for finishing step 3?
Upd: got it;
proceeding with step 3 as follows:

cd /opt/nvidia/deepstream/deepstream/sources
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps

but the README as of now suggests running the apps; it won't add the missing setup.py mentioned before.
How do I get from here to the step below?

python3 setup.py

installing python-gi & gst-python
installed; still no setup.py in the lib folder

this cannot be done,
neither in Docker nor in the system-wide installation of DS

OK, I guess it's because my DS 5.0 was installed from the tar package; since your DS was installed via deb or a docker image, you could skip it.
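
For reference, the tar-based install that ships setup.py looks roughly like this (the archive filename is an assumption; check the exact name of the package you download):

  sudo tar -xvf deepstream_sdk_v5.0.0_jetson.tbz2 -C /
  cd /opt/nvidia/deepstream/deepstream-5.0
  sudo ./install.sh
  sudo ldconfig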

pyds seems installable with
pip3 install pyds
Without it installed, the app would throw an import error;
once installed, it would still throw errors:

/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser$  LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
2020-09-15 11:20:05.494474: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Opening in BLOCKING MODE 
I0915 15:20:06.213483 10785 server.cc:120] Initializing Triton Inference Server
I0915 15:20:06.234297 10785 server_status.cc:55] New status tracking for model 'ssd_inception_v2_coco_2018_01_28'
I0915 15:20:06.235407 10785 model_repository_manager.cc:680] loading: ssd_inception_v2_coco_2018_01_28:1
I0915 15:20:06.236562 10785 base_backend.cc:176] Creating instance ssd_inception_v2_coco_2018_01_28_0_0_gpu0 on GPU 0 (7.2) using model.graphdef
2020-09-15 11:20:06.316502: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-15 11:20:06.317483: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xd430400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-15 11:20:06.317804: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-15 11:20:06.318521: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-15 11:20:06.319076: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:06.319479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-15 11:20:06.320127: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-15 11:20:06.320494: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-15 11:20:06.339104: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-15 11:20:06.377522: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-15 11:20:06.396102: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-15 11:20:06.412596: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-15 11:20:06.412814: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-15 11:20:06.413005: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:06.413214: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:06.413309: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-09-15 11:20:15.456085: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-15 11:20:15.456194: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-09-15 11:20:15.456314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-09-15 11:20:15.456611: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:15.457781: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:15.458015: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:15.458147: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3108 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-09-15 11:20:15.462824: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7eac76d6d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-15 11:20:15.462934: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
I0915 15:20:17.178477 10785 model_repository_manager.cc:837] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
2020-09-15 11:20:27.674362: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-15 11:20:44.404016: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Traceback (most recent call last):
  File "deepstream_ssd_parser.py", line 236, in pgie_src_pad_buffer_probe
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
AttributeError: module 'pyds' has no attribute 'gst_buffer_get_nvds_batch_meta'
(the same traceback repeated for every buffer until end of stream)
End-of-stream
I0915 15:24:30.328406 10785 model_repository_manager.cc:708] unloading: ssd_inception_v2_coco_2018_01_28:1
I0915 15:24:31.386002 10785 model_repository_manager.cc:816] successfully unloaded 'ssd_inception_v2_coco_2018_01_28' version 1
I0915 15:24:31.415066 10785 server.cc:179] Waiting for in-flight inferences to complete.
I0915 15:24:31.415574 10785 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
Segmentation fault (core dumped)

Seems like a pyds version mismatch?
Maybe it's possible to run the same without Python,
by using deepstream-infer-tensor-meta-test?
Where in the steps executed above do I provide this file as input?
https://storage.googleapis.com/gaze-dev/model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
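
For context, the failing call sits in a pad-probe callback in deepstream_ssd_parser.py; a trimmed sketch of what the sample does at that line (simplified, so the details here are illustrative rather than verbatim):

  import pyds
  import gi
  gi.require_version("Gst", "1.0")
  from gi.repository import Gst

  def pgie_src_pad_buffer_probe(pad, info, u_data):
      gst_buffer = info.get_buffer()
      if not gst_buffer:
          return Gst.PadProbeReturn.OK
      # this is the call that raises AttributeError when the installed pyds
      # does not match the DeepStream version; the GA bindings provide it
      batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
      l_frame = batch_meta.frame_meta_list
      while l_frame is not None:
          frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
          print("Frame Number =", frame_meta.frame_num)
          try:
              l_frame = l_frame.next
          except StopIteration:
              break
      return Gst.PadProbeReturn.OK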
Could you also explain which folder the statement below refers to, please?

 Run the docker with this Python Bindings directory mapped

source README from /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser

For the DS/JP versions below, please use the DS GA docker image instead of the DP docker image:

• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.4

In the tests above, both the docker and the system-wide versions were tried.

However, thank you for following up!
I shall try with the GA docker image,
but which Python bindings folder do I mount? Does that mean pyds?

For x86_64 and Jetson Docker:
  1. Use the provided docker container and follow directions for
     Triton Inference Server in the SDK README --
     be sure to prepare the detector models.
  2. Run the docker with this Python Bindings directory mapped
  3. Install required Python packages inside the container:
     $ apt install python3-gi python3-dev python3-gst-1.0 python3-numpy -y

What is the Python bindings folder?
For the system-wide setup, should I use the tar.gz DeepStream package in order to get the Python bindings?

For the dockerized attempt:
I. Which pybind folder do I mount into the latest container? The pyds pip installation folder? The pybind11-dev folder?
For the non-dockerized attempt:
II. Shall I reinstall DeepStream on the system-wide Jetson installation from the tar, in order to get setup.py for Python DS?
For the non-Python attempt:
III. With or without Docker, is there a chance to do the same without Python, but with ./deepstream-infer-tensor-meta-app?
IV. In any of the scenarios I-III above, where do I specify the input .pb file for processing?
Thank you very much.
P.S. It seems I might not have had the pybind11-dev package in the system-wide NX environment; could that be why there was no setup.py in the /lib folder?
Update:
Yes, I got it;
in the GA container I got the setup.py file.

DS version aligned to GA both in Docker and system-wide.
pyds was also reinstalled using python3 setup.py install.
System-wide execution shows:

@nx:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser$ LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
2020-09-16 04:33:04.103328: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Opening in BLOCKING MODE 
I0916 08:33:05.176450 18822 server.cc:120] Initializing Triton Inference Server
I0916 08:33:05.185105 18822 server_status.cc:55] New status tracking for model 'ssd_inception_v2_coco_2018_01_28'
I0916 08:33:05.185645 18822 model_repository_manager.cc:680] loading: ssd_inception_v2_coco_2018_01_28:1
I0916 08:33:05.186497 18822 base_backend.cc:176] Creating instance ssd_inception_v2_coco_2018_01_28_0_0_gpu0 on GPU 0 (7.2) using model.graphdef
2020-09-16 04:33:05.255986: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-16 04:33:05.256928: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3bde9850 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-16 04:33:05.257199: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-16 04:33:05.257783: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-16 04:33:05.258277: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.258738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-16 04:33:05.259377: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-16 04:33:05.259713: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-16 04:33:05.317918: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-16 04:33:05.407164: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-16 04:33:05.512466: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-16 04:33:05.562609: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-16 04:33:05.563453: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-16 04:33:05.563990: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.564500: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.564781: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-09-16 04:33:14.622199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-16 04:33:14.622324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-09-16 04:33:14.622381: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-09-16 04:33:14.622609: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.622957: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.623215: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.623403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3108 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-09-16 04:33:14.628323: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ebc07c7d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-16 04:33:14.628471: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
I0916 08:33:16.293165 18822 model_repository_manager.cc:837] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
2020-09-16 04:33:25.287758: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-16 04:33:35.965736: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Frame Number=0 Number of Objects=5 Vehicle_count=2 Person_count=2
Frame Number=1 Number of Objects=5 Vehicle_count=2 Person_count=2
End-of-stream
I0916 08:37:59.947849 18822 model_repository_manager.cc:708] unloading: ssd_inception_v2_coco_2018_01_28:1
I0916 08:38:01.067126 18822 model_repository_manager.cc:816] successfully unloaded 'ssd_inception_v2_coco_2018_01_28' version 1
I0916 08:38:01.069230 18822 server.cc:179] Waiting for in-flight inferences to complete.
I0916 08:38:01.069668 18822 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests

thank you for the update

2. Run the docker with Python Bindings mapped using the following option:
   -v <path to this python bindings directory>:/opt/nvidia/deepstream/deepstream-5.0/sources/python
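
Putting that together with the docker invocation from the start of this thread, the GA run might look like this (the image tag is an assumption to adapt, and the README's placeholder for the bindings directory is kept as-is):

  docker run -it --net=host --runtime nvidia -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -v <path to this python bindings directory>:/opt/nvidia/deepstream/deepstream-5.0/sources/python \
    nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples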

From the system-wide DS 5.0 GA:

/usr/bin/deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 06:22:07.740562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...

Using winsys: x11 
Opening in BLOCKING MODE 
ERROR: failed to read path :inferserver/dstensor_sgie3_config.txt
0:00:00.748249328 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver_impl.cpp:387:start:<secondary3-nvinference-engine> error: Configuration file read failed
0:00:00.748309459 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver_impl.cpp:387:start:<secondary3-nvinference-engine> error: Config file path: inferserver/dstensor_sgie3_config.txt
0:00:00.748399705 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<secondary3-nvinference-engine> error: gstnvinferserver_impl start failed
Running...
ERROR from element secondary3-nvinference-engine: Configuration file read failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(387): start (): /GstPipeline:dstensor-pipeline/GstNvInferServer:secondary3-nvinference-engine:
Config file path: inferserver/dstensor_sgie3_config.txt
Returned, stopping playback
Deleting pipeline

From the app built from sources (the app resolves inferserver/dstensor_sgie3_config.txt relative to the current working directory, so it must be launched from the sample's source folder):

/apps/sample_apps/deepstream-infer-tensor-meta-test$ ./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 06:25:11.619103: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...

Using winsys: x11 
Opening in BLOCKING MODE 
I0916 10:25:11.804289 18999 server.cc:120] Initializing Triton Inference Server
I0916 10:25:11.822408 18999 server_status.cc:55] New status tracking for model 'Secondary_VehicleTypes'
E0916 10:25:11.822566 18999 model_repository_manager.cc:1139] failed to load model 'Secondary_VehicleTypes': at least one version must be available under the version policy of model 'Secondary_VehicleTypes'
ERROR: TRTIS: failed to load model Secondary_VehicleTypes, trtis_err_str:INTERNAL, err_msg:failed to load 'Secondary_VehicleTypes', no version is available
ERROR: failed to load model: Secondary_VehicleTypes, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: failed to initialize backend while ensuring model:Secondary_VehicleTypes ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:00.765766985 18999   0x5591e4fef0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<secondary3-nvinference-engine> nvinferserver[UID 4]: Error in createNNBackend() <infer_trtis_context.cpp:223> [UID = 4]: failed to initialize trtis backend for model:Secondary_VehicleTypes, nvinfer error:NVDSINFER_TRTIS_ERROR
I0916 10:25:11.822948 18999 server.cc:179] Waiting for in-flight inferences to complete.
I0916 10:25:11.822981 18999 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
0:00:00.765945268 18999   0x5591e4fef0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<secondary3-nvinference-engine> nvinferserver[UID 4]: Error in initialize() <infer_base_context.cpp:78> [UID = 4]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:00.765973621 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<secondary3-nvinference-engine> error: Failed to initialize InferTrtIsContext
0:00:00.765991927 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<secondary3-nvinference-engine> error: Config file path: inferserver/dstensor_sgie3_config.txt
0:00:00.766075964 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<secondary3-nvinference-engine> error: gstnvinferserver_impl start failed
Running...
ERROR from element secondary3-nvinference-engine: Failed to initialize InferTrtIsContext
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(439): start (): /GstPipeline:dstensor-pipeline/GstNvInferServer:secondary3-nvinference-engine:
Config file path: inferserver/dstensor_sgie3_config.txt
Returned, stopping playback
Deleting pipeline

I can finally run this on the system-wide DS GA:

./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 07:40:30.459983: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...


Thank you very much. It also plays & displays the video,
but how do I process a custom .pb?
The Python implementation seems to process the video but won't show any output as video, just text output.
The only difference between the Python implementation

LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py  /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

VS the non-python

 ./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

is that the latter will draw video output, but the former won't.
Does the Python version not support the video window pop-up, or is it just a bug because I am using a USB-C display?

This is the sample with the SSD .pb file;
please check the README and code in this sample.

Please check the code; the Python sample writes its output to a video file.

Thank you for following up!
For GStreamer I often have to add nvoverlaysink display-id=2 for video output, as I am on a USB-C display; probably I need to add that to the GStreamer section of the Python file (see the sketch below)?
Moreover, I shall locate the README & code to find any clue on how to input a custom .pb.
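
A minimal sketch of that idea, assuming one edits deepstream_ssd_parser.py and swaps the sink it creates for an overlay sink (nvoverlaysink and its display-id property are standard on Jetson; the surrounding variable names in the sample may differ):

  import gi
  gi.require_version("Gst", "1.0")
  from gi.repository import Gst

  Gst.init(None)

  # render on a specific display instead of encoding to a file
  sink = Gst.ElementFactory.make("nvoverlaysink", "nv-overlay-sink")
  if sink is None:
      raise RuntimeError("could not create nvoverlaysink")
  sink.set_property("display-id", 2)  # USB-C display, per the workaround above
  sink.set_property("sync", 0)        # don't throttle rendering to the clock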

Which README?
This README?
It doesn't say anything about using one's own .pb;
it just tells how to set up the prerequisites & the Triton server.

/deepstream-ssd-parser$ cat README 

@mchi, could you explain how exactly to load a custom .pb file with the Triton server, please?
Preferably without Python, using the Triton inference app if possible; Python just adds an extra complication that, as it seems to me, is not present when using the C version of the Triton inference.
Or also with Python,
so that at least one of the approaches will hopefully work.
UPD: the Python version doesn't show video, but it writes video output.

  1. Prepare the models, as in the step mentioned previously:

    cd /opt/nvidia/deepstream/deepstream/samples/

    ./prepare_ds_trtis_model_repo.sh

  2. See dstest_ssd_nopostprocess.txt under deepstream-ssd-parser:

infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    trt_is {
      model_name: "ssd_inception_v2_coco_2018_01_28"
      version: -1
      model_repo {
        root: "../../../../samples/trtis_model_repo"
        log_level: 2
        tf_gpu_memory_fraction: 0.6
        tf_disable_soft_placement: 0
      }
    }
  }
}
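
To point this sample at a custom model instead, the .pb would go into the same repository with its own config, and model_name above would change to match. A sketch only, not verified against the linked model: since the URL points at a SavedModel (saved_model.pb), Triton would want platform tensorflow_savedmodel, and the tensor names and dims below are placeholders that must match the actual exported graph:

  # assumed repository layout:
  #   samples/trtis_model_repo/my_custom_model/
  #     config.pbtxt
  #     1/model.savedmodel/saved_model.pb

  # config.pbtxt (placeholder tensor names and dims)
  name: "my_custom_model"
  platform: "tensorflow_savedmodel"
  max_batch_size: 1
  input [
    { name: "image_tensor", data_type: TYPE_UINT8, format: FORMAT_NHWC, dims: [ 300, 300, 3 ] }
  ]
  output [
    { name: "detection_boxes",   data_type: TYPE_FP32, dims: [ 100, 4 ] },
    { name: "detection_scores",  data_type: TYPE_FP32, dims: [ 100 ] },
    { name: "detection_classes", data_type: TYPE_FP32, dims: [ 100 ] }
  ]

Then model_name in the infer_config above would become "my_custom_model", and the post-processing parser in the Python app would need adjusting to the custom model's outputs.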

@mchi,
Hi, thank you for your response!
However, ./prepare_ds_trtis_model_repo.sh seems to download some pre-defined models.
My intention was to load a custom .pb, e.g. https://storage.googleapis.com/gaze-dev/model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb

This is a reference sample; you can refer to it to run inference with your own .pb.

@mchi
could you guide me through modifying the sample in order to be able to use a custom .pb file with it, please?
