Implementing DeepStream/TRT integration for Intel's scenario

This is an effort to incorporate Intel's model into TRT/DeepStream
so that it runs within an NGC container.
It is attempted in the hope that it will help resolve another issue discussed in the thread this topic derives from:
Given there is .engine file & h5, how to incorporate it into Deepstream? - #17 by mchi

scenario: Detecting Diabetic Retinopathy Using Deep Learning on Intel®...
objective: do the conversion & integration into DeepStream/TRT

The attempt will be conducted on an NX devkit with JetPack 4.4 GA;
the containers that will be used to try the integration of the model are l4t-tensorflow & DeepStream 5 for L4T.

Hi @Andrey1984,
Please provide the setup info as in other topics.

Steps:

  1. convert the TensorFlow model to ONNX, for example:
     $ python3 -m tf2onnx.convert --input test_eval.pb --output test_eval.onnx --inputs 'time_distributed_1_input:0' --outputs 'dense_3/BiasAdd:0' --opset 11
  2. configure the gie config with the ONNX model (see the config sketch after this list), and implement the detection post-processing
  3. run inference with video or image as input for DeepStream
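
For reference, a minimal gst-nvinfer config sketch for step 2 (the file names and classifier settings below are illustrative assumptions, not values from this thread):

[property]
gpu-id=0
# hypothetical model & label files
onnx-file=test_eval.onnx
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 1 = classifier
network-type=1
classifier-threshold=0.5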

Hi @mchi,
Could you expand on what you mean by that, please?


However, following Intel's article: attempt #1.

git clone https://github.com/javathunderman/diabetic-retinopathy-screening
cd diabetic-retinopathy-screening/
git clone https://github.com/Nomikxyz/retinopathy-dataset
mkdir images
cd images
mkdir diseased
mkdir nondiseased
cd ..

Step 0:

python3 retrain.py   --bottleneck_dir=bottlenecks   --how_many_training_steps=300   --model_dir=inception   --output_graph=retrained_graph.pb   --output_labels=retrained_labels.txt   --image_dir=images/

Execution of step 0 resulted in two files:

-rw-rw-r-- 1 1000 1000 87436548 Sep  3 22:56 retrained_graph.pb
-rw-rw-r-- 1 1000 1000       21 Sep  3 22:56 retrained_labels.txt

https://storage.googleapis.com/gaze-dev/retrained_labels.txt
https://storage.googleapis.com/gaze-dev/retrained_graph.pb
Step 1:

How do I rewrite the given line for the resulting .pb file? @mchi, could you help adapt the provided example to the given Inception v3 retrained model, please?
Which arguments should be passed to the script for the input & output?

/import/diabetic-retinopathy-screening# python3 -m tf2onnx.convert --input retrained_graph.pb --output retrained_graph.pb.onnx
2020-09-03 23:27:09.936934: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-09-03 23:27:16,189 - WARNING - From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-09-03 23:27:16.284934: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-03 23:27:16.303278: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:16.303465: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1634] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-03 23:27:16.303553: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-03 23:27:16.323004: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-03 23:27:16.339354: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-03 23:27:16.377049: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-03 23:27:16.394572: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-03 23:27:16.410651: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-03 23:27:16.413606: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-03 23:27:16.413928: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:16.414285: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:16.414418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1762] Adding visible gpu devices: 0
2020-09-03 23:27:16.441055: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-03 23:27:16.441742: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xfd54b70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-03 23:27:16.441805: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-03 23:27:16.608665: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:16.609341: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x11393ae0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-03 23:27:16.609422: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
2020-09-03 23:27:16.609801: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:16.609936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1634] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-03 23:27:16.610012: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-03 23:27:16.610068: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-03 23:27:16.610112: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-03 23:27:16.610164: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-03 23:27:16.610207: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-03 23:27:16.610246: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-03 23:27:16.610286: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-03 23:27:16.610429: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:16.610665: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:16.610745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1762] Adding visible gpu devices: 0
2020-09-03 23:27:16.610866: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-03 23:27:22.960352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1175] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-03 23:27:22.960511: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181]      0 
2020-09-03 23:27:22.960576: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1194] 0:   N 
2020-09-03 23:27:22.961009: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:22.961508: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-03 23:27:22.961708: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1320] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3852 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-09-03 23:27:24.153539: W tensorflow/core/framework/op_def_util.cc:357] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 171, in <module>
    main()
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 125, in main
    graph_def, inputs, outputs = tf_loader.from_graphdef(args.graphdef, args.inputs, args.outputs)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tf_loader.py", line 150, in from_graphdef
    frozen_graph = freeze_session(sess, input_names=input_names, output_names=output_names)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tf_loader.py", line 113, in freeze_session
    output_node_names = [i.split(':')[:-1][0] for i in output_names]
TypeError: 'NoneType' object is not iterable
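
The TypeError above is raised because --inputs/--outputs were not passed. One way to find the node names to pass is to list the nodes of the frozen graph; a minimal sketch assuming the TF1-compat API available in this container (for the retrain.py Inception v3 script the output node is commonly final_result, but verify against the listing):

python3 -c "
import tensorflow.compat.v1 as tf
# parse the frozen GraphDef and print each node's op and name
gd = tf.GraphDef()
gd.ParseFromString(open('retrained_graph.pb', 'rb').read())
for n in gd.node:
    print(n.op, n.name)
"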

Like the info below:

• Hardware Platform (Jetson / GPU)
Jetson Nano, RTX 2060
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.4
• TensorRT Version
7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
440

The first post has been updated earlier,
so it can be seen from it that the NX devkit is used;
OS version: JetPack 4.4 GA.
container: nvcr.io/nvidia/l4t-tensorflow:r32.4.3-tf2.2-py3
The TRT version is the default one for 4.4 GA & the docker container. I will try to find a command to retrieve version information from them. Should it be something like trtexec --version?
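
For reference, two common ways to check the TensorRT version on a Jetson (standard L4T packaging assumed; these commands are suggestions, not from this thread):

dpkg -l | grep -i tensorrt
grep NV_TENSORRT /usr/include/aarch64-linux-gnu/NvInferVersion.h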

Training a model on top of the provided images with Google AI resulted in the following outcome:
https://storage.googleapis.com/gaze-dev/model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
The question is still how to sort out the arguments for the tf2onnx conversion.

It doesn't appear that the Google AI .pb file converts:

python3 -m tf2onnx.convert --input model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb --output model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.onnx
2020-09-09 01:10:05.757937: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-09-09 01:10:13,493 - WARNING - From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2020-09-09 01:10:13.552905: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-09 01:10:13.562878: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:13.563061: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1634] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-09 01:10:13.563151: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-09 01:10:13.643033: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-09 01:10:13.721577: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-09 01:10:13.833017: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-09 01:10:13.969823: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-09 01:10:14.050905: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-09 01:10:14.051776: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-09 01:10:14.052726: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:14.053019: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:14.053120: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1762] Adding visible gpu devices: 0
2020-09-09 01:10:14.084200: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-09 01:10:14.086109: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xf1d9b20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-09 01:10:14.087209: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-09 01:10:14.258334: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:14.260268: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x10817f20 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-09 01:10:14.261386: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
2020-09-09 01:10:14.262925: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:14.264119: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1634] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-09 01:10:14.264376: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-09 01:10:14.264957: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-09 01:10:14.265051: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-09 01:10:14.265120: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-09 01:10:14.265186: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-09 01:10:14.265724: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-09 01:10:14.265818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-09 01:10:14.266055: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:14.266866: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:14.267550: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1762] Adding visible gpu devices: 0
2020-09-09 01:10:14.268168: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-09 01:10:20.229699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1175] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-09 01:10:20.229857: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181]      0 
2020-09-09 01:10:20.229925: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1194] 0:   N 
2020-09-09 01:10:20.230462: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:20.231777: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:952] ARM64 does not support NUMA - returning NUMA node zero
2020-09-09 01:10:20.232059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1320] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4453 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 171, in <module>
    main()
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 125, in main
    graph_def, inputs, outputs = tf_loader.from_graphdef(args.graphdef, args.inputs, args.outputs)
  File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tf_loader.py", line 147, in from_graphdef
    graph_def.ParseFromString(f.read())
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/message.py", line 199, in ParseFromString
    return self.MergeFromString(serialized)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1134, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 738, in DecodeField
    if value._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 717, in DecodeRepeatedField
    if value.add()._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1201, in InternalParse
    pos = field_decoder(buffer, new_pos, end, self, field_dict)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 872, in DecodeMap
    if submsg._InternalParse(buffer, pos, new_pos) != new_pos:
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/python_message.py", line 1188, in InternalParse
    buffer, new_pos, wire_type)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 973, in _DecodeUnknownField
    (data, pos) = _DecodeUnknownFieldSet(buffer, pos)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 952, in _DecodeUnknownFieldSet
    (data, pos) = _DecodeUnknownField(buffer, pos, wire_type)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/decoder.py", line 977, in _DecodeUnknownField
    raise _DecodeError('Wrong wire type in tag.')
google.protobuf.message.DecodeError: Wrong wire type in tag.
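
The protobuf DecodeError here usually means the file is a SavedModel rather than a frozen GraphDef, so --input/--graphdef cannot parse it. tf2onnx accepts SavedModels via a separate flag that takes the export directory (the directory name below is a placeholder):

python3 -m tf2onnx.convert --saved-model ./saved_model_dir --output model.onnx --opset 11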

Hi @Andrey1984,
I think, if you can use Gst-nvinferserver / Triton, it can accept a pb file directly.

Sample - /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test


Let's try to get it running.
Step 1. Running the container

 docker run -it --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0 -v /tmp/.X11-unix/:/tmp/.X11-unix -v /home/nvidia/gaze:/import nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples

I was able to locate the sample:

root@nx:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps# ./deepstream-infer-tensor-meta-test/

It seems the issue narrows down to running Gst-nvinferserver

root@nx:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test# make
Makefile:25: *** "CUDA_VER is not set".  Stop.
root@nx:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test# 

@mchi, would you be able to guide me through the process of starting Gst-nvinferserver?
Compilation steps found:

  sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
   libgstrtspserver-1.0-dev libx11-dev

Compilation Steps:
  $ cd apps/sample_apps/deepstream-infer-tensor-meta-test/
  # Export correct CUDA version (e.g. 10.2, 10.1)
  $ export CUDA_VER=10.2
  $ make
  $ ./deepstream-infer-tensor-meta-app -t <infer-type> <h264_elementary_stream>
    # <infer-type> is selected from "infer" or "inferserver"

Attempt #2

 make
g++ -c -o deepstream_infer_tensor_meta_test.o -fPIC -std=c++11 -I ../../../includes -I /usr/local/cuda-10.2/include `pkg-config --cflags gstreamer-1.0 opencv4` -DPLATFORM_TEGRA deepstream_infer_tensor_meta_test.cpp
Package opencv4 was not found in the pkg-config search path.
Perhaps you should add the directory containing `opencv4.pc'
to the PKG_CONFIG_PATH environment variable
No package 'opencv4' found
/bin/sh: 1: g++: not found
Makefile:69: recipe for target 'deepstream_infer_tensor_meta_test.o' failed
make: *** [deepstream_infer_tensor_meta_test.o] Error 127
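
The "g++: not found" part suggests the samples container lacks build tools; installing them first is my assumption, not a step from the thread:

apt-get update && apt-get install -y build-essential pkg-config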

opencv4 wasn't mentioned in the readme as a prerequisite?
On the device I have it at

/usr/include/opencv4/opencv2/video/legacy

Shall I mount it, or build it from scratch within the container?
Adding opencv4:

sudo apt-get install -y \
        build-essential \
        cmake \
        git \
        libavcodec-dev \
        libavresample-dev \
        libavformat-dev \
        libdc1394-22-dev \
        libgstreamer1.0-dev \
        libgtk2.0-dev \
        libjpeg-dev \
        libpng-dev \
        libswscale-dev \
        libtbb-dev \
        libtbb2 \
        libtiff-dev \
        libv4l-dev \
        pkg-config \
        python-dev \
        python-numpy \
        python3-dev \
        python3-numpy

wget https://github.com/opencv/opencv/archive/4.4.0.zip
wget https://github.com/opencv/opencv_contrib/archive/4.4.0.tar.gz

unzip 4.4.0.zip
tar -xzf 4.4.0.tar.gz
cd opencv-4.4.0
mkdir build
cd build
cmake -D WITH_CUDA=ON -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON \
      -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 \
      -D CUDA_ARCH_BIN="7.2" -D CUDA_ARCH_PTX="" \
      -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON \
      -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=ON \
      -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF \
      -D BUILD_opencv_python3=yes \
      -D PYTHON3_LIBRARY=/usr/lib/python3.6/config-3.6m-aarch64-linux-gnu/libpython3.6m.so \
      -D BUILD_opencv_cudacodec=OFF \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.4.0/modules \
      -D OPENCV_GENERATE_PKGCONFIG=ON ..

make -j6
make install
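
After make install, pkg-config may still not find opencv4 until the generated opencv4.pc is on its search path (paths below assume the default CMAKE_INSTALL_PREFIX=/usr/local):

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
ldconfig
pkg-config --modversion opencv4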

Hi @Andrey1984,
Sorry! Could you use the python sample - deepstream-ssd-parser?

For Jetson, below are the steps to run it directly on the Jetson system instead of docker.
If the Jetson system was installed via SDKManager, OpenCV4 is there by default, so could you use the Jetson system directly instead of docker?

Steps:
1. Install Python3
1.1. Install python3.6
sudo apt install python3.6
sudo apt install python3-pip
1.2. Switch to python3.6
$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
$ sudo update-alternatives --config python
python --version
2. Prepare models (root permission)
# cd /opt/nvidia/deepstream/deepstream/samples/
# ./prepare_ds_trtis_model_repo.sh
3. Install python DS (refer to the README under deepstream_python_apps/apps in the NVIDIA-AI-IOT/deepstream_python_apps GitHub repo)
# cd /opt/nvidia/deepstream/deepstream/lib
# python3 setup.py install
# cd /opt/nvidia/deepstream/deepstream/sources
# git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
4. Install prerequisite according to the README under /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser/
5. Prepare models
# cd /opt/nvidia/deepstream/deepstream/samples/
# ./prepare_ds_trtis_model_repo.sh
6. Run
# cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser/
# LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py ../../../../samples/streams/sample_720p.h264

@mchi,
Thank you for following up!
Within the container I was able to run:

/deepstream-infer-tensor-meta-app 
With tracker
Usage: ./deepstream-infer-tensor-meta-app [-t infer-type]<elementary H264 file 1> ... <elementary H264 file n>
     -t infer-type: select form [infer, inferserver], infer by default

I shall try the scenario proposed by you above.
It doesn't seem to require running ./deepstream-infer-tensor-meta-app.
Could you expand on at which step we provide the .pb file as input, please?
Thank you very much!
Following the steps above,
2. Prepare models (root permission)

Generating Engine files for CaffeModels provided with the SDK
etc.
Model repository prepared successfully.
  3. Install DS python:
    Here I reach the first limitation due to the container use;
    there is no setup.py file
python3 setup.py install
root@nx:/opt/nvidia/deepstream/deepstream/lib# ls
gst-plugins               libnvds_amqp_proto.so        libnvds_dewarper.so           libnvds_inferutils.so   libnvds_mot_klt.so        libnvds_opticalflow_jetson.so  libnvdsgst_smartrecord.so
libiothub_client.so       libnvds_azure_edge_proto.so  libnvds_dsanalytics.so        libnvds_kafka_proto.so  libnvds_msgconv.so        libnvds_osd.so                 libtrtserver.so
libiothub_client.so.1     libnvds_azure_proto.so       libnvds_infer.so              libnvds_logger.so       libnvds_msgconv.so.1.0.0  libnvds_utils.so               libvpi.so.0.0.2.1
libnvbufsurface.so        libnvds_batch_jpegenc.so     libnvds_infer_server.so       libnvds_meta.so         libnvds_nvdcf.so          libnvdsgst_helper.so           tensorflow
libnvbufsurftransform.so  libnvds_csvparser.so         libnvds_infercustomparser.so  libnvds_mot_iou.so      libnvds_nvtxhelper.so     libnvdsgst_meta.so

Why doesn't the DS5 docker container have the setup.py?
From this point I should work out how to add DS to the Jetson OS that was flashed with JetPack but did not get DeepStream via JetPack, probably due to the headless JetPack installation.
Installing DeepStream on the NX

@nx:~$ sudo dpkg -i deepstream-5.0_5.0.0-1_arm64.deb
Selecting previously unselected package deepstream-5.0.
(Reading database ... 248093 files and directories currently installed.)
Preparing to unpack deepstream-5.0_5.0.0-1_arm64.deb ...
Unpacking deepstream-5.0 (5.0.0-1) ...
I just noticed that there is no setup.py in the system-wide DS5 installation either:
@nx:/opt/nvidia/deepstream/deepstream/lib$ ls
gst-plugins                   libnvds_infer.so
libiothub_client.so           libnvds_inferutils.so
libiothub_client.so.1         libnvds_kafka_proto.so
libnvbufsurface.so            libnvds_logger.so
libnvbufsurftransform.so      libnvds_meta.so
libnvds_amqp_proto.so         libnvds_mot_iou.so
libnvds_azure_edge_proto.so   libnvds_mot_klt.so
libnvds_azure_proto.so        libnvds_msgconv.so
libnvds_batch_jpegenc.so      libnvds_msgconv.so.1.0.0
libnvds_csvparser.so          libnvds_nvdcf.so
libnvds_dewarper.so           libnvds_nvtxhelper.so
libnvds_dsanalytics.so        libnvds_opticalflow_jetson.so
libnvdsgst_helper.so          libnvds_osd.so
libnvdsgst_meta.so            libnvds_utils.so
libnvdsgst_smartrecord.so     libtrtserver.so
libnvds_infercustomparser.so  libvpi.so.0.0.2.1
libnvds_infer_server.so       tensorflow

@mchi, where to get the setup.py file for finishing step 3?
Upd: got it;
proceeding with step 3 as follows:

cd /opt/nvidia/deepstream/deepstream/sources
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps

but the readme as of now suggests running the apps; it won't add the missing setup.py mentioned before.
How do I get from here to the step below?

python3 setup.py

Installing python-gi & gst-python...
installed; still no setup.py in the lib folder.

This cannot be done,
neither in docker nor in the system-wide installation of DS.

Ok, I guess it's because my DS 5.0 was installed from the tar package; for your DS installed via deb or docker image, you can skip it.

pyds seems installable with
pip3 install pyds
Without it, the app throws an error;
once installed, it still throws errors:

/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser$  LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
2020-09-15 11:20:05.494474: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Opening in BLOCKING MODE 
I0915 15:20:06.213483 10785 server.cc:120] Initializing Triton Inference Server
I0915 15:20:06.234297 10785 server_status.cc:55] New status tracking for model 'ssd_inception_v2_coco_2018_01_28'
I0915 15:20:06.235407 10785 model_repository_manager.cc:680] loading: ssd_inception_v2_coco_2018_01_28:1
I0915 15:20:06.236562 10785 base_backend.cc:176] Creating instance ssd_inception_v2_coco_2018_01_28_0_0_gpu0 on GPU 0 (7.2) using model.graphdef
2020-09-15 11:20:06.316502: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-15 11:20:06.317483: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xd430400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-15 11:20:06.317804: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-15 11:20:06.318521: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-15 11:20:06.319076: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:06.319479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-15 11:20:06.320127: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-15 11:20:06.320494: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-15 11:20:06.339104: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-15 11:20:06.377522: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-15 11:20:06.396102: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-15 11:20:06.412596: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-15 11:20:06.412814: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-15 11:20:06.413005: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:06.413214: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:06.413309: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-09-15 11:20:15.456085: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-15 11:20:15.456194: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-09-15 11:20:15.456314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-09-15 11:20:15.456611: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:15.457781: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:15.458015: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-15 11:20:15.458147: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3108 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-09-15 11:20:15.462824: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7eac76d6d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-15 11:20:15.462934: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
I0915 15:20:17.178477 10785 model_repository_manager.cc:837] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
2020-09-15 11:20:27.674362: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-15 11:20:44.404016: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Traceback (most recent call last):
  File "deepstream_ssd_parser.py", line 236, in pgie_src_pad_buffer_probe
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
AttributeError: module 'pyds' has no attribute 'gst_buffer_get_nvds_batch_meta'
(the same traceback repeats for every buffer until end of stream)
End-of-stream
I0915 15:24:30.328406 10785 model_repository_manager.cc:708] unloading: ssd_inception_v2_coco_2018_01_28:1
I0915 15:24:31.386002 10785 model_repository_manager.cc:816] successfully unloaded 'ssd_inception_v2_coco_2018_01_28' version 1
I0915 15:24:31.415066 10785 server.cc:179] Waiting for in-flight inferences to complete.
I0915 15:24:31.415574 10785 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
Segmentation fault (core dumped)
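
One way to check which pyds is actually being imported (a debugging suggestion on my part, not from the thread; the pyds on pip may well be an unrelated package, while the DeepStream bindings ship as pyds.so with the SDK):

python3 -c "import pyds; print(pyds.__file__)"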

Seems like a pyds version mismatch?
Maybe it is possible to run the same without python,
by using deepstream-infer-tensor-meta-test?
At which of the executed steps do I provide the file as input?
https://storage.googleapis.com/gaze-dev/model-555139022817591296_tf-saved-model_2020-09-08T00_12_38.738Z_saved_model.pb
Could you also expand on which folder the statement below refers to, please?

 Run the docker with this Python Bindings directory mapped

source: README from /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser
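
As for where a custom .pb would plug in: with Gst-nvinferserver the model lives in the Triton model repository populated by prepare_ds_trtis_model_repo.sh. A sketch of the standard layout (the model name and values are placeholders, not from this thread):

samples/trtis_model_repo/
└── my_model/
    ├── config.pbtxt        # platform: "tensorflow_graphdef", input/output tensor specs
    └── 1/                  # version directory
        └── model.graphdef  # the .pb renamed per Triton convention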

For the DS/JP versions below, please use the DS GA docker image instead of the DP docker image:

• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.4

In the tests above, both the docker & the system-wide versions were tried.

However, thank you for following up!
I shall try with the GA docker,
but which py bindings folder do I mount? Does that mean pyds?

For x86_64 and Jetson Docker:
  1. Use the provided docker container and follow directions for
     Triton Inference Server in the SDK README --
     be sure to prepare the detector models.
  2. Run the docker with this Python Bindings directory mapped
  3. Install required Python packages inside the container:
     $ apt install python3-gi python3-dev python3-gst-1.0 python3-numpy -y

What is the python bindings folder?
For the system-wide setup, should I use the tar.gz DeepStream in order to get the py bindings?

For the dockerized attempt:
I. Which pybind folder do I mount into the latest container? The pyds pip installation folder? The pybind11-dev folder?
For the non-dockerized attempt:
II. Shall I reinstall DeepStream on the system-wide Jetson installation in order to get setup.py for python DS from the tar?
For the non-python attempt:
III. For docker/non-docker, is there a chance to do the same without python, using ./deepstream-infer-tensor-meta-app?
IV. In any of the scenarios I-III above, where do I specify the input pb file for processing?
Thank you very much.
P.S. It seems I might not have had the pybind11-dev package in the system-wide NX environment. Could that be why there was no setup.py in the /lib folder?
Update:
Yes, I got it;
in the GA container I got the setup.py file.

DS version aligned to GA both in Docker & system-wide.
pyds was also reinstalled with the use of python3 setup.py install.
System-wide execution shows:

@nx:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser$ LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
2020-09-16 04:33:04.103328: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Opening in BLOCKING MODE 
I0916 08:33:05.176450 18822 server.cc:120] Initializing Triton Inference Server
I0916 08:33:05.185105 18822 server_status.cc:55] New status tracking for model 'ssd_inception_v2_coco_2018_01_28'
I0916 08:33:05.185645 18822 model_repository_manager.cc:680] loading: ssd_inception_v2_coco_2018_01_28:1
I0916 08:33:05.186497 18822 base_backend.cc:176] Creating instance ssd_inception_v2_coco_2018_01_28_0_0_gpu0 on GPU 0 (7.2) using model.graphdef
2020-09-16 04:33:05.255986: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-16 04:33:05.256928: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3bde9850 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-16 04:33:05.257199: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-16 04:33:05.257783: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-16 04:33:05.258277: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.258738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-16 04:33:05.259377: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-16 04:33:05.259713: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-16 04:33:05.317918: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-16 04:33:05.407164: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-16 04:33:05.512466: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-16 04:33:05.562609: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-16 04:33:05.563453: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-16 04:33:05.563990: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.564500: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.564781: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-09-16 04:33:14.622199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-16 04:33:14.622324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-09-16 04:33:14.622381: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-09-16 04:33:14.622609: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.622957: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.623215: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.623403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3108 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-09-16 04:33:14.628323: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ebc07c7d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-16 04:33:14.628471: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
I0916 08:33:16.293165 18822 model_repository_manager.cc:837] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
2020-09-16 04:33:25.287758: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-16 04:33:35.965736: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Frame Number=0 Number of Objects=5 Vehicle_count=2 Person_count=2
Frame Number=1 Number of Objects=5 Vehicle_count=2 Person_count=2
End-of-stream
I0916 08:37:59.947849 18822 model_repository_manager.cc:708] unloading: ssd_inception_v2_coco_2018_01_28:1
I0916 08:38:01.067126 18822 model_repository_manager.cc:816] successfully unloaded 'ssd_inception_v2_coco_2018_01_28' version 1
I0916 08:38:01.069230 18822 server.cc:179] Waiting for in-flight inferences to complete.
I0916 08:38:01.069668 18822 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests

Thank you for the update.

 Run the docker with Python Bindings mapped using the following option:
   -v <path to this python bindings directory>:/opt/nvidia/deepstream/deepstream-5.0/sources/python

From system wide DS5GA

/usr/bin/deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 06:22:07.740562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...

Using winsys: x11 
Opening in BLOCKING MODE 
ERROR: failed to read path :inferserver/dstensor_sgie3_config.txt
0:00:00.748249328 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver_impl.cpp:387:start:<secondary3-nvinference-engine> error: Configuration file read failed
0:00:00.748309459 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver_impl.cpp:387:start:<secondary3-nvinference-engine> error: Config file path: inferserver/dstensor_sgie3_config.txt
0:00:00.748399705 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<secondary3-nvinference-engine> error: gstnvinferserver_impl start failed
Running...
ERROR from element secondary3-nvinference-engine: Configuration file read failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(387): start (): /GstPipeline:dstensor-pipeline/GstNvInferServer:secondary3-nvinference-engine:
Config file path: inferserver/dstensor_sgie3_config.txt
Returned, stopping playback
Deleting pipeline
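
The "failed to read path :inferserver/dstensor_sgie3_config.txt" error is consistent with the app using config paths relative to its working directory, so it apparently needs to be launched from the sample's source directory (an inference from the error output, not a documented requirement):

cd /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test
./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264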

from app built from sources

/apps/sample_apps/deepstream-infer-tensor-meta-test$ ./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 06:25:11.619103: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...

Using winsys: x11 
Opening in BLOCKING MODE 
I0916 10:25:11.804289 18999 server.cc:120] Initializing Triton Inference Server
I0916 10:25:11.822408 18999 server_status.cc:55] New status tracking for model 'Secondary_VehicleTypes'
E0916 10:25:11.822566 18999 model_repository_manager.cc:1139] failed to load model 'Secondary_VehicleTypes': at least one version must be available under the version policy of model 'Secondary_VehicleTypes'
ERROR: TRTIS: failed to load model Secondary_VehicleTypes, trtis_err_str:INTERNAL, err_msg:failed to load 'Secondary_VehicleTypes', no version is available
ERROR: failed to load model: Secondary_VehicleTypes, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: failed to initialize backend while ensuring model:Secondary_VehicleTypes ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:00.765766985 18999   0x5591e4fef0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<secondary3-nvinference-engine> nvinferserver[UID 4]: Error in createNNBackend() <infer_trtis_context.cpp:223> [UID = 4]: failed to initialize trtis backend for model:Secondary_VehicleTypes, nvinfer error:NVDSINFER_TRTIS_ERROR
I0916 10:25:11.822948 18999 server.cc:179] Waiting for in-flight inferences to complete.
I0916 10:25:11.822981 18999 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
0:00:00.765945268 18999   0x5591e4fef0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<secondary3-nvinference-engine> nvinferserver[UID 4]: Error in initialize() <infer_base_context.cpp:78> [UID = 4]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:00.765973621 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<secondary3-nvinference-engine> error: Failed to initialize InferTrtIsContext
0:00:00.765991927 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<secondary3-nvinference-engine> error: Config file path: inferserver/dstensor_sgie3_config.txt
0:00:00.766075964 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<secondary3-nvinference-engine> error: gstnvinferserver_impl start failed
Running...
ERROR from element secondary3-nvinference-engine: Failed to initialize InferTrtIsContext
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(439): start (): /GstPipeline:dstensor-pipeline/GstNvInferServer:secondary3-nvinference-engine:
Config file path: inferserver/dstensor_sgie3_config.txt
Returned, stopping playback
Deleting pipeline
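
The "no version is available" error for Secondary_VehicleTypes typically means that model's Triton repository folder lacks a numbered version subdirectory; re-running the repo preparation script (step 2 from the earlier instructions) is the likely fix:

cd /opt/nvidia/deepstream/deepstream-5.0/samples/
sudo ./prepare_ds_trtis_model_repo.sh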

I can finally run this, on the system-wide DS GA:

./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 07:40:30.459983: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...


Thank you very much. It also plays & displays the video,
but how do I process a custom .pb?
The Python implementation seems to process the video but won't show any output as video, just text output.
The only difference between the python implementation

LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py  /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

vs the non-python one

 ./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264

is that the latter will draw video output, but the former won't.
Does the Python version not support the video window pop-up, or is it a bug because I am using a USB-C display? (Judging by the elements it creates - Encoder, Container, Sink - the Python sample appears to write an encoded file rather than render on screen.)