Implementing DeepStream / TRT integration following Intel's scenario

The DeepStream version is aligned to GA both in the Docker container and system-wide.
The pyds bindings were also reinstalled using python3 setup.py install.
System-wide execution shows:

@nx:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-ssd-parser$ LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1 python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
2020-09-16 04:33:04.103328: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Opening in BLOCKING MODE 
I0916 08:33:05.176450 18822 server.cc:120] Initializing Triton Inference Server
I0916 08:33:05.185105 18822 server_status.cc:55] New status tracking for model 'ssd_inception_v2_coco_2018_01_28'
I0916 08:33:05.185645 18822 model_repository_manager.cc:680] loading: ssd_inception_v2_coco_2018_01_28:1
I0916 08:33:05.186497 18822 base_backend.cc:176] Creating instance ssd_inception_v2_coco_2018_01_28_0_0_gpu0 on GPU 0 (7.2) using model.graphdef
2020-09-16 04:33:05.255986: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-16 04:33:05.256928: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3bde9850 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-16 04:33:05.257199: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-16 04:33:05.257783: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-16 04:33:05.258277: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.258738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.109
pciBusID: 0000:00:00.0
2020-09-16 04:33:05.259377: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-09-16 04:33:05.259713: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-16 04:33:05.317918: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-16 04:33:05.407164: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-16 04:33:05.512466: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-16 04:33:05.562609: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-16 04:33:05.563453: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-16 04:33:05.563990: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.564500: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:05.564781: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
2020-09-16 04:33:14.622199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-16 04:33:14.622324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
2020-09-16 04:33:14.622381: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
2020-09-16 04:33:14.622609: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.622957: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.623215: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] ARM64 does not support NUMA - returning NUMA node zero
2020-09-16 04:33:14.623403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3108 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2020-09-16 04:33:14.628323: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ebc07c7d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-16 04:33:14.628471: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Xavier, Compute Capability 7.2
I0916 08:33:16.293165 18822 model_repository_manager.cc:837] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
2020-09-16 04:33:25.287758: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-09-16 04:33:35.965736: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Frame Number=0 Number of Objects=5 Vehicle_count=2 Person_count=2
Frame Number=1 Number of Objects=5 Vehicle_count=2 Person_count=2
End-of-stream
I0916 08:37:59.947849 18822 model_repository_manager.cc:708] unloading: ssd_inception_v2_coco_2018_01_28:1
I0916 08:38:01.067126 18822 model_repository_manager.cc:816] successfully unloaded 'ssd_inception_v2_coco_2018_01_28' version 1
I0916 08:38:01.069230 18822 server.cc:179] Waiting for in-flight inferences to complete.
I0916 08:38:01.069668 18822 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests

Thank you for the update.

Run the Docker container with the Python bindings mapped using the following option:
   -v <path to this python bindings directory>:/opt/nvidia/deepstream/deepstream-5.0/sources/python

From the system-wide DS 5.0 GA installation:

/usr/bin/deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 06:22:07.740562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...

Using winsys: x11 
Opening in BLOCKING MODE 
ERROR: failed to read path :inferserver/dstensor_sgie3_config.txt
0:00:00.748249328 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver_impl.cpp:387:start:<secondary3-nvinference-engine> error: Configuration file read failed
0:00:00.748309459 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver_impl.cpp:387:start:<secondary3-nvinference-engine> error: Config file path: inferserver/dstensor_sgie3_config.txt
0:00:00.748399705 18860     0x3930d8f0 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<secondary3-nvinference-engine> error: gstnvinferserver_impl start failed
Running...
ERROR from element secondary3-nvinference-engine: Configuration file read failed
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(387): start (): /GstPipeline:dstensor-pipeline/GstNvInferServer:secondary3-nvinference-engine:
Config file path: inferserver/dstensor_sgie3_config.txt
Returned, stopping playback
Deleting pipeline
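The failure above looks like a working-directory issue: the config path inferserver/dstensor_sgie3_config.txt is relative, and gst-nvinferserver appears to resolve it against the current working directory, not the binary's location. A minimal sketch of that resolution (the helper name is mine, not a DeepStream API):

```python
import os

def resolve_config(config_path, cwd):
    """Sketch of how a relative config path is resolved against the
    current working directory (not actual DeepStream code)."""
    if os.path.isabs(config_path):
        return config_path          # absolute paths are used as-is
    return os.path.join(cwd, config_path)  # relative paths depend on cwd
```

So launching /usr/bin/deepstream-infer-tensor-meta-app from a directory that does not contain an inferserver/ subfolder would produce exactly this "Configuration file read failed" error; running it from the sample app's source directory (which has inferserver/dstensor_sgie3_config.txt) should avoid it.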

From the app built from sources:

/apps/sample_apps/deepstream-infer-tensor-meta-test$ ./deepstream-infer-tensor-meta-app -t inferserver /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
With tracker
2020-09-16 06:25:11.619103: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
Now playing...

Using winsys: x11 
Opening in BLOCKING MODE 
I0916 10:25:11.804289 18999 server.cc:120] Initializing Triton Inference Server
I0916 10:25:11.822408 18999 server_status.cc:55] New status tracking for model 'Secondary_VehicleTypes'
E0916 10:25:11.822566 18999 model_repository_manager.cc:1139] failed to load model 'Secondary_VehicleTypes': at least one version must be available under the version policy of model 'Secondary_VehicleTypes'
ERROR: TRTIS: failed to load model Secondary_VehicleTypes, trtis_err_str:INTERNAL, err_msg:failed to load 'Secondary_VehicleTypes', no version is available
ERROR: failed to load model: Secondary_VehicleTypes, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: failed to initialize backend while ensuring model:Secondary_VehicleTypes ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:00.765766985 18999   0x5591e4fef0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<secondary3-nvinference-engine> nvinferserver[UID 4]: Error in createNNBackend() <infer_trtis_context.cpp:223> [UID = 4]: failed to initialize trtis backend for model:Secondary_VehicleTypes, nvinfer error:NVDSINFER_TRTIS_ERROR
I0916 10:25:11.822948 18999 server.cc:179] Waiting for in-flight inferences to complete.
I0916 10:25:11.822981 18999 server.cc:194] Timeout 30: Found 0 live models and 0 in-flight requests
0:00:00.765945268 18999   0x5591e4fef0 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<secondary3-nvinference-engine> nvinferserver[UID 4]: Error in initialize() <infer_base_context.cpp:78> [UID = 4]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:00.765973621 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<secondary3-nvinference-engine> error: Failed to initialize InferTrtIsContext
0:00:00.765991927 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver_impl.cpp:439:start:<secondary3-nvinference-engine> error: Config file path: inferserver/dstensor_sgie3_config.txt
0:00:00.766075964 18999   0x5591e4fef0 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<secondary3-nvinference-engine> error: gstnvinferserver_impl start failed
Running...
ERROR from element secondary3-nvinference-engine: Failed to initialize InferTrtIsContext
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinferserver/gstnvinferserver_impl.cpp(439): start (): /GstPipeline:dstensor-pipeline/GstNvInferServer:secondary3-nvinference-engine:
Config file path: inferserver/dstensor_sgie3_config.txt
Returned, stopping playback
Deleting pipeline