I am creating a pgie using one of your example files, and I am getting the following error:
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:00.298985586 4129 0x3a502a0 WARN nvinferserver gstnvinferserver_impl.cpp:284:validatePluginConfig:<primary-inference> warning: Configuration file batch-size reset to: 16
0:00:00.299038736 4129 0x3a502a0 WARN nvinferserver gstnvinferserver_impl.cpp:290:validatePluginConfig:<primary-inference> warning: Configuration file unique-id reset to: 1
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
I1229 16:40:40.690139 4129 metrics.cc:290] Collecting metrics for GPU 0: Tesla T4
I1229 16:40:40.940519 4129 libtorch.cc:1029] TRITONBACKEND_Initialize: pytorch
I1229 16:40:40.940553 4129 libtorch.cc:1039] Triton TRITONBACKEND API version: 1.4
I1229 16:40:40.940560 4129 libtorch.cc:1045] 'pytorch' TRITONBACKEND API version: 1.4
2021-12-29 16:40:41.047776: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
I1229 16:40:41.090509 4129 tensorflow.cc:2169] TRITONBACKEND_Initialize: tensorflow
I1229 16:40:41.090538 4129 tensorflow.cc:2179] Triton TRITONBACKEND API version: 1.4
I1229 16:40:41.090547 4129 tensorflow.cc:2185] 'tensorflow' TRITONBACKEND API version: 1.4
I1229 16:40:41.090554 4129 tensorflow.cc:2209] backend configuration:
{"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.400000"}}
I1229 16:40:41.092294 4129 onnxruntime.cc:1970] TRITONBACKEND_Initialize: onnxruntime
I1229 16:40:41.092315 4129 onnxruntime.cc:1980] Triton TRITONBACKEND API version: 1.4
I1229 16:40:41.092325 4129 onnxruntime.cc:1986] 'onnxruntime' TRITONBACKEND API version: 1.4
I1229 16:40:41.111152 4129 openvino.cc:1193] TRITONBACKEND_Initialize: openvino
I1229 16:40:41.111171 4129 openvino.cc:1203] Triton TRITONBACKEND API version: 1.4
I1229 16:40:41.111178 4129 openvino.cc:1209] 'openvino' TRITONBACKEND API version: 1.4
I1229 16:40:41.220996 4129 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7fde14000000' with size 268435456
I1229 16:40:41.221340 4129 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I1229 16:40:41.221992 4129 server.cc:504]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I1229 16:40:41.222053 4129 server.cc:543]
+-------------+-------------------------------+-------------------------------+
| Backend | Path | Config |
+-------------+-------------------------------+-------------------------------+
| tensorrt | <built-in> | {} |
| pytorch | /opt/tritonserver/backends/py | {} |
| | torch/libtriton_pytorch.so | |
| tensorflow | /opt/tritonserver/backends/te | {"cmdline":{"allow-soft-place |
| | nsorflow1/libtriton_tensorflo | ment":"true","gpu-memory-frac |
| | w1.so | tion":"0.400000"}} |
| onnxruntime | /opt/tritonserver/backends/on | {} |
| | nxruntime/libtriton_onnxrunti | |
| | me.so | |
| openvino | /opt/tritonserver/backends/op | {} |
| | envino/libtriton_openvino.so | |
+-------------+-------------------------------+-------------------------------+
I1229 16:40:41.222083 4129 server.cc:586]
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+
I1229 16:40:41.222157 4129 tritonserver.cc:1718]
+----------------------------------+------------------------------------------+
| Option | Value |
+----------------------------------+------------------------------------------+
| server_id | triton |
| server_version | 2.13.0 |
| server_extensions | classification sequence model_repository |
| | model_repository(unload_dependents) sch |
| | edule_policy model_configuration system_ |
| | shared_memory cuda_shared_memory binary_ |
| | tensor_data statistics |
| model_repository_path[0] | /opt/nvidia/deepstream/deepstream-6.0/sa |
| | mples/triton_model_repo |
| model_control_mode | MODE_EXPLICIT |
| strict_model_config | 0 |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
+----------------------------------+------------------------------------------+
W1229 16:40:41.223100 4129 autofill.cc:237] Autofiller failed to retrieve model. Error Details: Internal: unable to autofill for 'ssd_inception_v2_coco_2018_01_28' due to no version directories
W1229 16:40:41.223121 4129 autofill.cc:243] Proceeding with simple config for now
E1229 16:40:41.223469 4129 model_repository_manager.cc:1424] failed to load model 'ssd_inception_v2_coco_2018_01_28': at least one version must be available under the version policy of model 'ssd_inception_v2_coco_2018_01_28'
ERROR: infer_trtis_server.cpp:1053 Triton: failed to load model ssd_inception_v2_coco_2018_01_28, triton_err_str:Invalid argument, err_msg:load failed for model 'ssd_inception_v2_coco_2018_01_28': at least one version must be available under the version policy of model 'ssd_inception_v2_coco_2018_01_28'
ERROR: infer_trtis_backend.cpp:45 failed to load model: ssd_inception_v2_coco_2018_01_28, nvinfer error:NVDSINFER_TRITON_ERROR
ERROR: infer_trtis_backend.cpp:184 failed to initialize backend while ensuring model:ssd_inception_v2_coco_2018_01_28 ready, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:00.875159940 4129 0x3a502a0 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 1]: Error in createNNBackend() <infer_trtis_context.cpp:248> [UID = 1]: failed to initialize triton backend for model:ssd_inception_v2_coco_2018_01_28, nvinfer error:NVDSINFER_TRITON_ERROR
I1229 16:40:41.223593 4129 server.cc:234] Waiting for in-flight requests to complete.
I1229 16:40:41.223602 4129 server.cc:249] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
0:00:00.875259827 4129 0x3a502a0 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:81> [UID = 1]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:00.875276063 4129 0x3a502a0 WARN nvinferserver gstnvinferserver_impl.cpp:507:start:<primary-inference> error: Failed to initialize InferTrtIsContext
0:00:00.875285488 4129 0x3a502a0 WARN nvinferserver gstnvinferserver_impl.cpp:507:start:<primary-inference> error: Config file path: /home/ubuntu/pycharm/projects/components/dstest1_pgie_inferserver_config.txt
0:00:00.875745113 4129 0x3a502a0 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary-inference> error: gstnvinferserver_impl start failed
TOT FPS 0.0 AVG FPS 0 STREAMS UP 0
[NvMultiObjectTracker] De-initialized
Warning: gst-library-error-quark: Configuration file batch-size reset to: 16 (5): gstnvinferserver_impl.cpp(284): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Warning: gst-library-error-quark: Configuration file unique-id reset to: 1 (5): gstnvinferserver_impl.cpp(290): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(507): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
Config file path: /home/ubuntu/pycharm/projects//dstest1_pgie_inferserver_config.txt
Process finished with exit code 0
The same code works if I use a custom model that I created.
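If I understand the "no version directories" message correctly, Triton expects a numeric version subdirectory under each model folder (the standard Triton model-repository convention), and the autofiller finds none for this model. This is the layout I believe it wants, and the check I am using against the path from the log (the model.graphdef filename is my assumption for a TF frozen graph):

# Expected layout (my understanding of the Triton convention):
#   triton_model_repo/
#     ssd_inception_v2_coco_2018_01_28/
#       config.pbtxt
#       labels.txt
#       1/                  <- the numeric version directory the error is about
#         model.graphdef    <- assumption: frozen TensorFlow graph
ls -R /opt/nvidia/deepstream/deepstream-6.0/samples/triton_model_repo/ssd_inception_v2_coco_2018_01_28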
To initialize the pgie:
pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
pgie.set_property("config-file-path",
                  "/home/ubuntu/pycharm/projects/dstest1_pgie_inferserver_config.txt")
The config file for the pgie is a copy-paste of dstest1_pgie_inferserver_config.txt from the NVIDIA-AI-IOT/deepstream_python_apps repository on GitHub (master branch); I only replaced the relative paths with absolute paths.
For completeness:
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################
infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    trt_is {
      model_name: "ssd_inception_v2_coco_2018_01_28"
      version: -1
      model_repo {
        root: "/opt/nvidia/deepstream/deepstream-6.0/samples/trtis_model_repo"
        log_level: 2
        tf_gpu_memory_fraction: 0.4
        tf_disable_soft_placement: 0
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NONE
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    labelfile_path: "/opt/nvidia/deepstream/deepstream-6.0/samples/trtis_model_repo/ssd_inception_v2_coco_2018_01_28/labels.txt"
    detection {
      num_detected_classes: 91
      custom_parse_bbox_func: "NvDsInferParseCustomTfSSD"
      nms {
        confidence_threshold: 0.5
        iou_threshold: 0.3
        topk: 20
      }
    }
  }
  extra {
    copy_input_to_host_buffers: false
  }
  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}
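One more thing I noticed in the log: infer_proto_utils.cpp warns that backend.trt_is is deprecated and is updated to backend.triton, so I assume the current spelling of that block would be the following (a sketch based only on that warning; all values unchanged from my config):

backend {
  triton {
    model_name: "ssd_inception_v2_coco_2018_01_28"
    version: -1
    model_repo {
      root: "/opt/nvidia/deepstream/deepstream-6.0/samples/trtis_model_repo"
      log_level: 2
      tf_gpu_memory_fraction: 0.4
      tf_disable_soft_placement: 0
    }
  }
}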
Dockerfile:
FROM nvcr.io/nvidia/deepstream:6.0-triton
ENV GIT_SSL_NO_VERIFY=1
RUN sh docker_python_setup.sh
RUN update-alternatives --set python3 /usr/bin/python3.8
RUN apt install --fix-broken -y
RUN apt -y install python3-gi python3-gst-1.0 python-gi-dev git python3 python3-pip cmake g++ build-essential \
libglib2.0-dev python3-dev python3.8-dev libglib2.0-dev-bin python-gi-dev libtool m4 autoconf automake
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps && \
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps && \
git submodule update --init
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/3rdparty/gst-python/ && \
./autogen.sh && \
make && \
make install
RUN pip3 install --upgrade pip
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps/bindings && \
mkdir build && \
cd build && \
cmake -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8 -DPIP_PLATFORM=linux_x86_64 -DDS_PATH=/opt/nvidia/deepstream/deepstream-6.0 .. && \
make && \
pip3 install pyds-1.1.0-py3-none-linux_x86_64.whl
RUN cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/deepstream_python_apps && \
mv apps/* ./
RUN pip3 install --upgrade pip
RUN pip3 install numpy opencv-python
# RTSP
RUN apt update && \
apt install -y python3-gi python3-dev python3-gst-1.0
RUN apt update && \
apt install -y libgstrtspserver-1.0-0 gstreamer1.0-rtsp && \
apt install -y libgirepository1.0-dev && \
apt-get install -y gobject-introspection gir1.2-gst-rtsp-server-1.0
# DEVELOPMENT TOOLS
RUN apt install -y ipython3 graphviz
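To rule out the bindings build as a factor, I sanity-check the built image like this (a sketch; <image-tag> is a placeholder for whatever the image is tagged):

# Verify the pyds wheel installed and imports cleanly inside the image
docker run --rm --gpus all <image-tag> python3 -c "import pyds; print(pyds.__file__)"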