Processed video without bounding boxes for Gst-nvinferserver plug-in with DeepStream 6.4 pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): question
Hi, I tried to use the Gst-nvinferserver plug-in in a DeepStream pipeline with the resnet18_trafficcamnet model provided in the DeepStream 6.4 tag. I ran the pipeline inside the Docker container. The video is processed, but I don't get any detection bounding boxes in the processed video. I got the following output in the terminal when I ran the pipeline:

cc -o pipeline pipeline.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream-6.4/lib/ -lnvdsgst_helper -lm -lnvdsgst_meta -L/usr/local/cuda-12.2/lib64/ -lcudart -lcuda -Wl,-rpath,/opt/nvidia/deepstream/deepstream-6.4/lib/ -lgio-2.0
root@ip-192-168-2-157:/opt/nvidia/deepstream/deepstream-6.4/fast-api# ./pipeline

(pipeline:807): GStreamer-WARNING **: 10:25:17.432: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.

(pipeline:807): GStreamer-WARNING **: 10:25:17.432: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory

(pipeline:807): GLib-GObject-WARNING **: 10:25:17.529: g_object_set_is_valid_property: object class 'GstNvInferServer' has no property named 'gpu-id'
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
I0207 10:25:18.037424 807 libtorch.cc:2507] TRITONBACKEND_Initialize: pytorch
I0207 10:25:18.037458 807 libtorch.cc:2517] Triton TRITONBACKEND API version: 1.15
I0207 10:25:18.037468 807 libtorch.cc:2523] 'pytorch' TRITONBACKEND API version: 1.15
I0207 10:25:18.158514 807 pinned_memory_manager.cc:241] Pinned memory pool is created at '0x7f9184000000' with size 268435456
I0207 10:25:18.158826 807 cuda_memory_manager.cc:107] CUDA memory pool is created on device 0 with size 67108864
I0207 10:25:18.159525 807 server.cc:604] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0207 10:25:18.159612 807 server.cc:631] 
+---------+---------------------------------------------------------+--------+
| Backend | Path                                                    | Config |
+---------+---------------------------------------------------------+--------+
| pytorch | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so | {}     |
+---------+---------------------------------------------------------+--------+

I0207 10:25:18.159651 807 server.cc:674] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0207 10:25:18.206014 807 metrics.cc:810] Collecting metrics for GPU 0: Tesla T4
I0207 10:25:18.206267 807 metrics.cc:703] Collecting CPU metrics
I0207 10:25:18.206413 807 tritonserver.cc:2435] 
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                  |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                 |
| server_version                   | 2.37.0                                                                                                 |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_con |
|                                  | figuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logg |
|                                  | ing                                                                                                    |
| model_repository_path[0]         | /opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo                                        |
| model_control_mode               | MODE_EXPLICIT                                                                                          |
| strict_model_config              | 0                                                                                                      |
| rate_limit                       | OFF                                                                                                    |
| pinned_memory_pool_byte_size     | 268435456                                                                                              |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                               |
| min_supported_compute_capability | 6.0                                                                                                    |
| strict_readiness                 | 1                                                                                                      |
| exit_timeout                     | 30                                                                                                     |
| cache_enabled                    | 0                                                                                                      |
+----------------------------------+--------------------------------------------------------------------------------------------------------+

I0207 10:25:18.207718 807 model_lifecycle.cc:462] loading: Primary_Detector:1
I0207 10:25:18.208495 807 tensorrt.cc:65] TRITONBACKEND_Initialize: tensorrt
I0207 10:25:18.208516 807 tensorrt.cc:75] Triton TRITONBACKEND API version: 1.15
I0207 10:25:18.208526 807 tensorrt.cc:81] 'tensorrt' TRITONBACKEND API version: 1.15
I0207 10:25:18.208536 807 tensorrt.cc:105] backend configuration:
{"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}}
I0207 10:25:18.208894 807 tensorrt.cc:222] TRITONBACKEND_ModelInitialize: Primary_Detector (version 1)
I0207 10:25:18.217228 807 logging.cc:46] Loaded engine size: 1 MiB
I0207 10:25:18.225064 807 logging.cc:46] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +3, now: CPU 0, GPU 3 (MiB)
I0207 10:25:18.227332 807 tensorrt.cc:288] TRITONBACKEND_ModelInstanceInitialize: Primary_Detector_0_0 (GPU device 0)
I0207 10:25:18.228184 807 logging.cc:46] Loaded engine size: 1 MiB
I0207 10:25:18.232071 807 logging.cc:46] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +3, now: CPU 0, GPU 3 (MiB)
I0207 10:25:18.234056 807 logging.cc:46] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +241, now: CPU 0, GPU 244 (MiB)
I0207 10:25:18.234588 807 instance_state.cc:188] Created instance Primary_Detector_0_0 on GPU 0 with stream priority 0 and optimization profile default[0];
I0207 10:25:18.234857 807 model_lifecycle.cc:819] successfully loaded 'Primary_Detector'
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:1 initialized model: Primary_Detector
Now playing: (null)
Deepstream Pipeline is Running...
New file created: file:///opt/nvidia/deepstream/deepstream-6.4/fast-api/tmp/video_630kb.mp4
Calling Start 0 
creating uridecodebin for [file:///opt/nvidia/deepstream/deepstream-6.4/fast-api/tmp/video_630kb.mp4]
decodebin child added source
decodebin child added decodebin0
STATE CHANGE ASYNC

decodebin child added qtdemux0
decodebin child added multiqueue0
decodebin child added h264parse0
decodebin child added capsfilter0
decodebin child added nvv4l2decoder0
decodebin new pad video/x-raw
Decodebin linked to pipeline
nvstreammux: Successfully handled EOS for source_id=0
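
For reference, my app builds the pipeline in C; the element chain implied by the log corresponds roughly to the gst-launch sketch below. The muxer dimensions, the nvinferserver config file name, and the encode-to-file tail are illustrative placeholders, not my exact settings.

# sketch of the element chain implied by the log above (placeholders noted in the text)
gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream-6.4/fast-api/tmp/video_630kb.mp4 ! mux.sink_0 \
    nvstreammux name=mux batch-size=1 width=1280 height=720 batched-push-timeout=40000 ! \
    nvinferserver config-file-path=config_triton_primary.txt ! \
    nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
    nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out.mp4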

And my nvinferserver plugin configuration is as follows:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "Primary_Detector"
      version: -1
      model_repo {
        root: "../../samples/trtis_model_repo"
        log_level: 2
        tf_gpu_memory_fraction: 0.4
        tf_disable_soft_placement: 0
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NONE
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "./labels.txt"
    detection {
      num_detected_classes: 1
      nms {
        confidence_threshold: 0.3
        iou_threshold: 0.5
        topk : 20
      }
    }
  }

  extra {
    copy_input_to_host_buffers: false
  }

}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}

I need to know whether my configuration is wrong, or whether I need to add any configuration beyond this (and how) to get this pipeline working properly.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Are you able to run deepstream-test1 successfully?

You can use nvinferserver for inference by changing the value of pgie_type to NVDS_GIE_PLUGIN_INFER_SERVER.
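
A minimal sketch of that switch, following the structure of the DeepStream sample apps; create_pgie is a hypothetical helper for illustration, and NvDsGieType comes from nvds_yml_parser.h:

#include <gst/gst.h>
#include "nvds_yml_parser.h"   /* defines NvDsGieType and NVDS_GIE_PLUGIN_INFER_SERVER */

/* Sketch only: how the sample apps pick the primary GIE element.
 * create_pgie() is a hypothetical helper, not part of the SDK. */
static GstElement *
create_pgie (NvDsGieType pgie_type)
{
  if (pgie_type == NVDS_GIE_PLUGIN_INFER_SERVER) {
    /* Triton-backed inference, configured with an nvinferserver pbtxt file */
    return gst_element_factory_make ("nvinferserver", "primary-nvinference-engine");
  }
  /* TensorRT-native inference, configured with an nvinfer config file */
  return gst_element_factory_make ("nvinfer", "primary-nvinference-engine");
}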

This sample demonstrates usage of the models and configuration files.
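
For the nvinferserver config itself, note that the Triton sample configs use the newer backend.triton spelling; the warning in your log ("backend.trt_is is deprecated. updated it to backend.triton") refers to this rename. Below is a sketch of the backend and postprocess blocks, not a verified drop-in: the repo root is taken from the Triton startup table in your log, and the class count assumes the four-class trafficcamnet labels.

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {    # newer spelling; backend.trt_is is deprecated
      model_name: "Primary_Detector"
      version: -1
      model_repo {
        # absolute path from the Triton startup table in the log; adjust to your layout
        root: "/opt/nvidia/deepstream/deepstream-6.4/samples/triton_model_repo"
        log_level: 2
      }
    }
  }
  postprocess {
    labelfile_path: "./labels.txt"
    detection {
      # trafficcamnet ships four classes (car, bicycle, person, road_sign)
      num_detected_classes: 4
      nms {
        confidence_threshold: 0.3
        iou_threshold: 0.5
        topk: 20
      }
    }
  }
}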

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.