DeepStream with Triton is stuck and not outputting anything

• Hardware Platform (Jetson / GPU) → NVIDIA GeForce GTX 1650
• DeepStream Version → 6.1-triton
• JetPack Version (valid for Jetson only) → N/A
• TensorRT Version → N/A
• NVIDIA GPU Driver Version (valid for GPU only) → NVIDIA driver 510.85.02

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.85.02    Driver Version: 510.85.02    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0  On |                  N/A |
| N/A   53C    P3    15W /  N/A |    895MiB /  4096MiB |     15%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1672      G   /usr/lib/xorg/Xorg                 69MiB |
|    0   N/A  N/A    352982      G   /usr/lib/xorg/Xorg                467MiB |
|    0   N/A  N/A    353212      G   /usr/bin/gnome-shell               58MiB |
|    0   N/A  N/A    353851      G   ...AAAAAAAAA= --shared-files       38MiB |
|    0   N/A  N/A    358772      G   ...878537524551413047,131072      119MiB |
|    0   N/A  N/A    369298      G   ...RendererForSitePerProcess       91MiB |
|    0   N/A  N/A    539011      G   ...AAAAAAAAA= --shared-files       16MiB |
|    0   N/A  N/A    571607      G   ...649167501785457681,131072       19MiB |
+-----------------------------------------------------------------------------+

• Issue Type (questions, new requirements, bugs) → Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description.)

Hello,
I’ve been trying to deploy YOLOv5 with the ONNX Runtime backend using DeepStream and Triton.

Here are the inputs you may need:

Image used:

nvcr.io/nvidia/deepstream     6.1-triton                 28776904eac1 

Main file is: deepstream_yolo.py
deepstream_yolo.py (15.5 KB)

yolo_parser.py for post-processing
yolo_parser.py (11.4 KB)

nms.py
nms.py (3.7 KB)
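Since nms.py isn't reproduced inline, here is a generic sketch of the IoU-based non-maximum suppression such a file typically implements (the function names and threshold are illustrative, not the attached file's contents):

```python
# Minimal IoU-based non-maximum suppression. Boxes are [x1, y1, x2, y2].
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.45):
    # Greedily keep the highest-scoring box, drop overlapping
    # lower-scoring ones, and repeat until no candidates remain.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

For example, with three boxes where the first two overlap heavily, `nms` keeps indices 0 and 2.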

yolov5 model configuration
yolov5_nopostprocess.txt (811 Bytes)

Also the pbtxt file:
config.pbtxt (306 Bytes)
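Since the attached config.pbtxt isn't reproduced inline, a minimal Triton model configuration for an ONNX YOLOv5 model typically looks like the sketch below; the tensor names and dimensions here are assumptions and must match the exported model exactly:

```
name: "yolov5"
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [ 25200, 85 ]
  }
]
```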

And the labels file:
yolov5_labels.txt (629 Bytes)

After running the application:

python3 deepstream_yolo.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4 

This is the output I’m getting:

deepstream_yolo.py:281: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: https://wiki.gnome.org/PyGObject/Threading
  GObject.threads_init()
Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4 
Adding elements to Pipeline 

Linking elements in the Pipeline 

deepstream_yolo.py:397: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
  loop = GObject.MainLoop()
Starting pipeline 

WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
I0831 13:53:46.753136 1354 libtorch.cc:1309] TRITONBACKEND_Initialize: pytorch
I0831 13:53:46.753158 1354 libtorch.cc:1319] Triton TRITONBACKEND API version: 1.8
I0831 13:53:46.753163 1354 libtorch.cc:1325] 'pytorch' TRITONBACKEND API version: 1.8
2022-08-31 13:53:46.826130: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-08-31 13:53:46.852126: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
I0831 13:53:46.852177 1354 tensorflow.cc:2176] TRITONBACKEND_Initialize: tensorflow
I0831 13:53:46.852189 1354 tensorflow.cc:2186] Triton TRITONBACKEND API version: 1.8
I0831 13:53:46.852193 1354 tensorflow.cc:2192] 'tensorflow' TRITONBACKEND API version: 1.8
I0831 13:53:46.852197 1354 tensorflow.cc:2216] backend configuration:
{"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.800000"}}
I0831 13:53:46.870268 1354 onnxruntime.cc:2319] TRITONBACKEND_Initialize: onnxruntime
I0831 13:53:46.870281 1354 onnxruntime.cc:2329] Triton TRITONBACKEND API version: 1.8
I0831 13:53:46.870284 1354 onnxruntime.cc:2335] 'onnxruntime' TRITONBACKEND API version: 1.8
I0831 13:53:46.870286 1354 onnxruntime.cc:2365] backend configuration:
{}
I0831 13:53:46.898280 1354 openvino.cc:1207] TRITONBACKEND_Initialize: openvino
I0831 13:53:46.898296 1354 openvino.cc:1217] Triton TRITONBACKEND API version: 1.8
I0831 13:53:46.898300 1354 openvino.cc:1223] 'openvino' TRITONBACKEND API version: 1.8
I0831 13:53:46.955826 1354 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f0654000000' with size 268435456
I0831 13:53:46.955982 1354 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0831 13:53:46.956281 1354 server.cc:524] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0831 13:53:46.956319 1354 server.cc:551] 
+-------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------+
| Backend     | Path                                                                    | Config                                                                       |
+-------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------+
| pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so                 | {}                                                                           |
| tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so         | {"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.800000"}} |
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so         | {}                                                                           |
| openvino    | /opt/tritonserver/backends/openvino_2021_4/libtriton_openvino_2021_4.so | {}                                                                           |
+-------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------+

I0831 13:53:46.956333 1354 server.cc:594] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0831 13:53:46.983453 1354 metrics.cc:651] Collecting metrics for GPU 0: NVIDIA GeForce GTX 1650
I0831 13:53:46.983729 1354 tritonserver.cc:1962] 
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                                              |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                                             |
| server_version                   | 2.20.0                                                                                                                                             |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_ |
|                                  | memory binary_tensor_data statistics trace                                                                                                         |
| model_repository_path[0]         | /opt/nvidia/deepstream/deepstream-6.1/sources/project                                                                                              |
| model_control_mode               | MODE_EXPLICIT                                                                                                                                      |
| strict_model_config              | 0                                                                                                                                                  |
| rate_limit                       | OFF                                                                                                                                                |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                                          |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                                           |
| response_cache_byte_size         | 0                                                                                                                                                  |
| min_supported_compute_capability | 6.0                                                                                                                                                |
| strict_readiness                 | 1                                                                                                                                                  |
| exit_timeout                     | 30                                                                                                                                                 |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+

I0831 13:53:46.984552 1354 model_repository_manager.cc:997] loading: yolov5:1
I0831 13:53:47.084792 1354 onnxruntime.cc:2400] TRITONBACKEND_ModelInitialize: yolov5 (version 1)
I0831 13:53:47.085489 1354 onnxruntime.cc:614] skipping model configuration auto-complete for 'yolov5': inputs and outputs already specified
I0831 13:53:47.086393 1354 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: yolov5 (GPU device 0)
W0831 13:53:47.986790 1354 metrics.cc:427] Unable to get power limit for GPU 0. Status:Success, value:0.000000
I0831 13:53:48.817166 1354 model_repository_manager.cc:1152] successfully loaded 'yolov5' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:5 initialized model: yolov5
W0831 13:53:48.986964 1354 metrics.cc:427] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0831 13:53:49.988451 1354 metrics.cc:427] Unable to get power limit for GPU 0. Status:Success, value:0.000000
Killed

If left alone without killing it, the script can run for hours, and the output is always a 0-byte file.
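One note on the bare `Killed` line above: when it isn't from a manual kill, it usually means the kernel's OOM killer terminated the process, which is plausible on a 4 GB card with several desktop processes resident. One way to check on the host (the exact message wording varies by kernel version, and reading the log may require sudo):

```shell
# Search kernel messages for OOM-killer activity
# (run on the host, not inside the container).
dmesg --ctime | grep -iE 'out of memory|oom-killer|killed process'
```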

EDIT:
I also included the output of:

 nvidia-smi -q -d POWER

Here:

==============NVSMI LOG==============

Timestamp                                 : Wed Aug 31 16:24:15 2022
Driver Version                            : 510.85.02
CUDA Version                              : 11.6

Attached GPUs                             : 1
GPU 00000000:01:00.0
    Power Readings
        Power Management                  : N/A
        Power Draw                        : 6.55 W
        Power Limit                       : N/A
        Default Power Limit               : N/A
        Enforced Power Limit              : N/A
        Min Power Limit                   : N/A
        Max Power Limit                   : N/A
    Power Samples
        Duration                          : Not Found
        Number of Samples                 : Not Found
        Max                               : Not Found
        Min                               : Not Found
        Avg                               : Not Found

  1. There are many Triton server logs, and from your configuration file you start Triton in native mode. Did you succeed in running the DeepStream sample deepstream-ssd-parser? Here is the link: deepstream_python_apps/apps/deepstream-ssd-parser at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
  2. You can refer to the DeepStream ONNX Runtime sample at /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/source1_primary_classifier.txt; please set config-file=config_infer_primary_classifier_densenet_onnx.txt
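For reference, the change suggested in item 2 goes in the [primary-gie] section of source1_primary_classifier.txt; a sketch of the relevant keys (the other keys in the shipped file are left unchanged):

```
[primary-gie]
enable=1
# 1 selects the nvinferserver (Triton) plugin in deepstream-app configs
plugin-type=1
config-file=config_infer_primary_classifier_densenet_onnx.txt
```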

Hello,

1. As for the deepstream-ssd-parser sample, here’s how I ran it:

python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4 

There are absolutely no errors, but nothing is being processed either: nvidia-smi shows no GPU usage whatsoever, and the pipeline output (out.mp4) is 0 bytes in size.

And here’s the log:

Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
I0904 10:11:15.922192 361 libtorch.cc:1309] TRITONBACKEND_Initialize: pytorch
I0904 10:11:15.922209 361 libtorch.cc:1319] Triton TRITONBACKEND API version: 1.8
I0904 10:11:15.922212 361 libtorch.cc:1325] 'pytorch' TRITONBACKEND API version: 1.8
2022-09-04 10:11:15.991721: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-09-04 10:11:16.016621: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
I0904 10:11:16.016660 361 tensorflow.cc:2176] TRITONBACKEND_Initialize: tensorflow
I0904 10:11:16.016670 361 tensorflow.cc:2186] Triton TRITONBACKEND API version: 1.8
I0904 10:11:16.016673 361 tensorflow.cc:2192] 'tensorflow' TRITONBACKEND API version: 1.8
I0904 10:11:16.016677 361 tensorflow.cc:2216] backend configuration:
{"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.400000"}}
I0904 10:11:16.035775 361 onnxruntime.cc:2319] TRITONBACKEND_Initialize: onnxruntime
I0904 10:11:16.035795 361 onnxruntime.cc:2329] Triton TRITONBACKEND API version: 1.8
I0904 10:11:16.035799 361 onnxruntime.cc:2335] 'onnxruntime' TRITONBACKEND API version: 1.8
I0904 10:11:16.035801 361 onnxruntime.cc:2365] backend configuration:
{}
I0904 10:11:16.046240 361 openvino.cc:1207] TRITONBACKEND_Initialize: openvino
I0904 10:11:16.046253 361 openvino.cc:1217] Triton TRITONBACKEND API version: 1.8
I0904 10:11:16.046256 361 openvino.cc:1223] 'openvino' TRITONBACKEND API version: 1.8
I0904 10:11:16.100710 361 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f348e000000' with size 268435456
I0904 10:11:16.100856 361 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0904 10:11:16.101228 361 server.cc:524] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0904 10:11:16.101265 361 server.cc:551] 
+-------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------+
| Backend     | Path                                                                    | Config                                                                       |
+-------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------+
| pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so                 | {}                                                                           |
| tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so         | {"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.400000"}} |
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so         | {}                                                                           |
| openvino    | /opt/tritonserver/backends/openvino_2021_4/libtriton_openvino_2021_4.so | {}                                                                           |
+-------------+-------------------------------------------------------------------------+------------------------------------------------------------------------------+

I0904 10:11:16.101281 361 server.cc:594] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0904 10:11:16.126865 361 metrics.cc:651] Collecting metrics for GPU 0: NVIDIA GeForce GTX 1650
I0904 10:11:16.127152 361 tritonserver.cc:1962] 
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                                                          |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                                                         |
| server_version                   | 2.20.0                                                                                                                                                         |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binar |
|                                  | y_tensor_data statistics trace                                                                                                                                 |
| model_repository_path[0]         | /opt/nvidia/deepstream/deepstream-6.1/samples/triton_model_repo                                                                                                |
| model_control_mode               | MODE_EXPLICIT                                                                                                                                                  |
| strict_model_config              | 0                                                                                                                                                              |
| rate_limit                       | OFF                                                                                                                                                            |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                                                      |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                                                       |
| response_cache_byte_size         | 0                                                                                                                                                              |
| min_supported_compute_capability | 6.0                                                                                                                                                            |
| strict_readiness                 | 1                                                                                                                                                              |
| exit_timeout                     | 30                                                                                                                                                             |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0904 10:11:16.127963 361 model_repository_manager.cc:997] loading: ssd_inception_v2_coco_2018_01_28:1
I0904 10:11:16.228394 361 tensorflow.cc:2276] TRITONBACKEND_ModelInitialize: ssd_inception_v2_coco_2018_01_28 (version 1)
I0904 10:11:16.231778 361 tensorflow.cc:2325] TRITONBACKEND_ModelInstanceInitialize: ssd_inception_v2_coco_2018_01_28_0 (GPU device 0)
W0904 10:11:17.128718 361 metrics.cc:427] Unable to get power limit for GPU 0. Status:Success, value:0.000000
2022-09-04 10:11:17.471737: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2499950000 Hz
2022-09-04 10:11:17.472385: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f33e74ea800 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-09-04 10:11:17.472433: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2022-09-04 10:11:17.473526: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-09-04 10:11:17.473798: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-09-04 10:11:17.474016: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f33ddb382c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-09-04 10:11:17.474031: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce GTX 1650, Compute Capability 7.5
2022-09-04 10:11:17.474312: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-09-04 10:11:17.474437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties: 
name: NVIDIA GeForce GTX 1650 major: 7 minor: 5 memoryClockRate(GHz): 1.68
pciBusID: 0000:01:00.0
2022-09-04 10:11:17.474455: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-09-04 10:11:17.474538: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-09-04 10:11:17.474564: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-09-04 10:11:17.474586: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-09-04 10:11:17.474634: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-09-04 10:11:17.474659: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-09-04 10:11:17.474680: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-09-04 10:11:17.474715: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-09-04 10:11:17.474819: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-09-04 10:11:17.474897: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0
2022-09-04 10:11:17.474917: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-09-04 10:11:17.474924: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212]      0 
2022-09-04 10:11:17.474929: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0:   N 
2022-09-04 10:11:17.474980: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-09-04 10:11:17.475080: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-09-04 10:11:17.475170: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1564 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1650, pci bus id: 0000:01:00.0, compute capability: 7.5)
I0904 10:11:17.593195 361 model_repository_manager.cc:1152] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
W0904 10:11:18.128967 361 metrics.cc:427] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0904 10:11:19.130400 361 metrics.cc:427] Unable to get power limit for GPU 0. Status:Success, value:0.000000
2022-09-04 10:11:19.446272: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-09-04 10:11:20.612957: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
  2. As for the DeepStream ONNX Runtime sample, it worked okay.
    Here’s how I ran it:
deepstream-app -c source1_primary_classifier.txt 

And here’s the log:

(deepstream-app:315): GLib-GObject-WARNING **: 10:09:19.589: g_object_set_is_valid_property: object class 'GstNvInferServer' has no property named 'input-tensor-meta'
0:00:00.149357517   315 0x555913d0e030 WARN           nvinferserver gstnvinferserver_impl.cpp:293:validatePluginConfig:<primary_gie> warning: Configuration file unique-id reset to: 1
2022-09-04 10:09:19.974893: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:1 initialized model: densenet_onnx

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:194>: Pipeline ready

WARNING from primary_gie: Configuration file unique-id reset to: 1
Debug info: gstnvinferserver_impl.cpp(293): validatePluginConfig (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
** INFO: <bus_callback:180>: Pipeline running


**PERF:  FPS 0 (Avg)	
**PERF:  37.76 (37.69)	
**PERF:  30.01 (31.78)	
**PERF:  30.02 (31.01)	
**PERF:  29.96 (30.70)	
**PERF:  30.00 (30.54)	
^C** ERROR: <_intr_handler:140>: User Interrupted.. 

Quitting
App run successful

How can I proceed from here?

Please use an H.264 elementary stream file:
python3 deepstream_yolo.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.