DeepStream SSD parser example gets stuck

I am trying to run the DeepStream SSD parser example from deepstream_python_apps, but the code gets stuck after some time. I tried it on two machines, a Tesla T4 4GB and a V100S-8Q, and the problem is the same on both.

• Hardware Platform (GPU)
• DeepStream Version: 6.0
• NVIDIA GPU Driver Version (valid for GPU only): 470.103.01
• Issue Type (questions, new requirements, bugs): error
• How to reproduce the issue / What steps I followed?

root@584c75c27f0a:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/apps/deepstream-ssd-parser# python3 deepstream_ssd_parser.py BigBuckBunny.mp4 
Creating Pipeline 
 
Creating Source
Creating H264Parser
Creating Decoder
Creating NvStreamMux
Creating Nvinferserver
Creating Nvvidconv
Creating OSD (nvosd)
Creating Queue
Creating Converter 2 (nvvidconv2)
Creating capsfilter
Creating Encoder
Creating Code Parser
Creating Container
Creating Sink
Playing file BigBuckBunny.mp4 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
I0414 17:17:12.172144 77 metrics.cc:290] Collecting metrics for GPU 0: GRID V100S-8Q
I0414 17:17:12.425650 77 libtorch.cc:1029] TRITONBACKEND_Initialize: pytorch
I0414 17:17:12.425693 77 libtorch.cc:1039] Triton TRITONBACKEND API version: 1.4
I0414 17:17:12.425698 77 libtorch.cc:1045] 'pytorch' TRITONBACKEND API version: 1.4
2022-04-14 17:17:12.523932: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
I0414 17:17:12.564985 77 tensorflow.cc:2169] TRITONBACKEND_Initialize: tensorflow
I0414 17:17:12.565027 77 tensorflow.cc:2179] Triton TRITONBACKEND API version: 1.4
I0414 17:17:12.565033 77 tensorflow.cc:2185] 'tensorflow' TRITONBACKEND API version: 1.4
I0414 17:17:12.565038 77 tensorflow.cc:2209] backend configuration:
{"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.400000"}}
I0414 17:17:12.566924 77 onnxruntime.cc:1970] TRITONBACKEND_Initialize: onnxruntime
I0414 17:17:12.566958 77 onnxruntime.cc:1980] Triton TRITONBACKEND API version: 1.4
I0414 17:17:12.566967 77 onnxruntime.cc:1986] 'onnxruntime' TRITONBACKEND API version: 1.4
I0414 17:17:12.584554 77 openvino.cc:1193] TRITONBACKEND_Initialize: openvino
I0414 17:17:12.584598 77 openvino.cc:1203] Triton TRITONBACKEND API version: 1.4
I0414 17:17:12.584604 77 openvino.cc:1209] 'openvino' TRITONBACKEND API version: 1.4
I0414 17:17:12.791346 77 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x10018000000' with size 268435456
I0414 17:17:12.791874 77 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0414 17:17:12.792960 77 server.cc:504] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0414 17:17:12.793030 77 server.cc:543] 
+-------------+-------------------------------+-------------------------------+
| Backend     | Path                          | Config                        |
+-------------+-------------------------------+-------------------------------+
| tensorrt    | <built-in>                    | {}                            |
| pytorch     | /opt/tritonserver/backends/py | {}                            |
|             | torch/libtriton_pytorch.so    |                               |
| tensorflow  | /opt/tritonserver/backends/te | {"cmdline":{"allow-soft-place |
|             | nsorflow1/libtriton_tensorflo | ment":"true","gpu-memory-frac |
|             | w1.so                         | tion":"0.400000"}}            |
| onnxruntime | /opt/tritonserver/backends/on | {}                            |
|             | nxruntime/libtriton_onnxrunti |                               |
|             | me.so                         |                               |
| openvino    | /opt/tritonserver/backends/op | {}                            |
|             | envino/libtriton_openvino.so  |                               |
+-------------+-------------------------------+-------------------------------+

I0414 17:17:12.793068 77 server.cc:586] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+

I0414 17:17:12.793146 77 tritonserver.cc:1718] 
+----------------------------------+------------------------------------------+
| Option                           | Value                                    |
+----------------------------------+------------------------------------------+
| server_id                        | triton                                   |
| server_version                   | 2.13.0                                   |
| server_extensions                | classification sequence model_repository |
|                                  |  model_repository(unload_dependents) sch |
|                                  | edule_policy model_configuration system_ |
|                                  | shared_memory cuda_shared_memory binary_ |
|                                  | tensor_data statistics                   |
| model_repository_path[0]         | /opt/nvidia/deepstream/deepstream-6.0/sa |
|                                  | mples/triton_model_repo                  |
| model_control_mode               | MODE_EXPLICIT                            |
| strict_model_config              | 0                                        |
| pinned_memory_pool_byte_size     | 268435456                                |
| cuda_memory_pool_byte_size{0}    | 67108864                                 |
| min_supported_compute_capability | 6.0                                      |
| strict_readiness                 | 1                                        |
| exit_timeout                     | 30                                       |
+----------------------------------+------------------------------------------+

I0414 17:17:12.794836 77 model_repository_manager.cc:1045] loading: ssd_inception_v2_coco_2018_01_28:1
I0414 17:17:12.895427 77 tensorflow.cc:2269] TRITONBACKEND_ModelInitialize: ssd_inception_v2_coco_2018_01_28 (version 1)
I0414 17:17:12.897961 77 tensorflow.cc:2318] TRITONBACKEND_ModelInstanceInitialize: ssd_inception_v2_coco_2018_01_28_0 (GPU device 0)
2022-04-14 17:17:12.899402: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-04-14 17:17:12.899540: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-04-14 17:17:12.899877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1666] Found device 0 with properties: 
name: GRID V100S-8Q major: 7 minor: 0 memoryClockRate(GHz): 1.597
pciBusID: 0000:06:00.0
2022-04-14 17:17:12.899932: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2022-04-14 17:17:12.900021: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-04-14 17:17:12.900109: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2022-04-14 17:17:12.900140: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2022-04-14 17:17:12.900180: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
2022-04-14 17:17:12.900211: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2022-04-14 17:17:12.900244: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-04-14 17:17:12.900313: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-04-14 17:17:12.900643: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-04-14 17:17:12.900900: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1794] Adding visible gpu devices: 0
W0414 17:17:14.176565 77 metrics.cc:395] Unable to get power limit for GPU 0: Success
W0414 17:17:14.176786 77 metrics.cc:410] Unable to get power usage for GPU 0: Success
W0414 17:17:16.178627 77 metrics.cc:395] Unable to get power limit for GPU 0: Success
W0414 17:17:16.178688 77 metrics.cc:410] Unable to get power usage for GPU 0: Success
2022-04-14 17:17:17.407553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-04-14 17:17:17.407619: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212]      0 
2022-04-14 17:17:17.407626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0:   N 
2022-04-14 17:17:17.407854: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-04-14 17:17:17.408217: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-04-14 17:17:17.408524: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1082] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-04-14 17:17:17.408811: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3276 MB memory) -> physical GPU (device: 0, name: GRID V100S-8Q, pci bus id: 0000:06:00.0, compute capability: 7.0)
2022-04-14 17:17:17.427406: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f7141610e70 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-04-14 17:17:17.427568: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GRID V100S-8Q, Compute Capability 7.0
2022-04-14 17:17:17.430499: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2095050000 Hz
2022-04-14 17:17:17.430823: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f71388ddae0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-04-14 17:17:17.430864: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
I0414 17:17:17.622849 77 model_repository_manager.cc:1212] successfully loaded 'ssd_inception_v2_coco_2018_01_28' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:5 initialized model: ssd_inception_v2_coco_2018_01_28
W0414 17:17:18.180868 77 metrics.cc:395] Unable to get power limit for GPU 0: Success
W0414 17:17:18.180918 77 metrics.cc:410] Unable to get power usage for GPU 0: Success
2022-04-14 17:17:21.116870: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-04-14 17:17:22.485519: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11

The code gets stuck at this point.

It happens on both machines, so it is not machine-specific.

Hi @sandeep.yadav.07780,
This sample needs an H.264 elementary stream file as input (the pipeline feeds the file source directly into an H.264 parser), so an MP4 container such as BigBuckBunny.mp4 will not work.
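
As a sketch of one way to get a suitable input (assuming ffmpeg is available in the container; the output filename BigBuckBunny.h264 is arbitrary), you can extract the raw H.264 stream from the MP4 without re-encoding, or fall back to the sample stream that DeepStream 6.0 normally ships under samples/streams:

# Extract the H.264 elementary stream from the MP4 container (stream copy, no re-encode)
ffmpeg -i BigBuckBunny.mp4 -c:v copy -bsf:v h264_mp4toannexb -an BigBuckBunny.h264

# Run the sample on the elementary stream
python3 deepstream_ssd_parser.py BigBuckBunny.h264

# Or, if present, use the bundled sample stream
python3 deepstream_ssd_parser.py /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264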

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.