Pipeline stuck when using nvinferserver

I am trying to replace nvinfer with nvinferserver in a DeepStream application, but after swapping the element the pipeline gets stuck on startup.

• Hardware Platform : GPU
• DeepStream Version : 6.0.1
• TensorRT Version : 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) : 470.103.01
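
For context, nvinferserver is configured with a protobuf-text file whose `backend` block selects Triton, rather than the key/value format nvinfer uses. The sketch below is only a hypothetical illustration of that layout; the model name and repository path are placeholders, not taken from this post:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  backend {
    triton {
      model_name: "peoplenet"          # placeholder model name
      version: -1                      # -1 = latest version in the repo
      model_repo {
        root: "./triton_model_repo"    # placeholder repository path
        strict_model_config: true
      }
    }
  }
}
```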

The log is:

0:00:01.127045696 593738 0x7f6154440190 WARN           nvinferserver gstnvinferserver_impl.cpp:290:validatePluginConfig:<nvinfer-detector-person> warning: Configuration file unique-id reset to: 1
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
I0505 14:48:26.266519 593738 metrics.cc:290] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU
I0505 14:48:26.441111 593738 libtorch.cc:1029] TRITONBACKEND_Initialize: pytorch
I0505 14:48:26.441132 593738 libtorch.cc:1039] Triton TRITONBACKEND API version: 1.4
I0505 14:48:26.441138 593738 libtorch.cc:1045] 'pytorch' TRITONBACKEND API version: 1.4
2022-05-05 20:18:26.546768: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
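
The `backend.trt_is is deprecated. updated it to backend.triton` warning above indicates the config file still uses the old key; nvinferserver rewrites it internally but logs the warning each run. As a minimal sketch (assuming, per the warning text, that only the key name changed and the fields inside the block are the same), the config could be migrated with a plain rename:

```python
def migrate_backend_key(config_text: str) -> str:
    # The nvinferserver protobuf-text config deprecated the
    # "trt_is" backend block in favor of "triton"; the warning
    # says only the key name changed, so a rename is sufficient.
    return config_text.replace("trt_is {", "triton {")

old_config = """infer_config {
  backend {
    trt_is {
      model_name: "peoplenet"
    }
  }
}"""
print(migrate_backend_key(old_config))
```

Silencing the warning will not by itself fix a hung pipeline, but it removes one source of noise while debugging.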

Which app are you testing? What are your code modifications and configuration file?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.