Failed to set pipeline to PAUSED error when integrating a YOLO TensorRT engine with DeepStream 5.0

I have a DeepStream 5.0 Docker container started with:
docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0/ nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
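
Before starting the container I also allow local connections to my X server on the host, since the app will render video to an X11 window through the mounted socket (a host-side step I am assuming here, not part of the command above):

xhost +local: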

When I tried to integrate my YOLO TensorRT engine file with DeepStream, I got the following error:

** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
App run failed

I am attaching the config files below. I am running deepstream-app -c deepstream_app_config_yoloV3.txt inside the container and seeing the above error.

deepstream_app_config_yoloV3.txt (2.4 KB) config_infer_primary_yolov3.txt (2.0 KB)

Hi @jazeel.jk,
1. Could you share your setup info, as other topics do?
2. I tried the docker command and the files, and got the failure below (see also the label-file check sketched after this list):
# deepstream-app -c deepstream_app_config_yoloV3.txt
2021-01-25 14:21:31.566599: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
** ERROR: <parse_labels_file:263>: Failed to open label file '/root/ds/yolov3_labels.txt':No such file or directory
** ERROR: <parse_gie:1193>: Failed while parsing label file '/root/ds/yolov3_labels.txt'
** ERROR: <parse_gie:1210>: parse_gie failed
** ERROR: <parse_config_file:513>: parse_config_file failed
** ERROR: main:627: Failed to parse config file 'deepstream_app_config_yoloV3.txt'
Quitting
App run failed

3. If you can't share the repo, please share the complete log.
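
For reference, the parse_labels_file error above just means the labelfile-path in the nvinfer config points to a file that does not exist inside the container. A quick check (the /root/ds/yolov3_labels.txt path is taken from the error output, not from your attachment):

# print the label-file path the nvinfer config references
grep labelfile-path config_infer_primary_yolov3.txt
# confirm that file actually exists inside the container
ls -l /root/ds/yolov3_labels.txt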

Hi @mchi,
I am using the DeepStream 5.0 Docker container on my Linux laptop, which has CUDA 10.2. I have a YOLO TensorRT engine file trained with the NVIDIA Transfer Learning Toolkit, and I want to integrate that engine with DeepStream. I am using the config files shared above.
I set the source type to 1, since I want to use my built-in webcam for the input video, and I used sink type 2. The relevant sections look roughly like the sketch below.
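
As a sketch based on the standard deepstream-app sample config (the camera resolution and device values are placeholders, not necessarily what my actual file uses):

[source0]
enable=1
# type=1 selects a V4L2 camera source, i.e. the built-in webcam
type=1
camera-width=640
camera-height=480
camera-fps-n=30
camera-v4l2-dev-node=0

[sink0]
enable=1
# type=2 is the EGL windowed sink, which needs access to the X server
type=2
sync=0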

root@7bf5cf1e9d6f:/opt/nvidia/deepstream/deepstream-5.0/samples/streams# deepstream-app -c deepstream_app_config_yoloV3.txt
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
App run failed
root@7bf5cf1e9d6f:/opt/nvidia/deepstream/deepstream-5.0/samples/streams# 

This is the complete log I am receiving. Any help would be appreciated. Thank you.

I have solved the previous issue. I was mounting a local volume at /tmp/.X11-unix instead of bind-mounting the host X11 socket (/tmp/.X11-unix:/tmp/.X11-unix), so the sink could not reach the X server. Moving back to the default mount fixed it, as shown below.
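
That is, the working command bind-mounts the host X11 socket itself, exactly as in my first post (the local-directory mount shown commented out is a hypothetical example of my earlier mistake):

# wrong: a local directory mounted over /tmp/.X11-unix hides the host X11 socket
#   docker run ... -v $HOME/ds_share:/tmp/.X11-unix ...
# right: bind-mount the host X11 socket itself
docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  -w /opt/nvidia/deepstream/deepstream-5.0/ \
  nvcr.io/nvidia/deepstream:5.0.1-20.09-triton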

The error I am receiving now is given below:

root@bdfcd5606345:/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/box_det# deepstream-app -c deepstream_app_config_yoloV3.txt
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_ARGUMENT: getPluginCreator could not find plugin BatchTilePlugin_TRT version 1
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/box_det/fp16/trt.engine
0:00:00.468874418   194 0x7fbde8002390 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/box_det/fp16/trt.engine failed
0:00:00.468912844   194 0x7fbde8002390 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/box_det/fp16/trt.engine failed, try rebuild
0:00:00.468936990   194 0x7fbde8002390 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Validator error: FirstDimTile_0: Unsupported operation _BatchTilePlugin_TRT
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.549996141   194 0x7fbde8002390 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

I could solve the issue by building and installing TensorRT OSS inside the DeepStream docker as well.
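
For anyone hitting the same getPluginCreator could not find plugin BatchTilePlugin_TRT error: that plugin lives in the TensorRT OSS libnvinfer_plugin, so the stock plugin library in the container can neither deserialize the TLT-exported engine nor rebuild it. A rough sketch of the steps, assuming the TensorRT 7.0 that ships with this DeepStream 5.0.1 container and a GPU with compute capability 7.5; check your actual versions with dpkg -l | grep nvinfer and adjust the branch, GPU_ARCHS, and library version suffix accordingly (the OSS build also needs a recent CMake):

# run inside the DeepStream container
git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
# GPU_ARCHS must match your GPU's compute capability (7.5 assumed here)
cmake .. -DGPU_ARCHS="75" \
         -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
         -DTRT_OUT_DIR=$(pwd)/out
make nvinfer_plugin -j$(nproc)
# back up the stock plugin library, then replace it with the OSS build
# (the exact .so version suffix depends on the installed TensorRT)
cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 \
   /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0.bak
cp out/libnvinfer_plugin.so.7.0.* /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0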
