Hi, I am trying to run the default YOLO model provided inside the SDK, but I am facing some issues.
Whenever I run the default deepstream-app, I get the following error:
./deepstream-app: symbol lookup error: /root/deepstream_sdk_v4.0.2_x86_64/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so: undefined symbol: createLReLUPlugin
For further context, I had previously run the MaskRCNN model given here, which works perfectly fine. However, I did replace the file /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.6.0.1 with the newly generated one, as suggested in the link above.
Before making the above-mentioned change I was able to run the YOLO model with deepstream-app just fine, so replacing the old lib file with the newly generated one might have caused the issue.
For reference, I am running a Tesla V100 and using the latest DeepStream container from NGC, “deepstream:4.0.2-19.12-devel”.
The YOLO model does not need the new libnvinfer_plugin.so from GitHub - NVIDIA/TensorRT (TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators).
I find there is no “ReluPlugin” in the TensorRT 6.0 branch open-source code (“TensorRT/plugin”).
I am actually looking to run both the Yolo Model and Mask RCNN inside a single container. Is there any workaround to this issue which would allow me to run both without replacing the lib files?
Can you try implementing ReluPlugin in the open-source “TensorRT/plugin”?
Another way:
This is for the Mask RCNN process:
$ LD_PRELOAD=<path-to-TRT-OSS-libnvinfer_plugin.so.6.0.1> deepstream-app -c
Once you use LD_PRELOAD, the default libnvinfer_plugin.so will not be picked up, since both libraries have the same SONAME:
https://stackoverflow.com/questions/426230/what-is-the-ld-preload-trick
The YOLO process can then run with the default library.
I was able to get both to run using the LD_PRELOAD trick. Thanks for the help!