Problem building the NV-YOLO plugin

I am trying to install the deepstream-yolo-app following the instructions in:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/yolo/README.md

I receive the following error when I try to build the NV-YOLO plugin for Tesla:

/usr/bin/ld: cannot find -lnvdsgst_helper
/usr/bin/ld: cannot find -lnvdsgst_meta
collect2: error: ld returned 1 exit status
CMakeFiles/gstnvyolo.dir/build.make:103: recipe for target 'libgstnvyolo.so' failed
make[2]: *** [libgstnvyolo.so] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/gstnvyolo.dir/all' failed
make[1]: *** [CMakeFiles/gstnvyolo.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

I am using Deepstream SDK v3, TensorRT-5.0.2.6, Gstreamer v1.8.3

Hi,

After DeepStream installation, you should be able to find libnvdsgst_helper.so and libnvdsgst_meta.so under /usr/local/deepstream/.
Could you check this first?
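A quick way to run that check from a shell (a sketch; the path assumes the default install prefix /usr/local/deepstream mentioned above, so adjust it if you installed elsewhere):

```shell
# Look for the two DeepStream GStreamer libraries the linker complained about.
for lib in libnvdsgst_helper.so libnvdsgst_meta.so; do
  if find /usr/local/deepstream -name "$lib" 2>/dev/null | grep -q .; then
    echo "$lib: found"
  else
    echo "$lib: missing -- reinstall the DeepStream SDK"
  fi
done
```

If the libraries exist but the linker still cannot find them, adding their directory to LD_LIBRARY_PATH (and to the -L search paths used by the build) is usually enough.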

If you cannot find the libraries, please follow the README located at ${DeepStream_Release} to reinstall the SDK.
Thanks.

I tried pulling the DeepStream Docker image from NGC, so the DS installation should be fine now. However, I am still having problems, since the container's CUDA version (10.1) does not match the CUDA version the plugin expects (the Makefile depends on exactly CUDA 10.0). Is there a way to use the nvyoloplugin in the DS Docker image?
Moreover, is there a Docker image where both DeepStream 3 and TensorRT 5 are built together?

Thanks

Hi,

Sorry, we haven't tried this combination.
However, there is no dependency between the CUDA version and the YOLO plugin.

You can update the Makefile based on your environment.
[url]https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/Makefile.config#L26[/url]
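For example, to point the build at the container's CUDA 10.1 instead of 10.0, you can change the CUDA version line in Makefile.config (a sketch; it operates on a scratch copy for illustration, and the exact variable name and format should be checked against the Makefile.config linked above):

```shell
# Scratch copy standing in for Makefile.config in the repo root.
printf 'CUDA_VER:=10.0\n' > /tmp/Makefile.config

# Rewrite the CUDA version line to match the toolkit in the container.
sed -i 's/^CUDA_VER.*/CUDA_VER:=10.1/' /tmp/Makefile.config

grep CUDA_VER /tmp/Makefile.config   # -> CUDA_VER:=10.1
```

After changing the version, run make clean before rebuilding so no objects compiled against the old toolkit are reused.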

Thanks.