Errors when building the YOLO plugin independently of DeepStream

This is regarding the YOLO plugin for DeepStream, provided at:

https://github.com/vat-nvidia/deepstream-plugins

I was able to run it successfully on a Tesla system, as a DeepStream plugin.

The README says this can also be run without DeepStream, as an independent app (trt-yolo-app). I ran this successfully as well, on the same setup (with DeepStream installed).

Now, I am looking to run trt-yolo-app, without installing DeepStream, and not having much luck. My setup is as follows:

  • Ubuntu 16.04, with a Tesla P100 GPU
  • Software installed: CUDA 10, TensorRT 5, OpenCV 3.4.1, cmake, glib2.0-dev

As per the instructions in the README.md,

  • Downloaded deepstream-plugins-master from the above GitHub repository.
  • Downloaded yolov3.cfg and yolov3.weights into the data/ folder.
  • Selected #define MODEL_V3 in network_config.h.
  • Modified Makefile.config as follows:

#Update the install directory paths for dependencies below
CXX=g++
CUDA_VER:=10.0

#Set to TEGRA for jetson or TESLA for dGPU's
PLATFORM:=TESLA

#For Tesla Plugins
OPENCV_INSTALL_DIR:= /usr/local
TENSORRT_INSTALL_DIR:= /usr
DEEPSTREAM_INSTALL_DIR:= /path/to/DeepStream_Release_2.0

#For Tegra Plugins
NVGSTIVA_APP_INSTALL_DIR:= /path/to/nvgstiva-app_sources

  • cd sources/plugins/gst-yoloplugin-tesla; make

The error message seen is:

Makefile:46: *** DEEPSTREAM_INSTALL_DIR variable is not set in Makefile.config. Stop.
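Presumably the plugin Makefile unconditionally validates the DeepStream path before anything else, so this fires regardless of which target is built. A sketch of what such a guard typically looks like (the exact form is an assumption inferred from the error message; it may also check that the path actually exists rather than just that the variable is set):

```make
# Sketch of the guard implied by the error above (exact wording assumed)
ifeq ($(DEEPSTREAM_INSTALL_DIR),)
  $(error DEEPSTREAM_INSTALL_DIR variable is not set in Makefile.config)
endif
```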

Questions:

  • What changes are needed in the Makefiles to get this working?
  • Is it necessary to install the GStreamer pre-requisites?

Thanks.

Is there any reason why you would try to build a DeepStream plugin if you want to use the standalone implementation?

Please follow the instructions here to build and use trt-yolo-app
https://github.com/vat-nvidia/deepstream-plugins#building-and-running-the-trt-yolo-app

I don’t really need to build the DeepStream plugin.
However, I did try skipping this step, and directly proceeding to the section on deepstream-yolo-app.

cd sources/apps/deepstream-yolo
make && sudo make install
cd ../../../
deepstream-yolo-app /path/to/sample_video.h264

When invoking make, I get the error:

Makefile:55: *** DEEPSTREAM_INSTALL_DIR variable is not set in Makefile.config. Stop.

I haven’t set the DEEPSTREAM_INSTALL_DIR variable, because DeepStream isn’t installed.
Please advise.

Thanks.

The deepstream-yolo-app is an app built on top of the DeepStream SDK and uses multiple DS plugins. You cannot use it without installing DeepStream and the YOLO plugin. I think you should be looking at trt-yolo-app, which works independently of DeepStream.

Thanks, that’s a good point, which I ought to have realized right at the beginning.

In doing so, I now encounter the following error (when running make in the trt-yolo folder). Perhaps I ought to upgrade one of the components. Any suggestions which one?

I have TensorRT 5, CUDA 10, and OpenCV 3.4.1.

Thanks

The Make error is:

g++ -I"/usr/include/x86_64-linux-gnu" -I"/usr/local/cuda-10.0/include" -I "/usr/local/include" -c -o build/trt_utils.o -O2 -std=c++11 -lstdc++fs -ldl -fPIC -Wall -Wunused-function -Wunused-variable -Wfatal-errors -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include trt_utils.cpp
trt_utils.cpp: In function ‘nvinfer1::ILayer* netAddUpsample(int, std::map<std::__cxx11::basic_string, std::__cxx11::basic_string >&, std::vector&, int&, nvinfer1::ITensor*, nvinfer1::INetworkDefinition*)’:
trt_utils.cpp:624:69: error: ‘nvinfer1::MatrixOperation’ has not been declared
= network->addMatrixMultiply(*preM->getOutput(0), nvinfer1::MatrixOperation::kNONE, *input,
^
compilation terminated due to -Wfatal-errors.
Makefile:85: recipe for target ‘build/trt_utils.o’ failed
make[1]: *** [build/trt_utils.o] Error 1
make[1]: Leaving directory ‘/home/prashanth.bhat/deepstream-plugins-master/sources/lib’
Makefile:75: recipe for target ‘deps’ failed
make: *** [deps] Error 2

The app may be getting linked against an older version of TensorRT.

Please check the following:

  1. That the TENSORRT_INSTALL_DIR path points to the TRT 5 directory
  2. That the ldd output confirms the executable is linked against TRT 5 libs
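For step 2, a generic way to script the check (a sketch: BIN and PATTERN are placeholders — the defaults let the snippet run anywhere, but for this thread you would point BIN at the trt-yolo-app binary and use PATTERN=nvinfer, since TensorRT ships its core runtime as libnvinfer):

```shell
#!/bin/sh
# Sketch: list the shared libraries a binary resolves, filtered by name.
# Defaults are placeholders so the snippet runs on any Linux box; for this
# thread, use BIN=./trt-yolo-app and PATTERN=nvinfer and check that the
# matched lines resolve to the TRT 5 library directory.
BIN="${BIN:-/bin/ls}"
PATTERN="${PATTERN:-libc}"
ldd "$BIN" | grep -i "$PATTERN"
```

If the matched line points at an older libnvinfer (or at an unexpected directory), the build is picking up the wrong TensorRT installation.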

I’ve got it working now, after upgrading from TensorRT 5.0.0 to 5.0.2.

Thanks again for all your help.