deepstream-plugins make failed

trt_utils.h:35:21: fatal error: opencv2/dnn/dnn.hpp: No such file or directory
compilation terminated.
Makefile:43: recipe for target 'build/calibrator.o' failed
make: *** [build/calibrator.o] Error 1

getting this error

trying this code -

https://github.com/vat-nvidia/deepstream-plugins

Jetson TX2
Jetpack 3.2
OpenCV 3.2.0 which comes with Jetpack
Cuda 9.0

Hi pharatekomal,

Which head version are you using?
We checked the latest head version; it requires TensorRT 4.0.
Did you build OpenCV with dnn support yourself?

head - master

trt 4.0.4-1 + cuda9.0

I'm not building OpenCV with dnn myself. I'm using the OpenCV installed with JetPack.

Hi,

There are a couple of issues with this question:

1.
Our default OpenCV doesn't support the dnn module.
Please build it from source.

2.
This plugin requires TensorRT 4.0, which is only available in JetPack 3.3.
How did you get TensorRT 4.0 working with JetPack 3.2?
Please remember that there are compatibility constraints between the CUDA driver (in the OS) and the CUDA toolkit.
You may hit dependency issues if you mix packages from different JetPack releases.

3.
Please also note that DeepStream for Jetson is only available on JetPack 3.2/3.2.1.
You may need to reset to a previous commit to make it compatible with TRT 3.0.
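
The two checks above (dnn support in point 1, package mixing in point 2) can be scripted. A rough sketch, assuming default header locations and an Ubuntu-based L4T rootfs; adjust paths for a custom install prefix:

```shell
# 1. Does the installed OpenCV ship the dnn module? (default include paths)
if [ -f /usr/include/opencv2/dnn/dnn.hpp ] || \
   [ -f /usr/local/include/opencv2/dnn/dnn.hpp ]; then
    echo "OpenCV dnn headers: found"
else
    echo "OpenCV dnn headers: missing - rebuild OpenCV from source"
fi

# 2. Which TensorRT / CUDA packages are actually installed? All of them
#    should come from the same JetPack release.
dpkg -l 2>/dev/null | grep -E 'libnvinfer|cuda-toolkit' || echo "no matching packages"
cat /etc/nv_tegra_release 2>/dev/null || echo "not an L4T system"
```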

Thanks.

  1. Okay. I had that doubt, as I had checked for the module in /usr/.
    I will install it.

  2. I have Jetpack 3.2.1. It came with TensorRT 4.0.

# R28 (release), REVISION: 2.1, GCID: 11272647, BOARD: t186ref, EABI: aarch64, DATE: Thu May 17 07:29:06 UTC 2018

TensorRT -

ii  libnvinfer-dev 4.0.4-1+cuda9.0   arm64   TensorRT development libraries and headers

This plugin requires TensorRT 4.0, which is only available in JetPack 3.3.
Please also note that DeepStream for Jetson is only available on JetPack 3.2/3.2.1.

So I can’t use this plugin on Jetson?

Hi,

We are checking whether this plugin can work with JetPack 3.2 by falling back to a previous commit.
We will update you later.

Thanks.

Okay. That will be great.
Thank you very much.

Hi,

This Yolo plugin was targeted at DS 2.0 on Tesla, and it was recently updated with a standalone TensorRT app to enable Jetson users.
You can find this information in our readme:
https://github.com/vat-nvidia/deepstream-plugins#note
---------------------------------------------------
Note

Tegra users who are currently using Deepstream 1.5, please use the standalone TRT app as your starting point and incorporate that inference pipeline in your inference plugin.
---------------------------------------------------
You can also use this code to write your own deepstream plugins for YOLO on Jetson.
Thanks.

Okay. Thanks for your reply.
Will try and let you know.

Tried.
But I'm getting this CUDA error:

In file included from network_config.cpp:26:0:
network_config.h:31:30: fatal error: cuda_runtime_api.h: No such file or directory
compilation terminated.
Makefile:62: recipe for target 'build/network_config.o' failed
make[1]: *** [build/network_config.o] Error 1
make[1]: Leaving directory '/home/nvidia/deepstream-plugins/sources/gst-yoloplugin/yoloplugin_lib'
Makefile:53: recipe for target 'deps' failed
make: *** [deps] Error 2

Looks like it was a CUDA version issue.
I changed the CUDA version variable and that bug is solved. Now I'm getting this error -

In file included from trt_utils.h:40:0,
                 from ds_image.h:28,
                 from calibrator.h:29,
                 from calibrator.cpp:26:
plugin_factory.h:72:23: error: ‘RegionParameters’ in namespace ‘nvinfer1::plugin’ does not name a type
     nvinfer1::plugin::RegionParameters m_RegionParameters{m_NumBoxes, m_NumCoords, m_NumClasses,
                       ^
Makefile:62: recipe for target 'build/calibrator.o' failed
make[1]: *** [build/calibrator.o] Error 1
make[1]: Leaving directory '/home/nvidia/deepstream-plugins/sources/gst-yoloplugin/yoloplugin_lib'
Makefile:53: recipe for target 'deps' failed
make: *** [deps] Error 2
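
(The "CUDA version variable" here is CUDA_VER in Makefile.config; it has to match the toolkit actually installed under /usr/local. A quick check, assuming the default CUDA install location:)

```shell
# List installed CUDA toolkits; CUDA_VER in Makefile.config should match
# one of these (JetPack 3.2/3.3 ship CUDA 9.0).
ls -d /usr/local/cuda-*/ 2>/dev/null || echo "no CUDA toolkit under /usr/local"
```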

Hi,

The TRT-yolo app requires TensorRT 4.0.

Here are my installation steps:
1. Flash device with JetPack 3.3

2. Install OpenCV 3.4.0
https://github.com/AastaNV/JEP/blob/master/script/install_opencv3.4.0.sh

3. Build TRT-yolo app

$ sudo apt-get install libgflags-dev
$ git clone https://github.com/vat-nvidia/deepstream-plugins.git
$ cd deepstream-plugins/

Apply this change

diff --git a/Makefile.config b/Makefile.config
index 1d73ce1..689d46e 100644
--- a/Makefile.config
+++ b/Makefile.config
@@ -23,7 +23,7 @@
 
 #Update the install directory paths for dependencies below
 CXX=g++
-CUDA_VER:=9.2
-OPENCV_INSTALL_DIR:= /path/to/opencv-3.4.0
-TENSORRT_INSTALL_DIR:= /path/to/TensorRT-4.0
-DEEPSTREAM_INSTALL_DIR:= /path/to/DeepStream_Release_2.0
\ No newline at end of file
+CUDA_VER:=9.0
+OPENCV_INSTALL_DIR:= ../../../OpenCV-3.4.0/opencv-3.4.0/release/
+TENSORRT_INSTALL_DIR:= /usr/lib/aarch64-linux-gnu/
+DEEPSTREAM_INSTALL_DIR:= /path/to/DeepStream_Release_2.0
diff --git a/sources/apps/TRT-yolo/Makefile b/sources/apps/TRT-yolo/Makefile
index f59173d..c607a42 100644
--- a/sources/apps/TRT-yolo/Makefile
+++ b/sources/apps/TRT-yolo/Makefile
@@ -39,7 +39,8 @@ CXXFLAGS:= -O2 -std=c++11 -lstdc++fs -Wall -Wunused-function -Wunused-variable -
 LIBS:= -L "$(TENSORRT_INSTALL_DIR)/lib" -lnvinfer -lnvinfer_plugin -Wl,-rpath="$(TENSORRT_INSTALL_DIR)/lib" \
        -L "/usr/local/cuda-$(CUDA_VER)/lib64" -lcudart -lcublas -lcurand -Wl,-rpath="/usr/local/cuda-$(CUDA_VER)/lib64" \
        -L "$(OPENCV_INSTALL_DIR)/lib" -lopencv_core -lopencv_imgproc -lopencv_imgcodecs -lopencv_highgui -lopencv_dnn -Wl,-rpath="$(OPENCV_INSTALL_DIR)/lib" \
-       -L "/usr/lib/x86_64-linux-gnu" -lgflags
+       -L "/usr/lib/x86_64-linux-gnu" -lgflags \
+       -I "/usr/local/cuda-$(CUDA_VER)/include/"
 
 .PHONY: all deps install clean

Remember to update the OpenCV installation folder.

$ make

Thanks

Okay.
Looks fine.
Will check and update you.
Thank you once again.

Tried AastaLLL's suggestion.

Got the following error.

yoloplugin_lib.cpp:(.text+0xf00): undefined reference to `cv::dnn::experimental_dnn_v3::blobFromImages(std::vector<cv::Mat, std::allocator<cv::Mat> > const&, double, cv::Size_<int>, cv::Scalar_<double> const&, bool, bool)'
collect2: error: ld returned 1 exit status
Makefile:49: recipe for target 'TRT-yolo-app' failed

Looking into it.

Hi,

Please remember to update the OpenCV-3.4.0 path in the Makefile.config:
Ex.

OPENCV_INSTALL_DIR:= ../../../OpenCV-3.4.0/opencv-3.4.0/release/

Thanks.
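
A quick way to confirm the configured path actually points at a built OpenCV (a sketch; `/path/to/opencv-3.4.0/release` is a placeholder to replace with your own build directory):

```shell
# Sanity-check OPENCV_INSTALL_DIR before running make: the dnn library
# must exist under its lib/ directory, or linking fails as shown above.
OPENCV_INSTALL_DIR=/path/to/opencv-3.4.0/release   # placeholder - use your path
ls "$OPENCV_INSTALL_DIR"/lib/libopencv_dnn* 2>/dev/null \
    || echo "libopencv_dnn not found under $OPENCV_INSTALL_DIR/lib"
```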

Hi AastaLLL,

Thanks for the correction. Actually, I had not given the full path to OpenCV 3.4. I also tried with OpenCV 3.4.3 and it works.

I am not able to debug the following error. It says the layer type is not supported.

File does not exist : /home/nvidia/deepstream-plugins/sources/gst-yoloplugin/yoloplugin_lib/models/yolov2-kFLOAT-batch1.engine
Unable to find cached TensorRT engine for network : yolov2 precision : kFLOAT and batch size :1
Creating a new TensorRT Engine
Loading pre-trained weights…
Loading complete!
Unsupported layer type --> ““button” aria-label=“Switch branches or tags” aria-expanded=“false” aria-haspopup=“true”>”
TRT-yolo-app: yolo.cpp:363: void Yolo::createYOLOEngine(int, std::__cxx11::string, std::__cxx11::string, std::__cxx11::string, nvinfer1::DataType, Int8EntropyCalibrator*): Assertion `0’ failed.
Aborted (core dumped)

Problem was with the cfg file.
Resolved.

Made it work. Thanks.

One more experiment I did was to load the Tiny weights and run the model.

For Tiny V2

File does not exist : /home/nvidia/deepstream-plugins/sources/gst-yoloplugin/yoloplugin_lib/models/yolov2-kFLOAT-batch8.engine
Unable to find cached TensorRT engine for network : yolov2 precision : kFLOAT and batch size :8
Creating a new TensorRT Engine
Loading pre-trained weights...
Loading complete!
      layer               inp_size            out_size       weightPtr
(1)   conv-bn-leaky     3 x 416 x 416      16 x 416 x 416    496   
(2)   maxpool          16 x 416 x 416      16 x 208 x 208    496   
(3)   conv-bn-leaky    16 x 208 x 208      32 x 208 x 208    5232  
(4)   maxpool          32 x 208 x 208      32 x 104 x 104    5232  
(5)   conv-bn-leaky    32 x 104 x 104      64 x 104 x 104    23920 
(6)   maxpool          64 x 104 x 104      64 x  52 x  52    23920 
(7)   conv-bn-leaky    64 x  52 x  52     128 x  52 x  52    98160 
(8)   maxpool         128 x  52 x  52     128 x  26 x  26    98160 
(9)   conv-bn-leaky   128 x  26 x  26     256 x  26 x  26    394096
(10)  maxpool         256 x  26 x  26     256 x  13 x  13    394096
(11)  conv-bn-leaky   256 x  13 x  13     512 x  13 x  13    1575792
(12)  maxpool         512 x  13 x  13     512 x  12 x  12    1575792
(13)  conv-bn-leaky   512 x  12 x  12    1024 x  12 x  12    6298480
(14)  conv-bn-leaky  1024 x  12 x  12     512 x  12 x  12    11019120
(15)  conv-linear     512 x  12 x  12     425 x  12 x  12    11237145
(16)  region          425 x  12 x  12     425 x  12 x  12    11237145
Output layers :
region_16
Building the TensorRT Engine...
Building complete!
Serializing the TensorRT Engine...
Serialized plan file cached at location : /home/nvidia/deepstream-plugins/sources/gst-yoloplugin/yoloplugin_lib/models/yolov2-kFLOAT-batch8.engine
Loading TRT Engine...
Loading Complete!
TRT-yolo-app: yolov2.cpp:40: YoloV2::YoloV2(uint): Assertion `m_OutputIndex != -1' failed.
Aborted (core dumped)

Edit: Set region_16 as the OutputBlobname. Compiled and ran, but no detections were made.

For Tiny V3

File does not exist : /home/nvidia/deepstream-plugins/sources/gst-yoloplugin/yoloplugin_lib/models/yolov3-kFLOAT-batch8.engine
Unable to find cached TensorRT engine for network : yolov3 precision : kFLOAT and batch size :8
Creating a new TensorRT Engine
Loading pre-trained weights...
Loading complete!
      layer               inp_size            out_size       weightPtr
(1)   conv-bn-leaky     3 x 416 x 416      16 x 416 x 416    496   
(2)   maxpool          16 x 416 x 416      16 x 208 x 208    496   
(3)   conv-bn-leaky    16 x 208 x 208      32 x 208 x 208    5232  
(4)   maxpool          32 x 208 x 208      32 x 104 x 104    5232  
(5)   conv-bn-leaky    32 x 104 x 104      64 x 104 x 104    23920 
(6)   maxpool          64 x 104 x 104      64 x  52 x  52    23920 
(7)   conv-bn-leaky    64 x  52 x  52     128 x  52 x  52    98160 
(8)   maxpool         128 x  52 x  52     128 x  26 x  26    98160 
(9)   conv-bn-leaky   128 x  26 x  26     256 x  26 x  26    394096
(10)  maxpool         256 x  26 x  26     256 x  13 x  13    394096
(11)  conv-bn-leaky   256 x  13 x  13     512 x  13 x  13    1575792
(12)  maxpool         512 x  13 x  13     512 x  12 x  12    1575792
(13)  conv-bn-leaky   512 x  12 x  12    1024 x  12 x  12    6298480
(14)  conv-bn-leaky  1024 x  12 x  12     256 x  12 x  12    6561648
(15)  conv-bn-leaky   256 x  12 x  12     512 x  12 x  12    7743344
(16)  conv-linear     512 x  12 x  12     255 x  12 x  12    7874159
(17)  yolo            255 x  12 x  12     255 x  12 x  12    7874159
(18)  route                  -            256 x  12 x  12    7874159
(19)  conv-bn-leaky   256 x  12 x  12     128 x  12 x  12    7907439
(20)  upsample        128 x  12 x  12     128 x  24 x  24        - 
ERROR: route_20: all concat input tensors must have the same dimensions except on the concatenation axis
TRT-yolo-app: trt_utils.cpp:295: std::__cxx11::string dimsToString(nvinfer1::Dims): Assertion `d.nbDims >= 1' failed.
Aborted (core dumped)

I am looking at the error right now.

Hi,
Sorry for the delay in replying.

While doing this step -

Run the following command from sources/gst-yoloplugin/yoloplugin_lib to build and install the plugin:
make && sudo make install

output of make is -

if [ ! -d "models" ]; then mkdir -p models; fi
if [ ! -d "calibration" ]; then mkdir -p calibration; fi
if [ ! -d "build" ]; then mkdir -p build; fi
if [ ! -d "detections" ]; then mkdir -p detections; fi
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/network_config.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  network_config.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/calibrator.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  calibrator.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/yolov2.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  yolov2.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/yoloplugin_lib.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  yoloplugin_lib.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/trt_utils.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  trt_utils.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/yolov3.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  yolov3.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/plugin_factory.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  plugin_factory.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/yolo.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  yolo.cpp
g++ -I"/usr/lib/aarch64-linux-gnu//include" -I"/usr/local/cuda-9.0/include" -I "/home/nvidia/opencv-3.4.0/release//include" -c -o build/ds_image.o -O2 -std=c++11 -lstdc++fs -fPIC -Wall -Wunused-function -Wunused-variable `pkg-config --cflags glib-2.0`  ds_image.cpp
/usr/local/cuda-9.0/bin/nvcc -c -o build/kernels.o -arch=compute_50 --shared -Xcompiler -fPIC  kernels.cu
ar rcs libyoloplugin.a  ./build/network_config.o  ./build/calibrator.o  ./build/yolov2.o  ./build/yoloplugin_lib.o  ./build/trt_utils.o  ./build/yolov3.o  ./build/plugin_factory.o  ./build/yolo.o  ./build/ds_image.o  ./build/kernels.o

but sudo make install gives -

make: *** No rule to make target 'install'.  Stop.
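
(The error above just means this sub-Makefile defines no `install` target. You can list the targets a Makefile does define with a rough grep; the example below uses a throwaway Makefile for illustration:)

```shell
# Heuristic: target names are unindented lines starting with a word
# followed by ':'. Demonstrated on a small throwaway Makefile.
mf=$(mktemp)
printf 'all: deps\n\t@echo build\ndeps:\n\t@echo deps\n' > "$mf"
grep -E '^[A-Za-z_.-]+:' "$mf"    # prints "all: deps" and "deps:"
rm -f "$mf"
```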

And then, running TRT-yolo-app,
I'm getting the following error.

File does not exist : sources/gst-yoloplugin/yoloplugin_lib/models/yolov2-kFLOAT-batch1.engine
Unable to find cached TensorRT engine for network : yolov2 precision : kFLOAT and batch size :1
Creating a new TensorRT Engine
File does not exist : sources/gst-yoloplugin/yoloplugin_lib/data/yolov2.cfg
TRT-yolo-app: yolo.cpp:134: void Yolo::createYOLOEngine(int, std::__cxx11::string, std::__cxx11::string, std::__cxx11::string, nvinfer1::DataType, Int8EntropyCalibrator*): Assertion `fileExists(yoloConfigPath)' failed.

I have provided the yolov2.cfg file and weights file in the data folder, as specified.

Small mistake on my part - I was running TRT-yolo-app from the wrong folder!

@pharatekomal
In network_config.cpp, change the kDS_LIB_PATH to the full path. A relative path does not work.
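
(The underlying issue: a relative path resolves against the current working directory, not the executable's location. A small shell demonstration; the directory names are made up for illustration:)

```shell
# Simulate the repo layout, then show the same relative path succeeding
# from the "repo root" and failing from a subdirectory.
demo=$(mktemp -d)
mkdir -p "$demo/app" "$demo/data"
echo "[net]" > "$demo/data/yolov2.cfg"

cd "$demo"                      # cwd = repo root: relative path resolves
ls data/yolov2.cfg              # prints: data/yolov2.cfg

cd "$demo/app"                  # cwd = wrong folder: same path fails
ls data/yolov2.cfg 2>/dev/null || echo "data/yolov2.cfg: not found"
```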

Yes, I updated it. The small mistake was that I was running TRT-yolo-app from the wrong folder. That's it.