Error while building deepstream_tlt_apps

Hardware Platform: GPU (Tesla T4)
DeepStream Version: 5.0
TensorRT Version: 7.0.0.11
NVIDIA GPU Driver Version: 440.33.01
CUDA Version: 10.2
cuDNN Version: 7.6.5
Ubuntu Version: 18.04

I installed DeepStream and TensorRT, and I'm trying to run some examples, but I hit a problem while building deepstream_tlt_apps:

make[1]: Entering directory '/home/rockefella09/deepstream_tlt_apps/nvdsinfer_customparser_dssd_tlt'
g++ -o libnvds_infercustomparser_dssd_tlt.so nvdsinfer_custombboxparser_dssd_tlt.cpp -I/opt/nvidia/deepstream/deepstream-5.0//sources/includes -I/usr/local/cuda-10.2/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-10.2/lib64 -lcudart -lcublas -Wl,--end-group
In file included from nvdsinfer_custombboxparser_dssd_tlt.cpp:14:0:
/opt/nvidia/deepstream/deepstream-5.0//sources/includes/nvdsinfer_custom_impl.h:128:10: fatal error: NvCaffeParser.h: No such file or directory
 #include "NvCaffeParser.h"
          ^~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:41: recipe for target 'libnvds_infercustomparser_dssd_tlt.so' failed
make[1]: *** [libnvds_infercustomparser_dssd_tlt.so] Error 1
make[1]: Leaving directory '/home/rockefella09/deepstream_tlt_apps/nvdsinfer_customparser_dssd_tlt'
Makefile:68: recipe for target 'deepstream-custom' failed
make: *** [deepstream-custom] Error 2

Can you paste the full command you used to build deepstream_tlt_apps?
BTW, which README did you follow?

So I followed the instructions from here: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream).
I executed the download part, and for the build only the second set of instructions, starting from export DS_SRC_PATH=…

The command I used for building is just make, and then I got the error above.
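Concretely, the steps were roughly the following (the DS_SRC_PATH value here is inferred from the include path in the g++ line of the error log, so treat it as an assumption):

export DS_SRC_PATH=/opt/nvidia/deepstream/deepstream-5.0
make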

1) Could you search for NvCaffeParser.h on your device, e.g. with the find sketch below?
2) Refer to: Error while running SSD provided in Deepstream
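A minimal search command (any equivalent tool, such as locate, works just as well):

find / -name NvCaffeParser.h 2>/dev/null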

  1. Yes, I can find NvCaffeParser.h in TensorRT/include.
  2. I followed the instructions from 2) and added the correct path in the Makefile, but I still got the same error.
     Here's my Makefile, maybe I missed something.

CFLAGS+= -I$(DS_SRC_PATH)/sources/includes -I/home/rockefella09/TensorRT-7.0.0.11/include

SRCS:= $(wildcard *.c)

INCS:= $(wildcard *.h)

PKGS:= gstreamer-1.0

OBJS:= $(SRCS:.c=.o)

CFLAGS+= `pkg-config --cflags $(PKGS)`

LIBS:= `pkg-config --libs $(PKGS)` -L/home/rockefella09/TensorRT-7.0.0.11/lib

LIBS+= -L$(LIB_INSTALL_DIR) -lnvdsgst_meta -lnvds_meta \
       -Wl,-rpath,$(LIB_INSTALL_DIR)
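As a sanity check, a dry run prints the expanded compile command without building anything, so you can verify the -I path actually lands in it (target name taken from the error log above):

make -n libnvds_infercustomparser_dssd_tlt.so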

  1. That's the header file under your TRT OSS folder only; I'm not sure why it is missing on your system. It should normally be available at /usr/include/x86_64-linux-gnu/. How did you install TensorRT 7? Can you share your steps?
  2. Please modify the Makefile in each child folder too. That will solve your issue; see the sketch below for one way to apply it to all the parser folders at once.
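A sketch of applying the same edit everywhere (assumptions: each nvdsinfer_customparser_* folder has its own Makefile, and those Makefiles use CFLAGS/LIBS variables like the snippet above; adjust the names if yours differ). Appending at the end of each file is enough, because make expands variables used in recipes lazily:

# Append the TensorRT tar-install paths to every child parser Makefile
for f in nvdsinfer_customparser_*/Makefile; do
  printf 'CFLAGS+= -I/home/rockefella09/TensorRT-7.0.0.11/include\n' >> "$f"
  printf 'LIBS+= -L/home/rockefella09/TensorRT-7.0.0.11/lib\n' >> "$f"
done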
  1. I followed this: Installation Guide :: NVIDIA Deep Learning TensorRT Documentation, and my TensorRT folder is /home/TensorRT-7.0.0.11. Also, I do not have any files related to TensorRT in /usr/include/x86_64-linux-gnu/.

  2. Yes, that solved the problem, but it seems my TensorRT installation is still broken, because when I try to run the demo:

    ./deepstream-custom -c pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264 -d

I get some errors which seem to be caused by the TensorRT installation:

Now playing: pgie_frcnn_tlt_config.txt
0:00:02.346853731 2567 0x556a84e9e210 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 3 output network tensors.
0:00:13.238736309 2567 0x556a84e9e210 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /home/rockefella09/deepstream_tlt_apps/models/frcnn/faster_rcnn_resnet10.etlt_b1_gpu0_fp16.engine successfully
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input_image 3x272x480
1 OUTPUT kFLOAT proposal 300x4x1
2 OUTPUT kFLOAT dense_regress_td/BiasAdd 300x16x1x1
3 OUTPUT kFLOAT dense_class_td/Softmax 300x5x1x1

0:00:13.246974990 2567 0x556a84e9e210 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:pgie_frcnn_tlt_config.txt sucessfully
Running…
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
0:00:15.283189527 2567 0x556a8493e940 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:15.283214496 2567 0x556a8493e940 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(1946): gst_nvinfer_output_loop (): /GstPipeline:ds-custom-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Deleting pipeline

From your latest log, faster_rcnn_resnet10.etlt_b1_gpu0_fp16.engine is generated successfully.
You just hit the error "Internal data stream error".

Please set the variable below and run again:
$ export DISPLAY=:0
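If it still hangs after that, one quick check (assuming the x11-utils package is installed) is whether an X server is actually reachable on that display; on a headless cloud VM there may be none, in which case on-screen rendering with -d cannot work:

xdpyinfo -display :0 | head -n 3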

Now it works (I have no errors anymore and it seems like it's running). I used the "-d" flag because I wanted to display the results, but without success (it hangs at Running…). I am using a cloud compute engine from GCP and I am connected to the display via remote desktop, hoping that I can see the predictions on the video, but as I mentioned, it hangs.

If I remove the "-d" flag, it seems to work until the end, but I can't figure out where to find the resulting video.

There is an output file: out.h264
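Note that out.h264 is a raw H.264 elementary stream, so some players won't open it directly. One way to view it, assuming GStreamer with the libav plugins is installed (which a DeepStream setup normally has):

gst-launch-1.0 filesrc location=out.h264 ! h264parse ! avdec_h264 ! videoconvert ! autovideosink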

I was blind, sorry.

Thank you for your help and patience.