NvCaffeParser.h missing in docker for Deepstream 5.0, objectDetector_Yolo

Hi,

I’m trying to run objectDetector_Yolo in Deepstream 5.0 docker container, but
make -C nvdsinfer_custom_impl_Yolo
fails with: fatal error: NvCaffeParser.h: No such file or directory

root@205cf433812a:/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo# make -C nvdsinfer_custom_impl_Yolo 
make: Entering directory '/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo'
g++ -c -o nvdsinfer_yolo_engine.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I../../includes -I/usr/local/cuda-10.2/include nvdsinfer_yolo_engine.cpp
In file included from nvdsinfer_yolo_engine.cpp:23:0:
../../includes/nvdsinfer_custom_impl.h:128:10: fatal error: NvCaffeParser.h: No such file or directory
 #include "NvCaffeParser.h"
          ^~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:51: recipe for target 'nvdsinfer_yolo_engine.o' failed
make: *** [nvdsinfer_yolo_engine.o] Error 1
make: Leaving directory '/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo'

I pulled the docker image with:
docker pull nvcr.io/nvidia/deepstream:5.0-20.07-samples
and run it with:

docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0  nvcr.io/nvidia/deepstream:5.0-20.07-samples

Deepstream-app and deepstream-test1 were both built successfully. I don’t see the missing header anywhere inside the docker.
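To double-check whether the header exists anywhere in the container, a quick diagnostic sketch (the /usr/include path is an assumption based on where Debian-packaged TensorRT headers usually land on x86; not verified for this image):

```shell
# Search the whole filesystem for the TensorRT Caffe parser header;
# no output means the -samples image really doesn't ship it.
find / -name 'NvCaffeParser.h' 2>/dev/null

# Also look at the usual header location for x86 TensorRT packages
# (assumed path, may differ in this image).
ls /usr/include/x86_64-linux-gnu/ 2>/dev/null | grep -i nv
```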

I’m using Tesla V100

How should I proceed?

Please provide your setup info as in other topics.

Default docker setup that I have problems with:
• Hardware Platform (Jetson / GPU) GPU Tesla V100
• DeepStream Version 5.0-20.07-samples (docker image)
• TensorRT Version 7.0.0-1 (as is supplied within the container)
• NVIDIA GPU Driver Version (valid for GPU only) 418.87.00, CUDA Version: 10.2
• Sample application and the configuration file content objectDetector_Yolo with default config
• Reproduce steps make -C nvdsinfer_custom_impl_Yolo

My local settings used with the previous version that work just fine:
• Hardware Platform (Jetson / GPU) GPU Tesla V100
• DeepStream Version 4.0.2-1
• TensorRT Version libnvparsers6 6.0.1-1+cuda10.1 amd64 TensorRT parsers libraries
tensorrt 6.0.1.5-1+cuda10.1 amd64 Meta package of TensorRT
• NVIDIA GPU Driver Version (valid for GPU only) 418.87.00, CUDA Version: 10.1
• Sample application and the configuration file content as above
• Reproduce steps as above

I just realized that objectDetector_Yolo works if I use nvcr.io/nvidia/deepstream:5.0-20.07-triton instead of the -samples image as stated in the first post, but I don’t understand why.
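One way to see the difference between the two images (a hedged check, assuming the containers are Debian-based and TensorRT is installed via apt): list the installed TensorRT packages in each. An image that can compile the plugin should carry the -dev packages that provide NvCaffeParser.h, while runtime-only libs are enough to merely run the prebuilt samples.

```shell
# Inside each container: list TensorRT-related packages.
# Runtime libs (e.g. libnvinfer7, libnvparsers7) suffice to *run* the
# samples; compiling nvdsinfer_custom_impl_Yolo additionally needs the
# headers from dev packages (e.g. libnvinfer-dev, libnvparsers-dev).
dpkg -l | grep -E 'nvinfer|nvparsers'
```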

Hi @do.kw,
As explained at https://ngc.nvidia.com/catalog/containers/nvidia:deepstream , the samples docker is only for running the samples, for example:

 $ cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app
 $ deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt

Samples : The DeepStream samples container extends the base container to also include sample applications that are included in the DeepStream SDK along with associated config files, models and streams. It thereby provides a ready means by which to explore the DeepStream SDK using the samples. Not supported on A100 (deepstream:5.0-20.07-samples)

Limitations: On the samples docker container, a “Cuda failure: status=4” message is sometimes observed for the deepstream-test1 application on the T4 platform, but the test should run fine.

Well, isn’t Yolo using the same sample app? As per: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_C_sample_apps.html

All in all, I figured it out. But given that the samples docker contains that Yolo config, as well as all the files needed to build the nvds custom inference library, and yet there is no way to run them, I think the description of the containers could be more detailed :)

Understood, we will consider making the container descriptions clearer.

Thanks!

1 Like

Hi,

I’m having the same issue with the docker image for Jetson (deepstream-l4t) with Yolo.

../../includes/nvdsinfer_custom_impl.h:128:10: fatal error: NvCaffeParser.h: No such file or directory
 #include "NvCaffeParser.h"

I’m working with an “aaeon 8220” on JetPack 4.4.
I’m trying with the “nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples” docker image. I also tested with the base image, but then it says it can’t find “/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo”.

Am I missing something? Isn’t this the correct image?

thx

I managed to make it work.

It seems that during “docker build” the binds to CUDA/TensorRT/etc. aren’t made; they only exist once you run “docker run”, so the build can’t find “NvCaffeParser.h”.

I did the build with “docker run” instead and everything worked.
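For anyone hitting the same thing on Jetson, a workaround sketch based on the posts above (the image tag and paths come from this thread; --runtime nvidia assumes the NVIDIA Container Runtime is installed on the L4T host): do the compile in a running container rather than in a Dockerfile, so the runtime can mount the CUDA/TensorRT files from the host first.

```shell
# Build the Yolo plugin at "docker run" time instead of "docker build"
# time; on Jetson the CUDA/TensorRT mounts only exist in a container
# started with the NVIDIA runtime.
docker run -it --rm --runtime nvidia \
  -w /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo \
  nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples \
  make -C nvdsinfer_custom_impl_Yolo
```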

1 Like