I want to test out DeepStream with the CaffeMNIST reference application from GitHub. I am running TensorRT 5.0 and DeepStream 4.0 as containers, and I've mounted a volume where the application resides.
I am not sure what path the application requires in the following command:
cmake -D DS_SDK_ROOT=<DS SDK Root> -D TRT_SDK_ROOT=<TRT SDK Root> -D CMAKE_BUILD_TYPE=Release …
The application in the volume can only have access to either DeepStream OR TensorRT, but not both. So if I access the application's directory from within the DeepStream container, going to the volume and accessing the application, it will only have the path leading to DeepStream and not TensorRT, and vice versa with TensorRT. How can I run this application with DeepStream and TensorRT running as containers while supplying the appropriate paths?
Thank you in advance.
I suppose your app is https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/CaffeMNIST
“Set the DS_SDK_ROOT variable to point to your DeepStream SDK root directory and TRT_SDK_ROOT to TensorRT SDK root directory.”
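For example, assuming the two SDKs are unpacked under /opt (the paths below are just placeholders; substitute your actual install roots), the build of the custom plugin would look like:

cd CaffeMNIST/nvdsinfer_custom_impl_CaffeMNIST
mkdir build && cd build
# Hypothetical SDK locations; point these at your actual install roots
cmake -D DS_SDK_ROOT=/opt/deepstream_sdk_v4.0 \
      -D TRT_SDK_ROOT=/opt/TensorRT-5.0.2.6 \
      -D CMAKE_BUILD_TYPE=Release ..
make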
Why do you have to run this code? There are several samples in the DeepStream SDK package to study. What do you require?
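If the blocker is that no single container can see both SDK trees, one option is to bind-mount the TensorRT SDK from the host into the DeepStream container, so both roots resolve inside one filesystem. A minimal sketch, with hypothetical host paths and an illustrative image tag:

# Mount both the application volume and the TensorRT SDK tree
# into the DeepStream container (all host paths are placeholders)
docker run --runtime=nvidia -it \
    -v /home/user/apps:/apps \
    -v /home/user/TensorRT-5.0.2.6:/opt/tensorrt \
    nvcr.io/nvidia/deepstream:4.0-19.07 /bin/bash

Inside that one container you can then pass DS_SDK_ROOT (wherever DeepStream lives inside the image) and TRT_SDK_ROOT=/opt/tensorrt to cmake.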
The main use case for the application I want to deploy in the future (and test with DeepStream now) revolves mainly around image recognition. Additionally, when I looked at the YOLO reference application, it also involves running TensorRT and DeepStream together, and since in my case I have to run them as containers, the issue I am running into will persist with other applications as well.
I tried linking TensorRT to the DeepStream container once it was running; however, the application, residing in a volume mounted into both containers, can still only resolve the path to the root of one container at a time, not both.
Any suggestions or recommendations?
Hello,
I have a similar doubt about running the CaffeMNIST reference app on Xavier using DS 3.0.
First of all, TensorRT is installed via JetPack 4.1 only, so I cannot find a separate TensorRT SDK and hence could not complete the prerequisite step 'Copy mnist_mean.binaryproto and mnist.caffemodel from the data/mnist directory in the TensorRT SDK to the CaffeMNIST/data directory', since these two files are not to be found anywhere either.
In its absence, what should I specify for the TRT_SDK_ROOT path?
You don't have to copy them to a specific folder. You just need to set the correct DeepStream config path.
I suggest using DeepStream 4.0.
Go through all the documentation and run test1, test2, … That will solve your issue.
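Note that on a Jetson flashed with JetPack, TensorRT and its sample data typically live under /usr/src/tensorrt, so /usr/src/tensorrt/data/mnist is worth checking for the MNIST files. In the nvinfer config you then point the model keys at wherever the files actually are; a sketch with hypothetical paths:

[property]
# Point these at the actual locations of the Caffe files on your device
model-file=/home/nvidia/CaffeMNIST/data/mnist.caffemodel
proto-file=/home/nvidia/CaffeMNIST/data/mnist.prototxt
labelfile-path=/home/nvidia/CaffeMNIST/data/labels.txt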
I have already run the sample apps test1 and test2 successfully. To run CaffeMNIST, I am following the commands as instructed in the GitHub README file. When I run the make command, I encounter the following issue:
nvidia@jetson-0423018054743:~/Downloads/deepstream_reference_apps-master/CaffeMNIST/nvdsinfer_custom_impl_CaffeMNIST/build$ make
[ 33%] Building CXX object CMakeFiles/nvdsinfer_custom_impl_CaffeMNIST.dir/nvdsiplugin_CaffeMNIST.cpp.o
In file included from /home/nvidia/Downloads/deepstream_reference_apps-master/CaffeMNIST/nvdsinfer_custom_impl_CaffeMNIST/nvdsiplugin_CaffeMNIST.cpp:26:0:
/home/nvidia/Downloads/deepstream_reference_apps-master/CaffeMNIST/nvdsinfer_custom_impl_CaffeMNIST/factoryCaffeMNISTLegacy.h:39:10: fatal error: fp16.h: No such file or directory
#include "fp16.h"
^~~~~~~~
compilation terminated.
CMakeFiles/nvdsinfer_custom_impl_CaffeMNIST.dir/build.make:62: recipe for target 'CMakeFiles/nvdsinfer_custom_impl_CaffeMNIST.dir/nvdsiplugin_CaffeMNIST.cpp.o' failed
make[2]: *** [CMakeFiles/nvdsinfer_custom_impl_CaffeMNIST.dir/nvdsiplugin_CaffeMNIST.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/nvdsinfer_custom_impl_CaffeMNIST.dir/all' failed
make[1]: *** [CMakeFiles/nvdsinfer_custom_impl_CaffeMNIST.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Please guide me in rectifying the issue so that I can successfully run the CaffeMNIST app, since it is needed for my use case.
Did you build "https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/CaffeMNIST"? That app is based on DeepStream 3.0. You just need to get the model from that repo, deploy it in DeepStream 4.0, and write your own output parser. Refer to sources/libs/nvdsinfer_customparser.
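In practice that means compiling your parser into a shared library and referencing it from the nvinfer config. A sketch of the relevant keys (the library path is a placeholder, and the function name follows the SDK's nvdsinfer_customparser sample; verify both against your SDK version):

[property]
# Custom output parser built from sources/libs/nvdsinfer_customparser
custom-lib-path=/path/to/libnvds_infercustomparser.so
# For a classifier network such as MNIST digit recognition
parse-classifier-func-name=NvDsInferClassiferParseCustomSoftmax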