I have installed both Digits and DeepStream on my Linux system and have
managed to run all the DeepStream samples without any problems.
I have also created my own .h264 input file and that too works.
I then tried to build my own GoogleNet model using Digits, the model
appears to work inside Digits but when I try to use it with DeepStream
nothing happens. I get a few lines of debug information stating it is using
FP32, then it stops: no errors, no stack dumps, nothing.
With the supplied model it outputs debug about using YUV420, with my
model it does not get that far.
I have noticed that the first few lines of the supplied model's deploy.txt file
and mine are different. Mine starts with a shape declaration, whereas
the supplied model starts with a layer declaration.
As I am very new to this, can someone tell me whether Digits is appropriate
for building models to be used in DeepStream? If so, a link to an FAQ would
help, or some suggestions as to what parameters I should use in Digits when
creating the model. (I am using 224x224 JPEGs to build the model.)
I guess you are facing a classification problem and running it with our NVDECINFER sample.
Before running the example, please make sure you have modified the sample to fit your custom model.
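The items that usually need changing are the network definition, the weights, the mean file, and the output blob name. The variable names below are placeholders, not the sample's real interface (check the sample's README for the actual flags or source lines); the `input_shape` block at the top of a DIGITS-exported deploy file is standard Caffe syntax and should be accepted by the parser:

```shell
# Placeholder checklist -- adapt to how your DeepStream release's
# nvDecInfer sample actually takes these (hard-coded paths or arguments).
DEPLOY=deploy.prototxt            # DIGITS-exported network definition
MODEL=snapshot_iter_N.caffemodel  # trained weights from the DIGITS job
MEAN=mean.binaryproto             # DIGITS mean file, if the sample subtracts a mean
OUTPUT_BLOB=softmax               # must match the top blob of the final layer
echo "deploy=$DEPLOY model=$MODEL mean=$MEAN output=$OUTPUT_BLOB"
```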
The problem may be due to a wrong version of libnvinfer.so, though.
libdeepstream requires .so.3 and my system is using .so.4; however,
the example in the SDK appears to work despite the wrong library version.
I will try to sort that out later.
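One way to see which libnvinfer copies are installed and which one the dynamic linker will actually load (the library path is an assumption for a default Ubuntu x86_64 install; adjust for your system):

```shell
# List the installed libnvinfer versions and their symlinks.
ls -l /usr/lib/x86_64-linux-gnu/libnvinfer.so* 2>/dev/null \
  || echo "no libnvinfer found under /usr/lib/x86_64-linux-gnu"

# Ask the dynamic linker cache which copy will be resolved at run time.
ldconfig -p 2>/dev/null | grep libnvinfer \
  || echo "libnvinfer not in the ldconfig cache"
```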
My Digits model does work with the Python example in the Digits GitHub repository,
and it also works with the classifier.cpp example in the Caffe GitHub repository.
I did what you requested, but there was no source directory, so I just called the binary with:
./giexec --deploy=../../model/deploy.prototxt --output=softmax
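Note that when giexec is given only --deploy, it builds the engine with random weights, which is useful for timing but not for checking whether the trained model parses and runs. A sketch with placeholder paths, passing the weights as well:

```shell
# Placeholder paths -- point these at the files DIGITS exported.
# Without --model, giexec fills the network with random weights.
if [ -x ./giexec ]; then
  ./giexec --deploy=deploy.prototxt \
           --model=snapshot.caffemodel \
           --output=softmax
else
  echo "giexec not built; compile it from the TensorRT samples first"
fi
```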
I downloaded the .tar file and extracted the giexec example.
When I compile it I have to set CUDA to 8.0; it then compiles,
but I get the same error as before when I run it.
If I try to compile it with CUDA 9.0 I get two linking errors:
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libnvinfer.so: undefined reference to `cudnnSetConvolutionGroupCount'
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/libnvinfer.so: undefined reference to `cudnnGetConvolutionForwardAlgorithmMaxCount'
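Both of those symbols were introduced in cuDNN 7, so these link errors usually mean libnvinfer was built against cuDNN 7 while the linker is finding an older cuDNN. One way to check the installed version from the header (the path assumes a deb install; tar installs often put it under /usr/local/cuda/include):

```shell
# Print the cuDNN version from the installed header, if present.
HDR=/usr/include/cudnn.h
if [ -f "$HDR" ]; then
  grep -m 3 "CUDNN_MAJOR\|CUDNN_MINOR\|CUDNN_PATCHLEVEL" "$HDR"
else
  echo "cudnn.h not found at $HDR"
fi
```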
I just tried to compile it with CUDA 9.1; it compiles with a couple of warnings about a conflict between cuBLAS 8.0 and cuBLAS 9.1,
but when I run it I get a core dump.
My computer recently updated itself to CUDA 9.1, so I suspect I will have to wait until
everything else catches up with CUDA 9.1 rather than trying to set up all the correct library versions myself.
TensorRT supports CUDA 8.0 and CUDA 9.0, but not CUDA 9.1 yet.
Please make sure you have downloaded the corresponding package;
the download links differ by CUDA version.
If you are using the Debian installer, it can upgrade TensorRT directly.
If you are using the tarball package, please check this comment for how to set up CUDA/cuDNN/TensorRT.
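Before choosing a TensorRT package, it helps to confirm which CUDA toolkit is actually active. A quick check (the fallback path assumes a default /usr/local/cuda install):

```shell
# Print the active CUDA toolkit version so the matching TensorRT
# package (CUDA 8.0 vs CUDA 9.0) can be chosen.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep -i release
elif [ -f /usr/local/cuda/version.txt ]; then
  cat /usr/local/cuda/version.txt
else
  echo "CUDA toolkit not found"
fi
```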