Hello,
I have downloaded a custom Caffe model, models/bvlc_googlenet, using python ./scripts/download_model_binary.py models/bvlc_googlenet.
Into the Caffe subdirectory models/bvlc_googlenet I have successfully downloaded the pretrained model bvlc_googlenet.caffemodel along with deploy.prototxt.
I copied both files onto the TX2 board and tried to execute nvgstiva-app with a modified .txt config file, using bvlc_googlenet.caffemodel as model-file and deploy.prototxt as proto-file, but execution fails with a segmentation fault.
I have also tried to execute tegra_multimedia_api/samples/backend:
./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --trt-deployfile ../../data/Model/GoogleNet_M/deploy.prototxt --trt-modelfile ../../data/Model/GoogleNet_M/bvlc_googlenet.caffemodel --trt-forcefp32 0 --trt-proc-interval 1 -fps 10
but it hangs in an infinite loop without displaying anything.
How can I test a custom pretrained Caffe model on the TX2? Am I doing something wrong?
Any suggestions appreciated.
Thanks in advance
Regards
Marco Gonnelli
Hi,
Could you verify your model with TensorRT first?
cp -r /usr/src/tensorrt/ .
cd tensorrt/samples/
make
cd ../bin/
./giexec --deploy=/path/to/prototxt --output=/name/of/output
./giexec --deploy=/path/to/prototxt --model=/path/to/caffemodel --output=/name/of/output
Thanks.
Hello AastaLL,
I ran the commands as you suggested:
nvidia@tegra-ubuntu:~/TensorRT/bin$ ./giexec --deploy=/home/nvidia/DeepStream-Samples/Model/GoogleNet/deploy.prototxt --output=prob
deploy: /home/nvidia/DeepStream-Samples/Model/GoogleNet/deploy.prototxt
output: prob
Input "data": 3x224x224
Output "prob": 1000x1x1
name=data, bindingIndex=0, buffers.size()=2
name=prob, bindingIndex=1, buffers.size()=2
Average over 10 runs is 8.93317 ms.
Average over 10 runs is 8.9419 ms.
Average over 10 runs is 8.9824 ms.
Average over 10 runs is 8.93671 ms.
Average over 10 runs is 8.94054 ms.
Average over 10 runs is 8.90284 ms.
Average over 10 runs is 9.05354 ms.
Average over 10 runs is 8.88276 ms.
Average over 10 runs is 8.91054 ms.
Average over 10 runs is 8.91122 ms.
./giexec --deploy=/home/nvidia/DeepStream-Samples/Model/GoogleNet/deploy.prototxt --model=/home/nvidia/DeepStream-Samples/Model/GoogleNet/bvlc_googlenet.caffemodel --output=prob
deploy: /home/nvidia/DeepStream-Samples/Model/GoogleNet/deploy.prototxt
model: /home/nvidia/DeepStream-Samples/Model/GoogleNet/bvlc_googlenet.caffemodel
output: prob
Input "data": 3x224x224
Output "prob": 1000x1x1
name=data, bindingIndex=0, buffers.size()=2
name=prob, bindingIndex=1, buffers.size()=2
Average over 10 runs is 8.96078 ms.
Average over 10 runs is 8.92912 ms.
Average over 10 runs is 8.91075 ms.
Average over 10 runs is 8.93428 ms.
Average over 10 runs is 8.90179 ms.
Average over 10 runs is 8.89913 ms.
Average over 10 runs is 9.02942 ms.
Average over 10 runs is 8.87244 ms.
Average over 10 runs is 8.9003 ms.
Average over 10 runs is 8.90663 ms.
Testing the network with TensorRT seems OK, right?
Any suggestions?
Thanks in advance
Regards
Marco Gonnelli
Hi,
Please remember to update all of the related information for your custom model, for example:
num-classes
model-file
proto-file
labelfile-path
…
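For the bvlc_googlenet model in this thread, for example, the giexec output above already gives you the class count: Output "prob" is 1000x1x1, so num-classes=1000, and labelfile-path needs to point at a file with the matching 1000 ILSVRC class labels (Caffe provides one as data/ilsvrc12/synset_words.txt; whether that file matches your setup is an assumption).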
Thanks.
Hello Aastalll,
Thanks for your reply.
Following your suggestion, I am reading the DeepStream API documentation, in particular the Application Configuration section.
The table there, in particular in the Primary GIE and Secondary GIE Group sections, lists the keys and the range of accepted values for each, but it isn't very clear how each key relates to our network configuration.
Is there more documentation, or are there samples, covering how to modify the nvgstiva-app configuration file for a custom network?
The network we downloaded from the Caffe Model Zoo is GoogLeNet (bvlc_googlenet), trained on the ILSVRC 2012 dataset.
Thanks in advance
Regards
Marco Gonnelli
Hi,
Here is another DeepStream tutorial at GTC2018 for your reference:
http://on-demand.gputechconf.com/gtc/2018/presentation/s81047-introduction-to-deep-stream-sdk.pdf
You should be able to update the configuration for a custom model with this section in our doc:
Application Customization
Custom Open Model
For example:
• model-file=file:///home/nvidia/bvlc_googlenet.caffemodel
• proto-file=file:///home/nvidia/bvlc_googlenet.prototxt
…
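Putting the keys from the earlier list together, a minimal, hypothetical config group for this model could look like the following (the [primary-gie] group name is inferred from the Primary GIE Group section of the doc, and the label file and its file:// form are assumptions to adapt to your setup):
[primary-gie]
model-file=file:///home/nvidia/bvlc_googlenet.caffemodel
proto-file=file:///home/nvidia/bvlc_googlenet.prototxt
labelfile-path=file:///home/nvidia/synset_words.txt
num-classes=1000
Any remaining keys in the Primary GIE Group table would keep their sample values unless your network requires otherwise.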
Please let us know if there is any configuration you don't know how to update.
Thanks.
Hi @m.gonnellli,
I have the same problem. I have a pre-trained Caffe model of YOLOv3, so I have both the .caffemodel and the YOLOv3 .prototxt file. Can you guide me through running it on the TX2 with DeepStream?
Do I need to compile Caffe first?
@Aastalll,
When I run the said commands, it shows the following error:
littro@littro-desktop:~/tensorrt/bin$ ./giexec --deploy=/home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt --output=prob
&&&& RUNNING TensorRT.trtexec # /home/littro/tensorrt/bin/trtexec --deploy=/home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt --output=prob
[I] deploy: /home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt
[I] output: prob
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 3343:20: Message type "ditcaffe.LayerParameter" has no field named "upsample_param".
[E] [TRT] CaffeParser: Could not parse deploy file
[E] Engine could not be created
[E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /home/littro/tensorrt/bin/trtexec --deploy=/home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt --output=prob
littro@littro-desktop:~/tensorrt/bin$ ./giexec --deploy=/home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt --model=/home/littro/tensorrt/samples/sampleGoogleNet/0930_iter_100000.caffemodel --output=prob
&&&& RUNNING TensorRT.trtexec # /home/littro/tensorrt/bin/trtexec --deploy=/home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt --model=/home/littro/tensorrt/samples/sampleGoogleNet/0930_iter_100000.caffemodel --output=prob
[I] deploy: /home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt
[I] model: /home/littro/tensorrt/samples/sampleGoogleNet/0930_iter_100000.caffemodel
[I] output: prob
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 3343:20: Message type "ditcaffe.LayerParameter" has no field named "upsample_param".
[E] [TRT] CaffeParser: Could not parse deploy file
[E] Engine could not be created
[E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /home/littro/tensorrt/bin/trtexec --deploy=/home/littro/tensorrt/samples/sampleGoogleNet/yolov3.prototxt --model=/home/littro/tensorrt/samples/sampleGoogleNet/0930_iter_100000.caffemodel --output=prob