About deploying a Caffe model on the Jetson TX2 using TensorRT

Hi, all,

I am new to the Jetson TX2 and TensorRT.

I went through "Hello AI World" and "Two Days to a Demo (DIGITS)", and now I am trying to deploy a custom Caffe model on the Jetson TX2 using TensorRT.
But I get the following error:

++++++++++++++++++++++++++++++++++++++++++++++++++
Exception Traceback (most recent call last)
in <module>
      1 network = "TextBoxes"
----> 2 net = jetson.inference.imageNet(network)

Exception: jetson.inference -- imageNet invalid built-in network was requested
++++++++++++++++++++++++++++++++++++++++++++++++++
So it looks like it does not support models that are not included in the built-in "networks" list.

Can you please tell me how to deploy my Caffe model with TensorRT? Do I need to add the network name to the source file?
Thank you!

Hi,

Please refer to the following sample:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-custom.md

In order to handle custom model parameters, you need to call "jetson.inference.imageNet(opt.network, sys.argv)" in your Python code instead of "jetson.inference.imageNet(network)", similar to the sample code below:
https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/imagenet-console.py
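The relevant part of that sample looks roughly like this (a sketch only; the image-loading API differs between jetson-inference versions, so check the sample that matches your install):

import argparse
import sys

import jetson.inference
import jetson.utils

# parse the network name; any extra flags (--model, --prototxt, --labels, ...)
# stay in sys.argv and are forwarded to the imageNet constructor below
parser = argparse.ArgumentParser()
parser.add_argument("file_in", type=str, help="input image to classify")
parser.add_argument("--network", type=str, default="googlenet",
                    help="built-in network name, or a custom model via the extra flags")
opt = parser.parse_known_args()[0]

# sys.argv carries the custom-model parameters through to TensorRT
net = jetson.inference.imageNet(opt.network, sys.argv)

img, width, height = jetson.utils.loadImageRGBA(opt.file_in)
class_idx, confidence = net.Classify(img, width, height)
print("class {:d}, confidence {:f}".format(class_idx, confidence))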

Hope this will help you.

Thanks

Hi, SunilJB,

Thank you for your reply.

What I am trying to do here is to convert our detection model to TensorRT format and test its speed on the Jetson TX2 board.

Following your links, I use "jetson.inference.detectNet(opt.network, sys.argv)" for my case (detection rather than classification), as sketched below.
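My script looks roughly like this (a sketch; the model files are passed on the command line, so sys.argv forwards them to detectNet):

import argparse
import sys

import jetson.inference
import jetson.utils

parser = argparse.ArgumentParser()
parser.add_argument("file_in", type=str, help="input image")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2",
                    help="built-in network name or custom model alias")
opt = parser.parse_known_args()[0]

# flags such as --prototxt, --model, --labels, --input_blob,
# --output_cvg and --output_bbox are forwarded via sys.argv
net = jetson.inference.detectNet(opt.network, sys.argv)

img, width, height = jetson.utils.loadImageRGBA(opt.file_in)
detections = net.Detect(img, width, height)
print("detected {:d} objects".format(len(detections)))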

I have tested two public detection models built with Caffe for VOC-dataset detection. I was not able to load either of them into TensorRT.

For example, I am trying to test the TextBoxes model: "https://github.com/MhLiao/TextBoxes". I created a folder "TextBoxes" inside the "data/networks" folder and downloaded the caffemodel, classes.txt and deploy.prototxt into this folder.
Then, I test them on the command line as:
$ ./detectnet-console /home/***/jetson-inference/data/images/cat_2.jpg /home/***/output_0.jpg --prototxt=$NET/deploy.prototxt --model=$NET/TextBoxes_icdar13.caffemodel --labels=$NET/classes.txt

I get the following error info:

detectNet -- loading detection network model from:
-- prototxt     /home/***/jetson-inference/data/networks/TextBoxes/deploy.prototxt
-- model        /home/***/jetson-inference/data/networks/TextBoxes/TextBoxes_icdar13.caffemodel
-- input_blob   'data'
-- output_cvg   'NULL'
-- output_bbox  'softmax'
-- mean_pixel   0.000000
-- mean_binary  NULL
-- class_labels NULL
-- threshold    0.500000
-- batch_size   1

[TRT] TensorRT version 5.1.6
[TRT] loading NVIDIA plugins…
[TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[TRT] Plugin Creator registration succeeded - NMS_TRT
[TRT] Plugin Creator registration succeeded - Reorg_TRT
[TRT] Plugin Creator registration succeeded - Region_TRT
[TRT] Plugin Creator registration succeeded - Clip_TRT
[TRT] Plugin Creator registration succeeded - LReLU_TRT
[TRT] Plugin Creator registration succeeded - PriorBox_TRT
[TRT] Plugin Creator registration succeeded - Normalize_TRT
[TRT] Plugin Creator registration succeeded - RPROI_TRT
[TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/***/jetson-inference/data/networks/TextBoxes/TextBoxes_icdar13.caffemodel.1.1.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading /home/***/jetson-inference/data/networks/TextBoxes/deploy.prototxt /home/***/jetson-inference/data/networks/TextBoxes/TextBoxes_icdar13.caffemodel
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
(the above warning is repeated twelve times in total, once per Flatten layer in the network)
[TRT] mbox_loc: all concat input tensors must have the same dimensions except on the concatenation axis
[TRT] mbox_conf: all concat input tensors must have the same dimensions except on the concatenation axis
Caffe Parser: Invalid axis in softmax layer - Cannot perform softmax along batch size dimension and expects NCHW input. Negative axis is not supported in TensorRT, please use positive axis indexing
error parsing layer type Softmax index 95
[TRT] device GPU, failed to parse caffe network
[TRT] device GPU, failed to load /home/***/jetson-inference/data/networks/TextBoxes/TextBoxes_icdar13.caffemodel
detectNet -- failed to initialize.
detectnet-console: failed to initialize detectNet

I tried to generate the engine file, i.e. TextBoxes_icdar13.caffemodel.1.1.GPU.FP16.engine, with the following code:

import tensorrt.legacy as trt

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

# NOTE: 'prob' is the output blob of classification nets like GoogleNet;
# a detection model such as TextBoxes has different output tops, and the
# names listed here must match tops that exist in deploy.prototxt
OUTPUT_LAYERS = ['prob']
MODEL_PROTOTXT = '/home/***/jetson-inference/data/networks/TextBoxes/deploy.prototxt'
CAFFE_MODEL = '/home/***/jetson-inference/data/networks/TextBoxes/TextBoxes_icdar13.caffemodel'

# parse the Caffe deploy/model pair and build a TensorRT engine
engine = trt.utils.caffe_to_trt_engine(G_LOGGER,
                                       MODEL_PROTOTXT,
                                       CAFFE_MODEL,
                                       1,         # max_batch_size
                                       1 << 20,   # max_workspace_size in bytes
                                       OUTPUT_LAYERS,
                                       trt.infer.DataType.FLOAT)  # FP32 weights

trt.utils.write_engine_to_file("/home/***/gen_TextBoxes.caffemodel.1.1.GPU.FP16.engine", engine)

But it also failed, with the following error:
+++++++++++++++++++++++++++++++++++++++++++++++++
AssertionError                Traceback (most recent call last)
/usr/lib/python3.6/dist-packages/tensorrt/legacy/utils/__init__.py in caffe_to_trt_engine(logger, deploy_file, model_file, max_batch_size, max_workspace_size, output_layers, datatype, plugin_factory, calibrator)
    351     try:
--> 352         assert(blob_name_to_tensor)
    353     except AssertionError:

AssertionError:

During handling of the above exception, another exception occurred:

AssertionError                Traceback (most recent call last)
in <module>
      7                                        1 << 20,
      8                                        OUTPUT_LAYERS,
----> 9                                        trt.infer.DataType.FLOAT)
     10
     11 trt.utils.write_engine_to_file("/home/***/gen_TextBoxes.caffemodel.1.1.GPU.FP16.engine", engine)

/usr/lib/python3.6/dist-packages/tensorrt/legacy/utils/__init__.py in caffe_to_trt_engine(logger, deploy_file, model_file, max_batch_size, max_workspace_size, output_layers, datatype, plugin_factory, calibrator)
    358         filename, line, func, text = tb_info[-1]
    359
--> 360     raise AssertionError('Caffe parsing failed on line {} in statement {}'.format(line, text))
    361
    362     input_dimensions = {}

AssertionError: Caffe parsing failed on line 352 in statement assert(blob_name_to_tensor)
+++++++++++++++++++++++++++++++++++++++++++++++++

Can you give me some hints on how to solve these problems?
Thank you very much!

Hi,
Could you please share the following information so we can better help?
Jetson, OS, and hardware versions
TensorRT version
Caffe version
Python version [if using python]
Any custom scripts/models that are used.

It seems to be an issue with the deploy.prototxt file.
For now, please refer to the steps described here: Release Notes :: NVIDIA Deep Learning TensorRT Documentation
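The "Invalid axis in softmax layer" message in your log means the TensorRT Caffe parser hit a Softmax whose axis is negative or falls on the batch dimension. I have not inspected the TextBoxes prototxt myself, but for reference, in a standard SSD-style deploy.prototxt the confidence softmax with a positive axis looks like this (layer/blob names are from stock SSD, not necessarily TextBoxes):

layer {
  name: "mbox_conf_softmax"
  type: "Softmax"
  bottom: "mbox_conf_reshape"
  top: "mbox_conf_softmax"
  softmax_param {
    axis: 2   # positive axis; the TensorRT Caffe parser rejects negative axes
  }
}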

Thanks

Hi, SunilJB,

Thank you for your reply.

I flashed the Jetson TX2 with the latest SDK Manager, i.e. "sdkmanager_0.9.14-4964_amd64.deb".
Jetson: Jetson TX2, Pascal GPU with 256 CUDA cores; 64-bit NVIDIA Denver and ARM Cortex-A57 CPUs; 8 GB LPDDR4 memory; 32 GB eMMC 5.1 flash storage; graphics: NVIDIA Tegra X2 (nvgpu), integrated.

OS: Ubuntu 18.04 LTS, 64-bit

TensorRT version: 5.1.6.1-1+cuda10.0 

Caffe version: I don't think I installed Caffe, but I installed PyTorch for Python 3 when I installed JetPack. The PyTorch version is 1.1.0.

Python version: Python 3.6.8

Custom scripts: as I explained in my last message, I am trying to test the TextBoxes model: "https://github.com/MhLiao/TextBoxes". I created a folder "TextBoxes" inside the "data/networks" folder and downloaded the caffemodel, classes.txt and deploy.prototxt into this folder.

Then, I test them on the command line as:

$ ./detectnet-console /home/***/jetson-inference/data/images/cat_2.jpg /home/***/output_0.jpg --prototxt=$NET/deploy.prototxt --model=$NET/TextBoxes_icdar13.caffemodel --labels=$NET/classes.txt

Thanks

Hi, guys,

Do you have any updates on my question?
Thank you!

Hi,

Sorry for the delayed response.
Can you try converting your model with the "trtexec" tool, to check the model's compatibility on the Jetson?

https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#trtexec
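For a Caffe model, the invocation would be something along these lines (a sketch only; <output_blob> must be replaced with an actual output top from your deploy.prototxt):

$ trtexec --deploy=$NET/deploy.prototxt --model=$NET/TextBoxes_icdar13.caffemodel --output=<output_blob> --fp16

If trtexec can build an engine from the prototxt/caffemodel pair, the problem is on the application side; if it fails, it should report which layer the parser rejects.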

Thanks