Error building tf_to_trt_image_classification with CMake (Jetson TX2, Ubuntu 16.04, CUDA 9.0, cuDNN 7.1.5)

Below is the error output:

/home/nvidia/tf_to_trt_image_classification/src/uff_to_plan.cpp: In function ‘int main(int, char**)’:
/home/nvidia/tf_to_trt_image_classification/src/uff_to_plan.cpp:71:79: error: no matching function for call to ‘nvuffparser::IUffParser::registerInput(const char*, nvinfer1::DimsCHW)’
parser->registerInput(inputName.c_str(), DimsCHW(3, inputHeight, inputWidth));
^
In file included from /home/nvidia/tf_to_trt_image_classification/src/uff_to_plan.cpp:12:0:
/usr/include/aarch64-linux-gnu/NvUffParser.h:182:18: note: candidate: virtual bool nvuffparser::IUffParser::registerInput(const char*, nvinfer1::Dims, nvuffparser::UffInputOrder)
virtual bool registerInput(const char* inputName, nvinfer1::Dims inputDims, UffInputOrder inputOrder) = 0;
^
/usr/include/aarch64-linux-gnu/NvUffParser.h:182:18: note: candidate expects 3 arguments, 2 provided
src/CMakeFiles/uff_to_plan.dir/build.make:62: recipe for target ‘src/CMakeFiles/uff_to_plan.dir/uff_to_plan.cpp.o’ failed
make[2]: *** [src/CMakeFiles/uff_to_plan.dir/uff_to_plan.cpp.o] Error 1
CMakeFiles/Makefile2:160: recipe for target ‘src/CMakeFiles/uff_to_plan.dir/all’ failed
make[1]: *** [src/CMakeFiles/uff_to_plan.dir/all] Error 2
Makefile:83: recipe for target ‘all’ failed
make: *** [all] Error 2


Can anyone advise here?

Do I need to manually download and install TensorRT? (Currently I have the version that comes with JetPack 3.3.)

If I do need to install a specific TensorRT version, which one should I download and install? Please advise.

Hello,

You do not need to separately download and install TensorRT; it comes with JetPack 3.3.

Please refer to https://docs.nvidia.com/jetpack-l4t/index.html#jetpack/3.3/install.htm

Hi,

I have not installed TensorRT manually, but when I try to use the UFF parser it throws the error above. I then followed the link below to create a frozen graph from my TensorFlow model and then a TensorRT engine that I can use.

Can you please advise whether this is OK to follow?

(My intention is to create a frozen graph from a model trained via DIGITS (which has .ckpt, .index, and .meta files), use it to create a TensorRT engine, and run it on the Jetson TX2 to make use of its power.)

Hello,

The tf_to_trt_image_classification repo uses TF-TRT (not UFF) to convert a TensorFlow graph to TensorRT.
With TF-TRT you don't need to convert the graph to UFF; TF-TRT provides a conversion function that you should use, and the example scripts in that repo use that function.

The tf_to_trt_image_classification I used with JetPack 3.2 - TensorRT 3 has a frozenToPlan routine in convert_plan.py. I believe it uses uff. Is there a new version released with JetPack 3.3 - TensorRT 4?

If so, where is it located?

I am running on a TX2.
I have a retrained model that works very well with TensorRT 3.
I am trying to upgrade to TensorRT 4.
I am told it is much faster.

Hi dbusby,

I believe TensorRT 4 comes by default with JetPack 3.3 (the latest one).

I just checked using the command: dpkg -l | grep nvinfer
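
As another quick check, the TensorRT headers define version macros, so a small C++ program can print the installed version. This is only a sketch; NvInfer.h pulls in the CUDA runtime headers, so pass the CUDA include path (e.g. -I/usr/local/cuda/include) when compiling:

#include <NvInfer.h>  // defines NV_TENSORRT_MAJOR / MINOR / PATCH
#include <cstdio>

int main()
{
    // On a JetPack 3.3 flash this should report a 4.x version.
    std::printf("TensorRT %d.%d.%d\n",
                NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH);
    return 0;
}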

I am still stuck in the middle of this process of using TensorRT. I have converted my model to UFF and moved it to the Jetson TX2, but I am not sure how I should use it, since the Jetson TX2 does not support the TensorRT Python API.

I have checked the jetson-inference GitHub repo, but I am still confused about how to write the C++ wrapper.

Do you have any idea how to use it? If you have any GitHub repo or sample, it would be helpful.

Thanks

Hi chandrakanta,

Using JetPack 3.2 and TensorRT 3, I took an existing Inception-v3 image classifier and retrained it on my own set of images. The resulting model was converted to a plan using frozenToPlan in convert_plan.py.

First I used classify_image in tf_to_trt_image_classification/examples to verify the result. Then I wrote a version that classified an entire directory.

The results are excellent.

I am trying to do the same with JetPack 3.3 and TensorRT 4, but frozenToPlan fails.

I am still hoping for a response about where to locate a tf_to_trt_image_classification repo for JetPack 3.3 - TensorRT 4, as the ones I find use UFF.

Hi dbusby,

I am on the same page as you: I have converted my model to UFF and am checking how to use it with TensorRT 4 (JetPack 3.3) on the Jetson.

I got a response from a moderator that sampleUffMNIST.cpp is a sample for converting a UFF model to a TensorRT engine.
I am still checking on this; I am not good at C++ at all. You can have a look at

tensorrt/samples/sampleUffMNIST/sampleUffMNIST.cpp

If I find anything I will update here. Thanks.
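
For reference, the core of that sample boils down to roughly the following, using the TensorRT 4 C++ API. This is only a sketch: the file names (model.uff, model.plan), the tensor names ("input", "output"), and the 3x224x224 / batch-size-1 settings are placeholders for your own model's values, not values taken from the sample.

#include <fstream>
#include <iostream>
#include <NvInfer.h>
#include <NvUffParser.h>

using namespace nvinfer1;
using namespace nvuffparser;

// Minimal logger the builder requires.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Tell the parser the graph's input/output (names and dims are placeholders).
    IUffParser* parser = createUffParser();
    parser->registerInput("input", DimsCHW(3, 224, 224), UffInputOrder::kNCHW);
    parser->registerOutput("output");

    // Parse the UFF file into a TensorRT network definition.
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    if (!parser->parse("model.uff", *network, DataType::kFLOAT))
    {
        std::cerr << "failed to parse UFF file" << std::endl;
        return 1;
    }

    // Build the engine and serialize it to a plan file for later reuse.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 26);
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    IHostMemory* plan = engine->serialize();
    std::ofstream out("model.plan", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());

    // TensorRT objects are released with destroy(), not delete.
    plan->destroy();
    engine->destroy();
    network->destroy();
    builder->destroy();
    parser->destroy();
    return 0;
}

Linking against -lnvinfer and -lnvparsers should be enough to build it.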

Hi, I hit the same error as you described. How did you solve it eventually?

Has this error been solved? I hit the same error.

Hi,
I solved this problem by updating the uff package from the TensorRT 3 version to the TensorRT 4 version:

  1. sudo pip3 uninstall uff
  2. Download the TensorRT-4.0.1.6 tar package
  3. sudo pip3 install TensorRT-4.0.1.6/uff/uff-0.4.0-py2.py3-none-any.whl

The problem is solved. But when you run:
python scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float

It will show another error:
import graphsurgeon as gs
ImportError: No module named ‘graphsurgeon’

Then I changed uff back to the TensorRT 3.0.4 version, and convert_plan runs fine.

I am wondering what I am actually doing here. Am I correct?

Thank you,
Chao

Hi,
I hit the same error, but with the command “make” while building the tf_to_trt_image_classification repository (I am just trying to follow the procedure).

Have you all managed to build the repo before moving forward?

By the way, I am on a Jetson TX2 with JetPack 3.3, and I am really struggling with TensorRT: I cannot find any clear procedure for starting from a .pb file, converting it to a plan, and then running inference with the TensorRT engine.

Thanks in advance for your help
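
For what it is worth, the inference side reduces to roughly the sketch below (TensorRT 4 C++ API again; the plan file name, the binding order, and the 3x224x224 input / 1000-class output sizes are assumptions, not values from the repo):

#include <cuda_runtime_api.h>
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <vector>

using namespace nvinfer1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Read the serialized engine (plan file) from disk.
    std::ifstream file("model.plan", std::ios::binary);
    std::vector<char> plan((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // Deserialize it into an engine and create an execution context.
    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine =
        runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
    IExecutionContext* context = engine->createExecutionContext();

    // One device buffer per binding; input assumed at index 0, output at 1
    // (real code should look indices up with engine->getBindingIndex(name)).
    // Sizes assume a 3x224x224 float input and a 1000-class float output.
    void* buffers[2];
    cudaMalloc(&buffers[0], 3 * 224 * 224 * sizeof(float));
    cudaMalloc(&buffers[1], 1000 * sizeof(float));

    // cudaMemcpy the preprocessed image into buffers[0] here, then run
    // synchronously with batch size 1, and copy buffers[1] back out.
    context->execute(1, buffers);

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}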

Some news for those interested:
https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification/issues/30
It fixed my error.
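
For anyone who cannot open the issue: the change it discusses is essentially the one the compiler's candidate note above points to, i.e. adding the third UffInputOrder argument to the registerInput call in src/uff_to_plan.cpp (kNCHW is an assumption that matches the channel-first DimsCHW dimensions):

// TensorRT 3 took two arguments; TensorRT 4 added a UffInputOrder.
// kNCHW is assumed here because DimsCHW(3, H, W) is channel-first.
parser->registerInput(inputName.c_str(),
                      DimsCHW(3, inputHeight, inputWidth),
                      nvuffparser::UffInputOrder::kNCHW);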

Now when I try to use the convert_plan.py script, I am facing this error:

Using output node .
Converting to UFF graph
Traceback (most recent call last):
  File "scripts/convert_plan.py", line 71, in <module>
    data_type
  File "scripts/convert_plan.py", line 22, in frozenToPlan
    text=False,
  File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 53, in convert_tf2uff_node
    raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
NameError: name 'UffException' is not defined

I should mention that I am using a personal .pb file trained on my host computer with TF 1.9.0, and on the TX2 I have the official TensorFlow GPU build for the TX2 (which is 1.9.0 too); I don't know if that could be a problem.

Thanks