Hi,
I have not installed TensorRT manually, but when I try to use the UFF parser it throws an error. I then followed the link below to create a frozen graph from my TensorFlow model and convert it to a TRT engine that I can use.
Can you please confirm whether this is OK to follow?
(My intention is to create a frozen graph from a model trained via DIGITS (which gives me .ckpt, .index, and .meta files), use it to build a TRT engine, and run that on the Jetson TX2 to take advantage of its power.)
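The checkpoint-to-frozen-graph step described above can be sketched as follows. This is a minimal sketch assuming the TF 1.x API; the paths and the output node name are placeholders, not taken from the DIGITS model, and the freeze would be run on the training machine.

```python
# Hedged sketch: freeze a TF 1.x checkpoint (.meta/.index/.ckpt) into a
# single .pb GraphDef. The paths and output node name below are
# placeholders -- replace them with your model's actual values.
HAVE_TF1 = False
try:
    import tensorflow as tf
    HAVE_TF1 = tf.__version__.startswith("1.")
except ImportError:
    pass

def freeze_checkpoint(meta_path, ckpt_prefix, output_nodes, out_pb):
    """Restore the checkpoint and bake the variables into constants."""
    saver = tf.train.import_meta_graph(meta_path)
    with tf.Session() as sess:
        saver.restore(sess, ckpt_prefix)
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_nodes)
    with tf.gfile.GFile(out_pb, "wb") as f:
        f.write(frozen.SerializeToString())

if HAVE_TF1:
    # Run on the training machine, e.g.:
    # freeze_checkpoint("model.meta", "model.ckpt", ["softmax"], "frozen.pb")
    pass
```

The resulting frozen.pb is what the conversion scripts discussed in this thread take as input.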
The tf_to_trt_image_classification repo uses TF-TRT (not UFF) to convert a TF graph to TRT.
With TF-TRT you don't need to convert the graph to UFF. TF-TRT provides a conversion function that you should use, and the example scripts in that repo use that function.
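For context, the TF-TRT conversion function in question is create_inference_graph from the contrib module in TF 1.x (the API generation that shipped alongside JetPack 3.3). A hedged sketch, with the frozen-graph path and output node as placeholders and the import guarded so it degrades gracefully off-device:

```python
# Hedged sketch of the TF-TRT path: there is no UFF step; TensorRT-compatible
# subgraphs are rewritten inside the TensorFlow graph itself.
# Requires a TF 1.x build with TensorRT support (as on a Jetson).
HAVE_TFTRT = False
try:
    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt
    HAVE_TFTRT = True
except ImportError:
    pass

def convert_with_tftrt(frozen_pb, output_nodes):
    """Load a frozen GraphDef and let TF-TRT optimize it."""
    with tf.gfile.GFile(frozen_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=output_nodes,            # e.g. ["InceptionV1/Logits/SpatialSqueeze"]
        max_batch_size=1,
        max_workspace_size_bytes=1 << 25,
        precision_mode="FP16")           # or "FP32" / "INT8"

if HAVE_TFTRT:
    pass  # e.g. convert_with_tftrt("frozen.pb", ["logits"]) on the device
```

The returned graph is a regular TensorFlow GraphDef that you run with a normal TF session; unsupported ops stay in TensorFlow rather than failing the conversion.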
The tf_to_trt_image_classification repo I used with JetPack 3.2 / TensorRT 3 has a frozenToPlan routine in convert_plan.py, which I believe uses UFF. Has a new version been released for JetPack 3.3 / TensorRT 4?
If so, where is it located?
I am running on a TX2.
I have a retrained model that works very well with TensorRT 3.
I am trying to upgrade to TensorRT 4.
I am told it is much faster.
I guess TensorRT 4 comes by default with JetPack 3.3 (the latest one).
I just checked using the command: dpkg -l | grep nvinfer
I am still stuck in the middle of this process of using TensorRT. I have converted my model to UFF and moved it to the Jetson TX2, but I'm not sure how I should use it, since the Jetson TX2 does not support the Python API.
I have checked the jetson-inference GitHub repo, but I am still confused about how to write the C++ wrapper.
Do you have any idea how to use it? If you have any GitHub repo or sample, it would be helpful.
Using JetPack 3.2 and TensorRT 3, I took an existing Inception-v3 image classifier and retrained it on my own set of images. The resulting model was converted to a plan using frozenToPlan in convert_plan.py.
First I used classify_image in tf_to_trt_image_classification/examples to verify the results. Then I wrote a version that classifies an entire directory.
The results are excellent.
I am trying to do the same with JetPack 3.3 and TensorRT 4, but frozenToPlan fails.
I am still hoping for a response about where to find a tf_to_trt_image_classification repo for JetPack 3.3 / TensorRT 4, as the ones I find use UFF.
I am on the same page as you: I have converted my model to UFF and am checking how to use it with TensorRT 4 (JetPack 3.3) on the Jetson.
I got a response from a moderator that there is sampleUffMNIST.cpp, which is a sample example of converting to a TensorRT engine.
I am still checking on this; I'm not at all good at C++. You can have a look at
The problem is solved. But when you run:
python scripts/convert_plan.py data/frozen_graphs/inception_v1.pb data/plans/inception_v1.plan input 224 224 InceptionV1/Logits/SpatialSqueeze 1 0 float
It will show another error:
Error:
import graphsurgeon as gs
ImportError: No module named 'graphsurgeon'
Then I changed the uff package back to the TensorRT 3.0.4 version, and convert_plan went well.
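For anyone else decoding that convert_plan.py command line, here is a sketch of how its nine positional arguments appear to map. The names below are illustrative, inferred from the command above rather than taken from the script itself:

```python
# Hypothetical labels for convert_plan.py's positional arguments,
# inferred from the example invocation in this thread.
ARG_NAMES = [
    "frozen_graph",        # data/frozen_graphs/inception_v1.pb
    "plan_file",           # data/plans/inception_v1.plan
    "input_node",          # input
    "input_height",        # 224
    "input_width",         # 224
    "output_node",         # InceptionV1/Logits/SpatialSqueeze
    "max_batch_size",      # 1
    "max_workspace_size",  # 0 (bytes of GPU workspace for the builder)
    "data_type",           # float or half
]

def parse_convert_plan_args(argv):
    """Pair the nine positional arguments with their meanings."""
    if len(argv) != len(ARG_NAMES):
        raise ValueError("expected %d arguments, got %d"
                         % (len(ARG_NAMES), len(argv)))
    args = dict(zip(ARG_NAMES, argv))
    if not args["output_node"]:
        # A wrong or empty output node is what makes the UFF converter
        # fail with "<name> was not found in the graph".
        raise ValueError("output_node must not be empty")
    return args

example = ["data/frozen_graphs/inception_v1.pb", "data/plans/inception_v1.plan",
           "input", "224", "224", "InceptionV1/Logits/SpatialSqueeze",
           "1", "0", "float"]
print(parse_convert_plan_args(example)["output_node"])
```

The key point is that the input and output node names must match the names actually present in the frozen graph, or the UFF converter rejects them.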
Hi,
I ran into the same error, but with the "make" command while building the tf_to_trt_image_classification repository… (I'm just trying to follow the procedure.)
Have you all managed to build the repo before moving forward?
By the way, I'm on a Jetson TX2 with JetPack 3.3, and I'm really struggling with TensorRT; I can't find any clear procedure for going from a .pb file to a plan and then doing inference with the TRT engine…
Now when I try to use the convert_plan.py script I’m facing this error:
Using output node .
Converting to UFF graph
Traceback (most recent call last):
File "scripts/convert_plan.py", line 71, in <module>
data_type
File "scripts/convert_plan.py", line 22, in frozenToPlan
text=False,
File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
name="main")
File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
uff_graph, input_replacements)
File "/home/nvidia/.virtualenvs/cv/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 53, in convert_tf2uff_node
raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
NameError: name 'UffException' is not defined
I should mention that I am using my own .pb file trained on my host computer with TF 1.9.0, and that I have the official TF-GPU build provided for the TX2 (which is 1.9.0 too). I don't know if that could be the problem.