TensorRT 3.0 RC now available with support for TensorFlow

Google released TensorFlow Lite; any comments on this product? I think TensorRT and TensorFlow Lite have similar functions.

Hello everyone,

I am currently trying to port a SegNet from PyTorch to TensorRT.
Since I want to run it on the Jetson, I have to use the C++ library for that and am just setting the raw weights.
One issue I am having is that the IReduceLayer does not seem to be available in TensorRT 3, so I cannot build an ArgMax.
My question is: is there an alternative way to build an ArgMax layer?

Help would be highly appreciated :)

Thank you in advance,

Jendrik

Hello everyone,

I am having a problem converting a TensorFlow model to a UFF model. I tried to convert the TensorFlow version of YOLOv2 available at this link: [url]https://github.com/thtrieu/darkflow[/url].

And I am getting this error:
[url]https://pastebin.com/thZhUtp5[/url]

Hello everyone,

I noticed that the tar package for TensorRT 3.0 RC on Jetson platforms doesn't have the Python API; most of the directories seem to be empty. Since this is the case, what is the release on the Jetson platforms for? I must be missing something important. Please let me know. Thanks!

Hi:

I looked at the release notes; they say that “The Inception v4 network models are not supported with this Release Candidate with FP16 on V100.” I am wondering if Inception-ResNet v1 and v2 are supported with this release.

Thanks

Hi Jendrik, using the TensorRT plugin API, you can implement custom layers. Here is an example of using TensorRT’s plugin API:

https://github.com/AastaNV/Face-Recognition

You may also be interested to know that FCN-Alexnet and similar derivative segmentation networks already work with TensorRT, see:

https://github.com/dusty-nv/jetson-inference#image-segmentation-with-segnet
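
For a rough idea of what a custom ArgMax layer could look like, below is a minimal, untested sketch against the TensorRT 3 IPlugin interface. The class name, the assumption of a single CHW float input, and the naive host-side argmax in enqueue() are only illustrative placeholders; a real plugin would launch a CUDA kernel, and the Face-Recognition repo above shows a complete plugin implementation.

[code]
// Hypothetical sketch of an ArgMax custom layer for TensorRT 3 (not tested).
#include <NvInfer.h>
#include <cuda_runtime.h>
#include <cstring>
#include <vector>

using namespace nvinfer1;

class ArgMaxPlugin : public IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        // Collapse the channel dimension: C x H x W in, 1 x H x W out.
        return DimsCHW(1, inputs[0].d[1], inputs[0].d[2]);
    }

    void configure(const Dims* inputDims, int nbInputs, const Dims* outputDims,
                   int nbOutputs, int maxBatchSize) override
    {
        mInputDims = inputDims[0];
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Naive reference version: copy to host, take the channel-wise argmax,
        // copy back. A real plugin would do this in a CUDA kernel on the GPU.
        const int channels = mInputDims.d[0];
        const int spatial  = mInputDims.d[1] * mInputDims.d[2];
        std::vector<float> in(batchSize * channels * spatial);
        std::vector<float> out(batchSize * spatial);
        cudaMemcpyAsync(in.data(), inputs[0], in.size() * sizeof(float),
                        cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);
        for (int n = 0; n < batchSize; ++n)
            for (int i = 0; i < spatial; ++i)
            {
                int best = 0;
                for (int c = 1; c < channels; ++c)
                    if (in[(n * channels + c) * spatial + i] > in[(n * channels + best) * spatial + i])
                        best = c;
                out[n * spatial + i] = static_cast<float>(best);
            }
        cudaMemcpyAsync(outputs[0], out.data(), out.size() * sizeof(float),
                        cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);
        return 0;
    }

    // Only the input dimensions need to survive engine serialization here.
    size_t getSerializationSize() override { return sizeof(Dims); }
    void serialize(void* buffer) override { std::memcpy(buffer, &mInputDims, sizeof(Dims)); }

private:
    Dims mInputDims;
};
[/code]

You would then insert an instance of it into the network definition in place of the missing reduce/ArgMax layer (the C++ API has an addPlugin() call on the network definition for this, if I remember correctly).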

Hi damilola_aleshinloye, the TensorRT Python API is not currently supported on Jetson because of a dependency on pyCUDA and Anaconda
(conda redistributable not available for aarch64). Please use the x86_64 tarball on PC for the Python API.

I would recommend exporting to UFF (i.e. from TensorFlow) on the PC and then loading it on the Jetson using TensorRT’s new NvUffImporter C++ class.
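
For anyone looking for a starting point, a minimal sketch of that Jetson-side flow might look roughly like the following. The input/output names, dimensions, and file name are placeholders, and the exact UFF parser signatures may differ between TensorRT releases, so treat sampleUffMNIST as the authoritative reference.

[code]
// Rough sketch: build a TensorRT engine on the Jetson from a UFF file exported on the PC.
#include <NvInfer.h>
#include <NvUffParser.h>
#include <iostream>

using namespace nvinfer1;

class Logger : public ILogger
{
public:
    void log(Severity severity, const char* msg) override { std::cout << msg << std::endl; }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Register the graph's input/output bindings, then parse the UFF file.
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();
    parser->registerInput("input", DimsCHW(3, 224, 224));   // placeholder name/shape
    parser->registerOutput("output");                        // placeholder name
    parser->parse("model.uff", *network, DataType::kFLOAT);

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    parser->destroy();

    // ... create an execution context from the engine and run inference ...

    engine->destroy();
    builder->destroy();
    return 0;
}
[/code]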

Yes, I believe they are, and the restriction for v4 seems specific to the Volta V100 discrete GPU. If you have issues with v1 and v2 on Jetson, let us know.

Hi ngsong, TensorRT is the optimal inferencing accelerator for GPU and Jetson. It is how you can achieve the best performance on NVIDIA hardware.

I also like that the TensorRT UFF roadmap supports the major frameworks, not just TensorFlow - including Caffe, Torch/PyTorch, and others - so you can use one optimal inferencing backend to deploy models trained in a variety of host frameworks.

Hi again:

Thanks for your confirmation! I tried to convert an Inception-ResNet v1 model to UFF. First of all, I froze the output node so the graph becomes a protobuf.

========================================================================================
import uff
uff_model = uff.from_tensorflow_frozen_model("…/models/facenet/test/20170511-185253.pb", ["embeddings"])

Bug:

Using output node embeddings
Converting to UFF graph
Traceback (most recent call last):
  File "pbTOuff.py", line 2, in <module>
    uff_model = uff.from_tensorflow_frozen_model("…/models/facenet/test/20170511-185253.pb", ["embeddings"])
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 32, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter_functions.py", line 371, in convert_sum
    return _reduce_helper(name, tf_node, inputs, uff_graph, func="sum", **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter_functions.py", line 363, in _reduce_helper
    raise ValueError("keep_dims not supported")
ValueError: keep_dims not supported

Is there any way I can get rid of this keep_dims error?

Thanks a lot

I solved it by commenting out those three lines about keep_dims.

Hello,

@dusty_nv, thanks for the quick reply, I am currently trying your suggestion. Thanks!

OK gotcha siyuew, thanks for posting your workaround.

Hi dusty_nv,

thank you very much for the pointer to the custom layers.
I will try to figure that one out :)
I am aware of your GitHub repo and that there are SegNets on it; it has been a great help for figuring out a lot of things about TensorRT, so definitely a big thanks for it :)
However, I want to port a SegNet version based on YOLOv2 with some extras here and there, and I want to figure out TensorRT, which is why I am trying a lot of things on my own :)

Cheers,

Jendrik

Hi Jendrik, I see, that makes perfect sense. Let us know if you get the new version of SegNet ported!

The JetPack 3.2 Developer Preview has been released for Jetson TX2 with support for TensorRT 3.0 RC2.

Please use JetPack 3.2 for automated installation of TensorRT 3.0 RC2, as opposed to the method here.

@siyuew, @dusty_nv, I am facing the same "keep_dims not supported" ValueError. Can you please elaborate on the workaround? Which three lines in which file did you comment out, exactly, to get rid of this error?

Thanks a lot in advance.

Hi, aboggaram

Could you open a new topic for your question?
This will help other users find the information they need more efficiently.

Thanks.

Sure, I have posted my request here: [url]Help needed while using Tensor RT 3 to create inference engine for facenet model. - Jetson TX2 - NVIDIA Developer Forums[/url]

Hi all,

I am very new to Jetson and TensorRT. I have installed everything on my host PC and connected my Jetson to it perfectly.

Suppose I have a Caffe/TensorFlow model that I want to use with TensorRT 3.0 on the Jetson TX2 for optimized inference time. With TensorRT 3.0, the Python interface is available to export the model to TensorRT, but how should I then use this model on the Jetson?

Can anyone please tell me the steps for doing this?

Hi aakashnain, for loading a Caffe model please use NvCaffeParser.h (sample here). For a TensorFlow model, please use NvUffParser.h. See sampleUffMNIST for an example of loading models with the NvUffParser class (it should be located under /usr/src/tensorrt/samples).
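
For a quick orientation, a rough sketch of the Caffe import path might look something like this (the file names and the output blob name are placeholders; the samples under /usr/src/tensorrt/samples show the complete, authoritative version):

[code]
// Rough sketch: import a Caffe model with NvCaffeParser and build an engine (not tested).
#include <NvInfer.h>
#include <NvCaffeParser.h>

using namespace nvinfer1;
using namespace nvcaffeparser1;

ICudaEngine* caffeToEngine(IBuilder* builder)
{
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // parse() returns a name->tensor map so the desired outputs can be marked.
    const IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("prob"));   // placeholder output blob name

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    parser->destroy();
    network->destroy();
    return engine;
}
[/code]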