TensorRT 3 for TensorFlow support

Hi,

I am currently working on a TensorFlow python algorithm of object detection.
My algorithm is compatible only with TensorFlow 1.0.
It takes 17 seconds to detect objects in one frame, which makes my algorithm very slow.

From the link below -
https://devtalk.nvidia.com/default/topic/1019621/correct-way-of-deploying-a-tensorflow-model-on-tx2-/

it appears that TensorRT 3 helps speed up TensorFlow algorithms.

I have JetPack 3.0, which ships TensorRT 2.1, installed on my Jetson TX2 board.
Can I upgrade TensorRT 2.1 to TensorRT 3 without installing JetPack 3.2?

Kindly help.

Thanks

Hi,

TensorRT 3 doesn’t support TensorFlow versions below 1.3.
This is mentioned in the release candidate documentation.

Best Regards
Krishna

Thanks for the response Krishna.
Yes I agree.

I will port my algorithm from TensorFlow 1.0 to 1.3.

My question is how to update TensorRT from version 2.1 to 3.0 within JetPack 3.0.

Kindly help.

Thanks

Hi, I don’t know either. I made a fresh installation instead of an upgrade.

Have you checked the link below?

https://devtalk.nvidia.com/default/topic/1027301/jetson-tx2/jetpack-3-2-mdash-l4t-r28-2-developer-preview-for-jetson-tx2/

You can just select custom installation and choose ‘no action’ on everything but TensorRT 3. That should work.

Be warned, though: there is no TensorRT Python support on the Jetson TX platforms.

Hi,

Some clarifications here:

1. We expect users to use TensorFlow 1.3.

2. The JetPack 3.2 package is built with CUDA 9.0.
If you prefer the JetPack 3.1 environment, please upgrade to TensorRT 3 with this package:
https://developer.nvidia.com/nvidia-tensorrt3rc-download

3. On Jetson, only the C++-based UFF parser is available. Please convert your TensorFlow model to UFF format first.

Thanks.

Is there a more up-to-date version of the deb install package linked here: https://developer.nvidia.com/nvidia-tensorrt3rc-download

I think I need UFF 0.2 for my TensorFlow model, and I believe that package only includes UFF 0.1.

For context: I’ve trained a model in TF and converted it to UFF using the TensorRT 3 Python API on a compatible machine. When running the UFF model on a TX1 with TensorRT 3.0 RC (UFF 0.1), the output is incorrect. The UFF model works properly with the TensorRT 3 Python API on the machine that generated it. Please advise.
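For anyone following along, the host-side conversion step (done on the x86 machine, not the Jetson) looks roughly like the sketch below. It assumes the `uff` Python package that ships with the x86 TensorRT 3 release; the frozen-graph path and output node names are placeholders specific to your model:

```python
# Host-side sketch: convert a frozen TensorFlow graph (.pb) to UFF.
# Assumes an x86 machine with TensorRT 3 and its `uff` Python package;
# the Jetson itself has no Python UFF/TensorRT support.
try:
    import uff  # ships with the x86 TensorRT 3 package
except ImportError:  # not available on Jetson or plain Python installs
    uff = None

def convert_frozen_graph(pb_path, output_nodes, uff_path):
    """Serialize a frozen GraphDef to a .uff file for the C++ parser."""
    if uff is None:
        raise RuntimeError("The `uff` converter is not installed; "
                           "run this on the x86 TensorRT 3 host.")
    # Output node names are model-specific placeholders.
    return uff.from_tensorflow_frozen_model(
        pb_path, output_nodes, output_filename=uff_path)

# Example call (placeholder names):
# convert_frozen_graph("frozen_model.pb", ["detection_out"], "model.uff")
```

The resulting `.uff` file is what the C++ UFF parser on the Jetson consumes.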

Hi,

Currently, TensorRT 3.0 GA is only available for x86 Linux users; the download link is:
https://developer.nvidia.com/nvidia-tensorrt-download

Thanks.