TensorFlow to UFF conversion: RNN

Hello, I think I already know the answer, but I wanted clarification.

The TensorRT docs say that TensorRT supports many ops, including RNNs (LSTM, GRU, RNN).

However, the TensorFlow conversion section does not list RNN as a supported TF op type. So if I understand correctly (and this would explain the issue I am seeing when trying to convert my RNN TF model to UFF), TF RNNs are not supported for conversion to UFF at this time?

Is it possible to use RNNs from TensorFlow at all?
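
For context, the conversion I'm attempting looks roughly like this (the file path and output node name below are placeholders, not my actual graph):

[code]
import uff

# Placeholder names; the real frozen graph contains tf.nn.dynamic_rnn / LSTM ops.
uff_model = uff.from_tensorflow_frozen_model(
    "frozen_rnn_model.pb",   # frozen TF graph (hypothetical path)
    ["rnn_output"])          # output node name (hypothetical)
[/code]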

Thanks for clarifying everything for me,

Andy

Hi,

Yes, your understanding is correct. Currently, TensorFlow RNN is not supported for conversion to UFF in TensorRT.

An alternative is to handle the iterative loop on your own.
Here is a good example for your reference:
[url]https://github.com/NVIDIA-Jetson/JEP_ChatBot/blob/master/src/tensorNet.cpp#L124[/url]
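
A rough sketch of the idea in Python (run_cell_step is a placeholder for your own TensorRT single-step inference call, not actual ChatBot code):

[code]
import numpy as np

def run_cell_step(context, x_t, h_prev):
    # Placeholder: copy x_t and h_prev into the engine's input bindings,
    # execute a single-timestep engine, and return the new hidden state.
    raise NotImplementedError

def run_sequence(context, inputs, hidden_size):
    # Drive the recurrence from host code: one engine execution per time step.
    h = np.zeros(hidden_size, dtype=np.float32)
    for x_t in inputs:
        h = run_cell_step(context, x_t, h)
    return h
[/code]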

Thanks.

Is there a timeline/estimate for TensorFlow RNN support in TensorRT?

Also, are there other examples available? I do not quite see how that code handles the RNN.
Is it instead possible to write a custom API hook for the TensorFlow RNN?

In that same project, https://github.com/NVIDIA-Jetson/JEP_ChatBot/blob/master/src/tf_to_uff/model.py

it looks like an LSTM has been explicitly written out (an encoder and a decoder). Would writing my RNNs like this allow me to use the TensorRT conversion tool to convert the TF model to UFF?
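
For example, something along these lines, where the cell is written with plain matmul/sigmoid/tanh ops instead of tf.nn.dynamic_rnn (just a rough sketch of what I mean; the sizes and gate ordering are made up):

[code]
import tensorflow as tf

def lstm_step(x_t, h_prev, c_prev, w, b):
    # One LSTM step using only basic ops, so no TF loop/RNN constructs
    # end up in the graph.
    z = tf.matmul(tf.concat([x_t, h_prev], axis=1), w) + b
    i, f, o, g = tf.split(z, 4, axis=1)
    c = tf.sigmoid(f) * c_prev + tf.sigmoid(i) * tf.tanh(g)
    h = tf.sigmoid(o) * tf.tanh(c)
    return h, c

# Unroll a fixed number of time steps explicitly.
xs = [tf.placeholder(tf.float32, [1, 64], name="x_%d" % t) for t in range(10)]
w = tf.get_variable("w", [64 + 128, 4 * 128])
b = tf.get_variable("b", [4 * 128])
h = tf.zeros([1, 128])
c = tf.zeros([1, 128])
for x_t in xs:
    h, c = lstm_step(x_t, h, c, w, b)
[/code]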

I’m still not 100% clear on how to use TensorFlow RNNs in TensorRT.

Google recently announced TensorRT integration with TensorFlow 1.7 (Google Developers Blog: Announcing TensorRT integration with TensorFlow 1.7). However, this would still require using TensorFlow Serving for inference instead of TensorRT, which is not ideal: putting TensorFlow Serving on a Jetson is not trivial as far as I can tell, and I have found very few examples (and none with the current TF version).
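
For reference, my understanding is that the TF 1.7 integration is invoked roughly like this, with the optimized graph then run through the normal TensorFlow runtime (the model path and output node name are placeholders; I have not tried this on a Jetson):

[code]
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt  # TF-TRT integration in TF 1.7

# Load a frozen GraphDef (hypothetical path).
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Replace supported subgraphs with TensorRT ops; "logits" is a placeholder output name.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["logits"],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode="FP16")
[/code]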

Thanks!

Hi,

Yes, an alternative is to implement the RNN mechanism yourself, as in the ChatBot sample:
[url]https://github.com/NVIDIA-Jetson/JEP_ChatBot/blob/master/src/tensorNet.cpp#L145[/url]

This will cause slight performance degradation due to multiple kernel launches.

Regarding your last question, a forum user has compiled TensorFlow with TensorRT support for aarch64:
[url]https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-7-wheel-with-jetpack-3-2-/post/5248445/#5248445[/url]

Thanks.

Any news on this?