Optimizing a TensorFlow SavedModel with TF-TRT on Jetson Nano

Hi everybody!

I trained a TensorFlow/Keras (TF version 2.3) model for image classification on my PC, saved it as a SavedModel, and transferred it to the Jetson Nano for running inference on images. This works fine with the SavedModel.
I wanted to optimize the inference speed following the TF-TRT User Guide (https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html).
I used the following code to optimize the SavedModel:

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
input_saved_model_dir = '/home/daneto/TFtoTRT/my_model'
output_saved_model_dir = '/home/daneto/TFtoTRT/output_saved_model_dir'
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

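(Side note for anyone tuning this further: `TrtGraphConverterV2` also accepts conversion parameters such as the precision mode and workspace size, which can matter on a memory-limited board like the Nano. The sketch below follows the TF 2.x `trt_convert` API; the FP16 choice and workspace value are purely illustrative, not something from this thread.)

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Illustrative settings: FP16 often speeds things up on the Nano's GPU,
# and a small workspace limit suits its 4 GB of shared memory.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,
    max_workspace_size_bytes=1 << 26)
print(params.precision_mode)  # FP16

# On the Jetson (a TensorRT-enabled TensorFlow build), pass the params in:
# converter = trt.TrtGraphConverterV2(
#     input_saved_model_dir=input_saved_model_dir,
#     conversion_params=params)
# converter.convert()
# converter.save(output_saved_model_dir)
```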
When I try to use this new TF-TRT optimized model with my inference program, I run into some problems.
Things like model.summary() and model.predict() aren't working anymore.
I get an AttributeError: 'UserObject' object has no attribute 'summary' and 'UserObject' object has no attribute 'predict'.

Is it possible to use model.predict with the TF-TRT converted model, or is this functionality lost in the conversion?
In my understanding, TF-TRT gives me an optimized model in the SavedModel format that can be used just like the 'original' SavedModel, only faster.

My Jetson Nano is running L4T version 4.4.
I have attached the code I use for running inference on the images.
InferenceImagesJetson.txt (781 Bytes)

I hope you can help me to get this running on the Jetson.

Kind regards
Daniel

Hi,

Please note that TensorRT models are not portable, since the optimization is specific to the GPU architecture.
You will need to do the TF-TRT conversion directly on the Jetson Nano.

May I know what kind of TensorFlow model you bring to the Nano?
Is it the original TensorFlow model without TensorRT optimization?

Thanks.

Thank you for your answer!

The model was built with TensorFlow 2.3 as a Keras Sequential model and trained on my PC.
The TF-TRT conversion was done on the Jetson Nano with the code I showed in my first post.
I have attached the code for the model and the model summary. The model is trained on images of skin diseases belonging to 3 categories.
No TensorRT optimization was done on the original model.
HAM_ImageClassification.txt (3.2 KB)


I hope you can tell me whether the TF-TRT converted model is still usable with model.predict() from tf.keras.Model.

Thank you very much for your effort
Daniel

Hello AastaLLL.

Did you have a look into it?
I would be very thankful for an update.

Thank you!

Hi.
I'm still hoping for an answer.
Waiting without any response is a bit frustrating.
Thank you!

Hi,

Really sorry for the late reply.

In general, TF-TRT should have the same support range as TensorFlow, but with some acceleration from TensorRT.
So, if a model can run inference with TensorFlow, it should also be able to run inference with TF-TRT.

Based on the log below, the error seems to be caused by a different version of the TensorFlow package:

'UserObject' object has no attribute 'summary' and 'UserObject' object has no attribute 'predict'.

May I know which package you installed on the Nano?
We released some TensorFlow v2.3 packages recently:

Would you mind aligning the TensorFlow version with the one in your training environment?

Thanks, and please let us know.

Hello.

The Jetson Nano runs L4T version 4.4 with TensorFlow version 2.3.1.
I trained the model again on my desktop PC, which also runs TF 2.3.1.
But I still can't make predictions like I can with my original SavedModel.
It seems I'm not the only one facing this problem:


Do you have another suggestion on how to use the TF-TRT model to make predictions?
Thank you.
Daniel

OK, I figured it out myself by now.
Things like model.summary() and model.predict() are Keras functionalities.
But the model produced by TF-TRT isn't a Keras model anymore, so the Keras methods no longer work. Instead, you load it with tf.saved_model.load() and call its serving signature.
I found a useful example.
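For anyone who lands here later, here is a minimal sketch of the workaround. It uses a tiny stand-in model instead of my skin-disease classifier, saves it as a plain SavedModel, and drives inference through the serving signature; the 'probabilities' output name and the scratch path are made up for the demo:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for the real classifier (3 output classes).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# Save as a SavedModel with an explicit serving signature.
@tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32)])
def serve(x):
    return {'probabilities': model(x)}

tf.saved_model.save(model, '/tmp/demo_saved_model',
                    signatures={'serving_default': serve})

# tf.saved_model.load returns a generic trackable object, just like the
# output of TrtGraphConverterV2: no .summary(), no .predict().
loaded = tf.saved_model.load('/tmp/demo_saved_model')
print(hasattr(loaded, 'predict'))  # False

# Inference goes through the signature instead of model.predict().
infer = loaded.signatures['serving_default']
x = tf.constant(np.random.rand(1, 8).astype(np.float32))
probs = infer(x)['probabilities']
print(probs.shape)  # (1, 3)
```

With the TF-TRT output directory in place of /tmp/demo_saved_model, the same signature call is how you make predictions.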


Happy Thanksgiving
Daniel