Hi everybody!
I trained a TensorFlow/Keras (TF version 2.3) model for image classification on my PC, saved it as a SavedModel, and transferred it to the Jetson Nano to run inference on images. This works fine with the SavedModel.
I wanted to optimize the inference speed by following the TF-TRT user guide (https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html).
I used the following code to optimize the SavedModel:
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Paths to the original SavedModel and the TF-TRT output directory
input_saved_model_dir = '/home/daneto/TFtoTRT/my_model'
output_saved_model_dir = '/home/daneto/TFtoTRT/output_saved_model_dir'

# Convert with default parameters and save the optimized SavedModel
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
When I try to use this new TF-TRT optimized model in my inference program, I run into some problems.
Calls like model.summary() and model.predict() no longer work:
I get an AttributeError: 'UserObject' has no attribute 'summary' and 'UserObject' has no attribute 'predict'.
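For illustration, here is a minimal sketch of how the error shows up, assuming the converted model is loaded with tf.keras.models.load_model, the same way I load the original model:

import tensorflow as tf

# Load the TF-TRT converted SavedModel like a regular Keras model
model = tf.keras.models.load_model('/home/daneto/TFtoTRT/output_saved_model_dir')

model.summary()   # AttributeError: 'UserObject' has no attribute 'summary'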
Is it possible to use model.predict with the TF-TRT converted model, or is this functionality lost in the conversion?
My understanding is that TF-TRT gives me an optimized model in the SavedModel format that can be used just like the 'original' SavedModel, only faster.
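From reading the user guide, I would expect that inference can instead go through the SavedModel's serving signature. A rough sketch of what I mean (the signature key 'serving_default' and the 1x224x224x3 input shape are assumptions for illustration, not necessarily my actual setup):

import numpy as np
import tensorflow as tf

# Load the converted model as a generic SavedModel instead of a Keras model
saved_model = tf.saved_model.load('/home/daneto/TFtoTRT/output_saved_model_dir')
infer = saved_model.signatures['serving_default']

# Dummy input; shape and dtype are placeholders for my real preprocessing
x = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
outputs = infer(x)  # returns a dict of named output tensors

If calling the signature like this is the intended approach, I can adapt my inference script accordingly.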
My Jetson Nano is running L4T version 4.4.
I have attached the code I use to run inference on the images.
InferenceImagesJetson.txt (781 Bytes)
I hope you can help me to get this running on the Jetson.
Kind regards
Daniel