We have the following model:
from tensorflow.keras.applications import vgg19
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed

base_model = vgg19.VGG19(include_top=False, weights='imagenet', input_shape=(160, 160, 3))
cnn = Sequential()
cnn.add(base_model)  # wrap the frozen VGG19 backbone
model = Sequential()
model.add(TimeDistributed(cnn, input_shape=(30, 160, 160, 3)))  # apply the CNN to each of the 30 frames
With a fully frozen graph, the model is 85 kB. Is there any reason this model will not run on the Jetson Nano platform? Looking at the support matrix, there is no mention of the TimeDistributed layer, which suggests it won't compile for TensorRT. Does the Jetson platform fully support the TensorFlow framework, or are there limitations?
Not all TensorFlow layers are supported.
You can find the support matrix below:
Did you encounter any error when converting the model to TensorRT?
The TimeDistributed layer is a composition of basic operations.
Please check whether each of those basic operations is supported instead.
Mask R-CNN also uses this layer, and it can be inferenced with TensorRT.
So it's recommended to give it a try.
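The point above can be illustrated in plain NumPy: TimeDistributed is essentially a reshape that merges the batch and time dimensions, the wrapped layer applied once, and a reshape back. TensorRT therefore only needs to support the underlying ops, not a "TimeDistributed" op itself. This is a minimal sketch with the shapes from the model in question; the per-frame function is a stand-in (a per-channel mean), not the real VGG19 backbone.

```python
import numpy as np

batch, time = 2, 30
frames = np.random.rand(batch, time, 160, 160, 3)

def per_frame_layer(x):
    # stand-in for the wrapped CNN: reduce each (160, 160, 3) frame
    # to one value per channel -> shape (N, 3)
    return x.mean(axis=(1, 2))

# What TimeDistributed does under the hood:
flat = frames.reshape(batch * time, 160, 160, 3)  # merge batch and time dims
out = per_frame_layer(flat)                        # apply the layer to every frame
out = out.reshape(batch, time, -1)                 # split the dims back out

assert out.shape == (2, 30, 3)
```

The result is identical to looping over the time axis and applying the layer frame by frame, which is why only reshape plus the wrapped layer's ops need TensorRT support.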
Thank you for the response. Let me clarify my question: does the Jetson Nano itself, not TensorRT, support all TensorFlow layers? Would I be able to run my model on the Jetson Nano as is, or would it require me to compile it to a new format? I haven't been able to find any equivalent support matrix for TensorFlow like there is for TensorRT. I'm trying to understand what the Jetson Nano supports from the TensorFlow framework as is.
You can use TensorFlow directly on Nano.
The prebuilt library can be installed with the following instructions:
In general, the usage is similar to desktop except for some memory issues due to Nano’s limited resources.
However, since TensorFlow is a relatively heavy library, it's recommended to convert the model to TensorRT for better performance.
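One common route (an illustrative sketch of the workflow, not the only one) is to export the Keras model to ONNX with tf2onnx and then build a TensorRT engine directly on the Nano with trtexec, which ships with JetPack. The file and directory names below are placeholders.

```shell
# Export the SavedModel to ONNX (assumes the model was saved to ./saved_model)
python3 -m tf2onnx.convert --saved-model saved_model --output model.onnx --opset 13

# Build a TensorRT engine on the Jetson and benchmark it
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

If trtexec reports an unsupported op during the build, that tells you exactly which of the TimeDistributed layer's underlying operations is the problem.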