TF-TRT failed to build engine on Jetson Nano

Hello, I am using a Jetson Nano with JetPack 4.3, TensorFlow 2.3.1 and TensorRT 7.1.3.
I have a Keras model that I converted to a TF-TRT model.
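For reference, the conversion I use looks roughly like the sketch below (paths and the precision mode are placeholders; this is a minimal sketch assuming the Keras model was first exported with `model.save(saved_model_dir)` and uses the `TrtGraphConverterV2` API shipped with TF 2.x):

```python
# Minimal TF-TRT conversion sketch (TF 2.x). Paths are placeholders.
def convert_to_tftrt(saved_model_dir, output_dir, precision="FP16"):
    # Import inside the function so this file can be loaded on machines
    # without TensorFlow / TensorRT installed.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
        precision_mode=precision)
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        conversion_params=params)
    converter.convert()   # builds the TF-TRT graph; engines themselves are
                          # built lazily at first inference unless
                          # converter.build() is called explicitly
    converter.save(output_dir)
```

Calling `converter.build(input_fn=...)` before `save()` forces engine construction at conversion time, which surfaces engine-build failures early instead of at inference.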

When performing inference on the model, I get the following error:

TF-TRT Warning: Engine creation for PartitionedCall/TRTEngineOp_0_0 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine

During inference I get:

W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:629] TF-TRT Warning: Engine retrieval for input shapes: [[1,100,68,3]] failed. Running native segment for PartitionedCall/TRTEngineOp_0_0

What does this mean?

It seems like TRT is not building engines, yet inference still works.
I ran the same inference on another PC (TF 2.4.1 and TRT 7.2) and do not get this error there. I have also compared the inference results between the Keras and TF-TRT models, and they are identical (both on the Jetson Nano with the error and on the PC without it).

Why are my results the same? How do I solve this? Thank you!

Hi,

JetPack 4.3 contains TensorRT v6.0 rather than v7.1.3.
Are you using JetPack 4.4.x or JetPack 4.5?
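One quick way to verify which L4T/JetPack and TensorRT versions are actually installed on the board (commands assume a standard JetPack install on the Jetson):

```shell
# L4T release string (maps to a JetPack version, e.g. R32.4.4 -> JetPack 4.4.1)
cat /etc/nv_tegra_release

# Installed TensorRT / nvinfer packages and their versions
dpkg -l | grep -E "tensorrt|nvinfer"
```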

If you install TensorRT v7.1.3 on a JetPack 4.3 environment, it won't work due to the GPU driver dependency, and this might be the cause of your error.

Thanks.

I apologize, I have JetPack 4.4.1 apparently.
Does TensorRT 7.2 work with it?

A small appendix on classification accuracy and timing performance:

I ran the Keras (TensorFlow) model on the Jetson Nano and get an inference time of about 250 ms per image.

The TF-TRT model on the Jetson Nano gives me an inference time of about 1.1 ms per image (FP32 precision) and 0.9 ms per image (FP16 precision).

(Images are 100x68x3.)

And again, the recognition accuracy is unchanged. You mentioned I am not receiving the benefits of TF-TRT acceleration, but experimentally I see a speed improvement of about two orders of magnitude. How can that be? I wonder if this is perhaps a bug and I actually am getting the acceleration despite the warning (not being presumptuous here, just a flow of thoughts). Or would it be even faster if I did get the acceleration?
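For what it's worth, the quoted timings can be sanity-checked numerically; a quick sketch of the arithmetic:

```python
import math

keras_ms = 250.0   # Keras model, ms per image (from the measurements above)
tftrt_ms = 1.1     # TF-TRT FP32, ms per image

speedup = keras_ms / tftrt_ms   # ratio of the two inference times
orders = math.log10(speedup)    # speedup expressed in orders of magnitude
print(f"speedup: {speedup:.0f}x, {orders:.2f} orders of magnitude")
# prints "speedup: 227x, 2.36 orders of magnitude"
```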

Waiting for your thoughts on this,

Roberto

Hi,

The log is a harmless warning and doesn't affect the accuracy.

Please note that the TensorFlow->TensorRT conversion is applied at the layer level.
It's possible that some layers cannot be converted to TensorRT and fall back to TensorFlow instead.

You can find the supported layers in the document below:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#tf-115-20
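One way to see how much of the graph actually went to TensorRT is to count the `TRTEngineOp` nodes in the converted SavedModel. A hedged sketch (the signature key `serving_default` is the usual default but may differ for your model):

```python
def count_trt_engine_ops(saved_model_dir):
    # Import here so this snippet can live in a file that is importable
    # without TensorFlow installed.
    import tensorflow as tf

    model = tf.saved_model.load(saved_model_dir)
    func = model.signatures["serving_default"]
    graph_def = func.graph.as_graph_def()

    # TRTEngineOp nodes are the fused TensorRT segments; everything else
    # still runs as plain TensorFlow ops (the "native segment" fallback).
    n = sum(1 for node in graph_def.node if node.op == "TRTEngineOp")
    # Fused ops often end up inside library functions, so check those too.
    n += sum(1 for f in graph_def.library.function
             for node in f.node_def if node.op == "TRTEngineOp")
    return n
```

A count of zero would mean no segment was converted and everything falls back to native TensorFlow.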

Thanks.