TensorFlow inference of the very first image takes about 1 min

Hi

I have a problem using TensorFlow on the Xavier board.

Loading the model works as well as it does on an x86 PC, but inference on the very first image takes about 1 minute. After that, inference runs fine and takes only about 1 second.

What should I do to accelerate the first image inference?

board: Xavier
JetPack: 4.2 (L4T 32.2)
TensorFlow: 1.13.1
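
Roughly how I measure it (a simplified sketch; the model path and tensor names are placeholders for my real graph):

```python
import time
import numpy as np
import tensorflow as tf

# Load the frozen graph ("frozen_model.pb" is a placeholder path).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

# Time a few inferences with a dummy image ("input:0"/"logits:0" are
# placeholder tensor names).
image = np.random.rand(1, 224, 224, 3).astype(np.float32)
with tf.Session() as sess:
    for i in range(3):
        start = time.time()
        sess.run("logits:0", feed_dict={"input:0": image})
        print("inference %d: %.1f s" % (i, time.time() - start))
# On Xavier, inference 0 prints ~60 s; the later ones ~1 s.
```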

Thanks.

Hi,

Did you enable the TF-TRT option?
If yes, TensorRT needs to compile the TF model on the first launch, which might take some time.
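
For reference, TF-TRT in TensorFlow 1.13 is typically enabled through tensorflow.contrib.tensorrt, roughly like the sketch below (the graph path and output node name are placeholders). If your code never calls this conversion, TF-TRT is not involved:

```python
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt

# Load a frozen graph ("frozen_model.pb" is a placeholder path).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Replace TensorRT-compatible subgraphs with TRT engines. This
# compilation step is what makes the first launch slow.
trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=["logits"],                  # placeholder output node
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16")
```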

Thanks.

Hi, AastaLLL

TF-TRT isn’t enabled.

Thanks.

Hi,

It looks like this issue is specific to the TensorFlow package.
First, may I know which package you are using?

Here are the TensorFlow wheels we built for Jetson users:
https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html

Thanks.

Hi, AastaLLL

TensorFlow: 1.13.1
L4T: 32.2
JetPack: 4.2

Thanks.

Hi,

May I know the source of your TensorFlow package?
Did you use our official package?

Thanks.

Hi,

I use the official package; see
https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html#install

Thanks.

Hi,

This sounds like a TensorFlow issue.

Please note that TensorFlow is not optimized for the Jetson platform.
Non-optimized memory allocation and resource arrangement can lead to slower performance on Jetson; most of this one-time cost is paid on the first inference.
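
As a workaround, a common practice is to run one dummy warm-up inference right after loading the model, so the one-time cost is paid before any real image arrives. A minimal TensorFlow 1.x sketch (the path and tensor names are placeholders):

```python
import numpy as np
import tensorflow as tf

# Load the frozen graph ("frozen_model.pb" is a placeholder path).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

with tf.Session() as sess:
    # Warm-up: pays the one-time CUDA/cuDNN initialization and memory
    # allocation cost, so the first real image runs at steady-state speed.
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
    sess.run("logits:0", feed_dict={"input:0": dummy})
```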

Is TensorRT an option for you?
We always recommend that users convert their model to TensorRT for better performance.
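
For a TensorFlow 1.x model on JetPack 4.2 (TensorRT 5), one possible path is to freeze the graph, convert it to UFF, and build a serialized engine once, roughly as below (file paths, node names, and the input shape are placeholders for your model):

```python
import tensorrt as trt
import uff

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Convert the frozen TensorFlow graph to UFF.
uff_buffer = uff.from_tensorflow_frozen_model("frozen_model.pb", ["logits"])

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))   # CHW input shape
    parser.register_output("logits")
    parser.parse_buffer(uff_buffer, network)

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30
    engine = builder.build_cuda_engine(network)

    # Serialize once; later runs deserialize this file and skip the
    # expensive build step entirely.
    with open("model.engine", "wb") as f:
        f.write(engine.serialize())
```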

Thanks.