My image classification model has different performance on the board

Hello,

I have developed a custom model based on DenseNet-121 for a custom dataset. I trained and tested it on a Linux computer with a GPU and obtained the following results:
Validation loss: 0.0027
Validation accuracy: 100%
The model does not seem to be overfitting, since the training metrics are just as good as the validation metrics.
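
Roughly, the setup looks like this (a simplified sketch assuming TensorFlow 2.x; the dataset paths, image size, number of classes, and classifier head are placeholders rather than my exact values):

```python
import tensorflow as tf

# Simplified sketch: paths, image size, number of classes, and the
# classifier head are placeholders, not the exact values I use.
NUM_CLASSES = 2
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32)

# Pretrained DenseNet-121 backbone with a small custom head on top.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
model.save("densenet121_custom.h5")
```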

However, when I transferred the model to the Jetson Nano board and evaluated it there, I got:
Validation loss: 5.94642
Validation accuracy: 50.19%

The gap between the two results is far larger than I expected, and the evaluation code is identical on both platforms. Could anyone please advise me on why this is happening and suggest possible solutions?
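
For reference, the evaluation step is essentially the same short script on both machines, roughly like this (assuming TensorFlow 2.x; the model file name and validation-set path are placeholders):

```python
import tensorflow as tf

# Same evaluation on both machines; only batch_size differs
# (32 on the GPU machine, 3 on the Nano). Paths are placeholders.
IMG_SIZE = (224, 224)

model = tf.keras.models.load_model("densenet121_custom.h5")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=32, shuffle=False)

loss, acc = model.evaluate(val_ds)
print(f"Validation loss: {loss:.4f}  Validation accuracy: {acc:.2%}")
```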

Hi,

How do you run inference with the model? Do you use TensorRT?
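
On Jetson, TensorFlow models are usually accelerated by converting the SavedModel with TF-TRT, roughly like this (a minimal sketch assuming a TensorFlow 2.x SavedModel and the TensorRT-enabled Jetson TensorFlow build; the directory names are placeholders):

```python
# Minimal TF-TRT conversion sketch (TensorFlow 2.x SavedModel,
# TensorRT-enabled TensorFlow build; directory names are placeholders).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="densenet121_saved_model")
converter.convert()
converter.save(output_saved_model_dir="densenet121_trt")

# The converted model can then be loaded like any other SavedModel:
# model = tf.saved_model.load("densenet121_trt")
```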

Thanks.

The code is written in TensorFlow, but I am not using TensorRT.

I am new to this area, and I am not familiar with TensorRT.

Hi,

Could you share more details about the environment?

Are the versions used on the Jetson and the dGPU identical? (See the snippet below.)
How did you install the TensorFlow package on the Nano?
Are any error/warning logs shown during inference?
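
You can compare the environments by running a small script like this on both machines (assuming TensorFlow 2.x; on a 1.x build, tf.config.experimental.list_physical_devices can be used instead):

```python
# Run on both the training machine and the Nano, then compare the output.
import sys
import numpy as np
import tensorflow as tf

print("Python     :", sys.version)
print("NumPy      :", np.__version__)
print("TensorFlow :", tf.__version__)
print("GPUs       :", tf.config.list_physical_devices("GPU"))
```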

Thanks.

Hello,

The Python version I used to train the models is 3.10, while the Nano board has Python 3.6.
For the TensorFlow package I followed this solution: Official TensorFlow for Jetson Nano! - #2 by blarish
I do not receive any unusual warnings when running the algorithm.

The only thing I changed between the two scripts is the batch size: 32 on the GPU machine and 3 on the Nano board. Other than that, only the Python version differs, and I suspect this might be the cause of the change in performance.
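
To rule out the batch size, one quick check I can run on the GPU machine is to evaluate with both values and compare the results, since the evaluation metrics should not depend on how the samples are batched (again a rough sketch; paths and the model file name are placeholders):

```python
import tensorflow as tf

# Evaluation metrics should be (almost) identical for batch sizes 32 and 3,
# since the batch size only changes how samples are grouped during inference.
# Paths and the model file name are placeholders.
IMG_SIZE = (224, 224)
model = tf.keras.models.load_model("densenet121_custom.h5")

for bs in (32, 3):
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/val", image_size=IMG_SIZE, batch_size=bs, shuffle=False)
    loss, acc = model.evaluate(val_ds, verbose=0)
    print(f"batch_size={bs}: loss={loss:.4f}  accuracy={acc:.2%}")
```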

Hi,

Have you tried our official TensorFlow package?
https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html

Thanks.
