Model returns only NaN values on RTX A5000 but not on GTX 1080 Ti

I have replaced a GTX 1080 Ti graphics card with an RTX A5000 in a desktop machine and reinstalled Ubuntu, upgrading from 16.04 to 20.04 to meet the requirements. But now I can't retrain or predict with our current model: Keras hangs for a very long time when loading the model, and all predicted results are NaN values. We use Keras 2.2.4 with TensorFlow 2.1.0 and CUDA 10.1.243, which I installed using Conda, and I have tried different drivers: 470, 495, and 510.
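
For reference, this is roughly how I reproduce it (a minimal sketch; the model path and input shape here are placeholders, not our actual model):

```python
import numpy as np
import tensorflow as tf

# Confirm TensorFlow actually sees the new GPU.
print(tf.config.list_physical_devices('GPU'))

# Placeholder model path and input shape; our real model behaves the same.
model = tf.keras.models.load_model('model.h5')  # this step hangs for a long time
x = np.random.rand(1, 224, 224, 3).astype(np.float32)

pred = model.predict(x)
print('any NaN:', np.isnan(pred).any())  # True on the A5000, False on the 1080 Ti
```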

If I put the old GTX 1080 Ti card back into the machine, the code works fine.

Any idea what could be wrong? Could it be that the A5000 does not support the same models as the old 1080 Ti card?

Hi,

This doesn't look like a TensorRT issue. We recommend that you post your concern on a TensorFlow/Keras related platform.
If it is related to TensorRT, could you please share the version you're using and a script/model to reproduce the issue?

Thank you.

Sorry, I misplaced the question.

Anyway, it turned out that the issue was that CUDA 10 does not support the Ampere architecture: the RTX A5000 has compute capability 8.6, which requires CUDA 11.1 or newer.
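
For anyone hitting the same symptom, you can check the compute capability that TensorFlow reports for the card. Note that `tf.config.experimental.get_device_details` needs TensorFlow 2.4 or newer, so this is a check to run after upgrading, not with the TF 2.1.0 setup above:

```python
import tensorflow as tf

# Needs TensorFlow 2.4+; prints e.g. (8, 6) for an RTX A5000 (Ampere).
# CUDA 10.x only supports up to compute capability 7.5 (Turing), so code
# built against CUDA 10 misbehaves on this card.
for gpu in tf.config.list_physical_devices('GPU'):
    details = tf.config.experimental.get_device_details(gpu)
    print(details.get('device_name'), details.get('compute_capability'))
```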

I'll accept this as the answer, since I don't have permission to delete the question.