Training a GAN with an RTX 3060 as the compute accelerator always produces NaN

When I use an RTX 3090 as the compute accelerator for deep learning training, the training output always shows NaN.
This does not happen when I switch to a TITAN Xp with everything else kept the same.
The GPU driver version is 455.45.01 and the framework is tensorflow-gpu 2.1.0.
The training code follows this project: https://github.com/clovaai/stargan-v2.
My guess is that the driver is not yet suitable for using the 3090 as a compute accelerator. As far as I know, the recently released CUDA driver (light2.5) does not yet handle every workload that the TITAN XP can, and this project seems to be one of those cases.
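For reference, a minimal sketch of how one could catch where the NaNs first appear with tf.debugging.check_numerics (this uses a toy model, not the actual StarGAN v2 training step, and should work on tensorflow-gpu 2.1.0):

```python
import tensorflow as tf

# Toy model standing in for the generator, just to show where the checks go
# in a custom training loop.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = tf.reduce_mean(tf.square(pred - y))
        # Fail loudly at the first NaN/Inf instead of silently printing
        # "nan" for the rest of training.
        loss = tf.debugging.check_numerics(loss, "loss is NaN/Inf")
    grads = tape.gradient(loss, model.trainable_variables)
    grads = [tf.debugging.check_numerics(g, "NaN/Inf in gradients") for g in grads]
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal([8, 4])
y = tf.random.normal([8, 1])
print(train_step(x, y).numpy())
```

Newer TensorFlow versions also have tf.debugging.enable_check_numerics(), which instruments every op, though I am not sure it is available in 2.1.0.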

Please run gpu-burn for 10 minutes on the 3090 to check for hardware defects.

We only see this problem when we use the RTX 3090 to accelerate this GAN training. We also tested other projects, and the 3090 performed well, but with a strange phenomenon: it takes about 10 to 20 minutes before the training actually starts running at full speed.
So the card appears to be fine, with no hardware defects. It is a Gigabyte Turbo Edition.

3060 or 3090? The time it needs before training starts points to your CUDA kernels not being compiled for the Ampere compute capability, so they are JIT-compiled, which may or may not work.
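A quick way to confirm what TensorFlow sees, as a sketch: the prebuilt tensorflow-gpu 2.1.0 wheel predates Ampere (compute capability 8.6), so its kernels have to be JIT-compiled from PTX at startup, which would explain the long delay. You can print the compute capability of each visible GPU like this:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# Print every visible GPU together with its compute capability.
# An RTX 3090/3060 should report "compute capability: 8.6" (Ampere),
# while a TITAN Xp reports 6.1 (Pascal).
for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        print(dev.name, "-", dev.physical_device_desc)
```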

So the phenomenon is caused by JIT compilation. And why might it work or not work? Is it a compatibility issue?

You might want to ask that over at the cuda forum:
https://forums.developer.nvidia.com/c/accelerated-computing/cuda/cuda-programming-and-performance/7
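If the delay really is PTX JIT compilation, one thing that may be worth trying is enlarging the CUDA driver's JIT cache so the compiled kernels are kept between runs instead of being recompiled every time. A sketch, assuming the standard CUDA driver cache environment variables (the 4 GB size is just an example value):

```python
import os

# These must be set before the CUDA context is created, i.e. before
# importing TensorFlow. CUDA_CACHE_MAXSIZE is in bytes; if the default is
# too small for the full set of JIT-compiled kernels, the cache gets
# evicted and everything is recompiled on every run.
os.environ.setdefault("CUDA_CACHE_MAXSIZE", str(4 * 1024 ** 3))  # ~4 GB
os.environ.setdefault("CUDA_CACHE_PATH", os.path.expanduser("~/.nv/ComputeCache"))

import tensorflow as tf  # imported after setting the cache variables
print(tf.config.list_physical_devices("GPU"))
```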

OK, thanks for your answers. Hope you have a great day.