Different results from the same dataset: GTX 1050 Ti vs Tesla P100

My PC Env:
Windows 10
Python 3.6.10 (anaconda env)
tensorflow==1.14.0 (GPU build)

Nvidia driver 442.50 (GTX 1050 Ti)
CUDA 9.0
cuDNN for CUDA 9.0 (Windows)

My Server Env:
CentOS 7.4
Python 3.6.10
tensorflow==1.14.0 (GPU build)

Nvidia driver 384.183 (Tesla P100)
CUDA 9.0
cuDNN 9.0-linux-x64-v7.6.5.32

Problem
I trained a CNN using Keras on the same dataset on both machines and found that the train and validation accuracy is high (~90%) on the PC.

However, on the server the accuracy is low (train ~80%, val ~60%).
On the PC the train and validation losses also decrease together, but on the server they follow different trends.
GPU acceleration is working normally on both machines, yet the training results are very strange.

Attempted Solutions

I have tried various versions of TensorFlow and CUDA on the server, but the results are still the same. I also double-checked the dataset; it is identical on both machines. I wonder whether this happens because of the difference between the GPU devices.
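For reference, this is roughly how the runs could be pinned to the same seeds on both machines (a minimal sketch assuming TF 1.x with Keras; even with all seeds fixed, GPU kernels such as cuDNN convolutions can be non-deterministic, so a 1050 Ti and a P100 may still not match exactly):

```python
# Minimal seeding sketch (assumes TF 1.x + Keras; the seed value is illustrative).
# Even with all seeds fixed, GPU kernels such as cuDNN convolutions can be
# non-deterministic, so different GPUs may still not match bit-for-bit.
import os
import random
import numpy as np
import tensorflow as tf

SEED = 42
os.environ['PYTHONHASHSEED'] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.set_random_seed(SEED)  # graph-level seed in TF 1.x
```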

I also checked with the MNIST data, and the result is 48% accuracy on the server versus 98% on the PC.
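Something like this small CNN is enough to reproduce the MNIST comparison on both machines (an illustrative sketch, not the exact script from my runs; the layer sizes are placeholders):

```python
# Illustrative MNIST CNN used only to compare the two machines
# (not the exact script from the original runs).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype('float32') / 255.0
x_test = x_test[..., None].astype('float32') / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
print(model.evaluate(x_test, y_test))
```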

Can you post instructions to reproduce your results? Also, have you looked through the output logs for error or warning messages?

When I checked reproduction again with MNIST, the server and the PC produced the same result. But when I re-uploaded my dataset to the server and trained the CNN, the results were exactly the same as before (accuracy is low on the server).

And there are no error messages about my network.
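One quick way to confirm that both environments really match is to print the TF build and the visible devices on each machine (a sketch using standard TF 1.x calls; the output is simply whatever each install reports):

```python
# Sketch: print the TF build and visible devices on each machine
# to confirm both environments actually match (TF 1.x API).
import tensorflow as tf
from tensorflow.python.client import device_lib

print('TF version:', tf.__version__)
print('Built with CUDA:', tf.test.is_built_with_cuda())
print('GPU available:', tf.test.is_gpu_available())
print('GPU device:', tf.test.gpu_device_name())
for d in device_lib.list_local_devices():
    print(d.name, d.device_type)
```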

By “checked reproduction again with MNIST” do you mean that you uploaded pre-trained weights to the server and ran inference only? Or are there two training datasets here: MNIST (which trains to equal accuracy on server and desktop) and something else (that trains to better accuracy on the desktop)?

The networks were trained with the same architecture, and I didn’t use pre-trained weights. The PC and the server were both trained on the same dataset from scratch.

Sorry, what do you mean by “the server and the PC came out with the same result”?

With MNIST they both now achieve 98% accuracy, but with your production dataset the PC achieves higher accuracy than the server?

Solved.
It turned out to be a silly mistake on my end.
Because it’s such a simple mistake, if anyone else comes to this thread with a similar error, I’ll explain it to them.