CUDA, cuDNN, and Visual Studio 2022: enable GPU processing

I’m trying to build my first machine learning program in Visual Studio, and I want it to process my model on my GPU instead of the CPU. I have Visual Studio 2019 and 2022, along with CUDA 11.8 and cuDNN 8.6. I’ve installed Python 3.9, 3.10, and 3.11 and followed the instructions I found online (linked below), but it still doesn’t recognize my GPU, which is an RTX 3080. Any assistance pointing me in the right direction would be greatly appreciated.

https://www.yodiw.com/install-tensorflow-cudavisual-studio-2022-in-windows-11-for-gpu-modelling/

Hi @mike11d11 ,
Please check the cuDNN/CUDA version compatibility from here.

Also, check whether your drivers are up to date.

Thanks

Yes, this is what I looked at, along with several other sources online. This is what I’ve got so far; I just can’t figure out what I’m doing wrong…

Windows 10 Pro, fresh install
Visual Studio 2019 and 2022 Community Edition
CUDA 11.8
cuDNN 8.6.0.163
copied the DLLs and also created a directory for cuDNN
set the environment/system variables
Anaconda (Python 3.9.12)

What else could i be missing?

I run this in Anaconda but still get False for available GPUs, even though I have two RTX 3060 Tis installed.

python
import tensorflow as tf
tf.__version__
len(tf.config.list_physical_devices('GPU')) > 0
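A bare True/False doesn’t say much on its own. For a bit more signal, a diagnostic along these lines (a sketch assuming TensorFlow 2.x; the `gpu_available` helper name is just for illustration) prints the CUDA/cuDNN versions the installed wheel was built against, which need to match what is installed on the machine:

```python
def gpu_available():
    """Return True if TensorFlow can see at least one GPU, else False."""
    try:
        import tensorflow as tf
    except ImportError:
        return False  # TensorFlow is not installed in this environment

    print("TF version:", tf.__version__)
    print("Built with CUDA:", tf.test.is_built_with_cuda())
    # Build info shows which CUDA/cuDNN versions the wheel expects;
    # a mismatch with the installed toolkit is a common cause of "no GPU".
    info = tf.sysconfig.get_build_info()
    print("CUDA:", info.get("cuda_version"), "cuDNN:", info.get("cudnn_version"))
    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs:", gpus)
    return len(gpus) > 0

if __name__ == "__main__":
    gpu_available()
```

If `Built with CUDA` prints False, the installed wheel is CPU-only and no amount of driver or toolkit fiddling will make it see the cards.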

Hi @mike11d11 ,
Can you please share the error logs with us?

OK, this is what I ended up having to do in order to get it running with the GPU on my machine:

downgrade to Python 3.8.5
pip install tensorflow-gpu==2.7.1
pip install protobuf==3.20.1
pip install grpcio==1.48.2
pip install pandas --user
pip install scikit-learn
pip install pyodbc
pip install sqlalchemy
pip install numpy==1.21
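After pinning versions like this, it’s worth confirming what actually got installed, since a silently upgraded protobuf or numpy is a common source of breakage. A small standard-library sketch (Python 3.8+; the `installed_version` helper is just for illustration):

```python
def installed_version(pkg):
    """Return the installed version string for pkg, or None if absent."""
    from importlib.metadata import version, PackageNotFoundError
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Check the pinned packages from the list above.
for pkg in ["tensorflow-gpu", "protobuf", "grpcio", "numpy"]:
    print(pkg, installed_version(pkg) or "not installed")
```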

I’m now seeing an issue where it will only process using one of my GPUs; if I include both of my RTX 3060 Tis, I get the error below:

No OpKernel was registered to support Op 'NcclAllReduce' used by {{node Adam/NcclAllReduce}} with these attrs: [reduction="sum", shared_name="c1", T=DT_FLOAT, num_devices=2]
Registered devices: [CPU, GPU]
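That error usually means the TensorFlow build has no NCCL kernels, which is the case on Windows: NCCL is Linux-only, so `MirroredStrategy`'s default `NcclAllReduce` can't run there. A common workaround (a sketch, assuming TensorFlow 2.x; `make_strategy` is just an illustrative wrapper) is to pass a non-NCCL cross-device op explicitly:

```python
def make_strategy():
    """Build a MirroredStrategy that avoids NCCL (unavailable on Windows)."""
    try:
        import tensorflow as tf
    except ImportError:
        return None  # TensorFlow is not installed in this environment
    # HierarchicalCopyAllReduce copies gradients between devices without
    # NCCL; tf.distribute.ReductionToOneDevice() is an alternative choice.
    return tf.distribute.MirroredStrategy(
        cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())

strategy = make_strategy()
if strategy is not None:
    # Build and compile the model inside the strategy scope so its
    # variables are mirrored across both GPUs.
    with strategy.scope():
        pass  # model definition goes here
```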