Unallocated GPU memory with Deep Learning Jupyter

I’m currently trying to train a deep learning model, and for the past couple of days I’ve had no GPU detected within my WSL2 environment at all. I’ve gone through all the manual installation processes for CUDA, cuDNN, and TensorFlow. My problem is that my environment now sees the GPU and says it’s available; however, nvidia-smi says there’s “N/A” memory allocated to python. Am I doing something wrong? I’ve entered my shell paths manually, so there shouldn’t be any issues there. When I specifically run my network on the GPU using `with tf.device`, my kernel crashes. (I’m on VS Code running WSL, if that helps.)
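For reference, this is roughly the kind of sanity check I mean (a rough sketch, assuming TensorFlow 2.x in the WSL2 env; the `/GPU:0` index and the memory-growth lines are just guesses on my part, not from any setup guide):

```python
import tensorflow as tf

# Check which GPUs TensorFlow can see inside WSL2
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible to TensorFlow:", gpus)

# Ask TensorFlow to allocate GPU memory on demand instead of all at once
# (has to be set before any GPU op runs)
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# Pin a trivial op to the first GPU to see whether it runs or crashes the kernel
with tf.device('/GPU:0'):
    x = tf.random.normal((1000, 1000))
    y = tf.matmul(x, x)
print("Matmul executed on:", y.device)
```

The first print does show the GPU, but it’s the `with tf.device` part where the kernel dies.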
Any advice would be greatly appreciated; I’ve been stuck on this for days and I’m just a pleb student :/

Hi @jamesthorogood26,
Apologies for the delayed response, checking on this.