RTX 3050 desktop CUDA compatibility

Can the desktop RTX 3050 graphics card be used for GPU computing with TensorFlow? It does not appear in the CUDA GPUs - Compute Capability list on the NVIDIA Developer site. Can I use the drivers for the laptop version of the same graphics card instead? How can I make it work given this limitation?

Yes, it can be used. Unfortunately, not all CUDA-capable GPUs appear in that list. If you install the CUDA toolkit, the drivers included with it will work with that GPU; you do not need to look for a special portable/laptop driver. In terms of the software install process, there is no difference between setting up that card and setting up, e.g., an RTX 3090. Follow the instructions in the install guide for your OS carefully.
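As a quick sanity check after following the install guide, you can confirm that both the driver CLI and the CUDA compiler ended up on your PATH. This is just a sketch using Python's standard library; the helper name `check_cuda_tools` is made up for illustration:

```python
import shutil

def check_cuda_tools():
    """Report whether the NVIDIA driver CLI and the CUDA compiler are on PATH."""
    tools = {name: shutil.which(name) for name in ("nvidia-smi", "nvcc")}
    for name, path in tools.items():
        print(f"{name}: {path if path else 'not found'}")
    return tools

if __name__ == "__main__":
    check_cuda_tools()
```

If `nvcc` is reported as not found even though the toolkit is installed, the usual cause is that the toolkit's `bin` directory was not added to PATH, which the install guide covers.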

If I download the CUDA toolkit, will I be able to use it for machine learning?
Is that the only program needed?

You’ll have to do more than just download the CUDA toolkit, although that or something like it will be a necessary step. I won’t be able to give you a tutorial here on how to run the machine learning stack of your choice on your system, but there are many forum posts on many forums that cover this topic.

With respect to the questions that are on-topic here: yes, you can use that GPU, and yes, you will need to install a GPU driver and CUDA (in some fashion) to be able to use that GPU for CUDA. Conceptually, and from a software process flow perspective, it is no different than setting up any other RTX 30 series GPU.
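To make the "no different than any other RTX 30 series GPU" point concrete: every consumer Ampere (RTX 30 series) desktop GPU, the 3050 included, has compute capability 8.6, so any toolchain that supports one of them supports all of them. A small sketch (the table below is hand-written from NVIDIA's published specifications, not queried from a device):

```python
# Compute capabilities of consumer Ampere (RTX 30 series) desktop GPUs,
# per NVIDIA's published specifications.
RTX_30_SERIES_CC = {
    "RTX 3050": "8.6",
    "RTX 3060": "8.6",
    "RTX 3070": "8.6",
    "RTX 3080": "8.6",
    "RTX 3090": "8.6",
}

def same_capability(gpus):
    """True if every listed GPU shares a single compute capability."""
    return len(set(gpus.values())) == 1

print(same_capability(RTX_30_SERIES_CC))  # True: the whole series is CC 8.6
```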

Is the application on Windows?

How can I make it work with tensorflow on Windows with this limitation?

There is no limitation.

I’ve already indicated to you I won’t be able to give you a tutorial on setting up a ML stack here.

Where can I find a tutorial?

What is done in this case?

That looks like a TensorFlow question. This isn't really a TensorFlow forum, or a PyTorch forum. No, I don't have further suggestions such as where to get a tutorial or where to ask this question, but the internet has quite a few questions like this. I think you will likely find answers with a Google search.

I won’t be able to help with setting up a ML stack here.

```
suraj@suraj-Dell-G15-5520:~$ nvidia-smi
Thu May  2 21:01:09 2024
+-----------------------------------------+------------------------+----------------------+
| NVIDIA-SMI 550.54.15       Driver Version: 550.54.15       CUDA Version: 12.4           |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf           Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3050 ...    Off |   00000000:01:00.0 Off |                  N/A |
| N/A   48C    P0             N/A /  80W  |      13MiB /  4096MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------+------------------------+----------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1179      G   /usr/lib/xorg/Xorg                              4MiB |
|    0   N/A  N/A      4169      G   /usr/lib/xorg/Xorg                              4MiB |
+-----------------------------------------+------------------------+----------------------+
suraj@suraj-Dell-G15-5520:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
```
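One thing worth confirming in output like the above is that the installed toolkit release is not newer than the CUDA version the driver reports, since a driver only supports toolkits up to the version it advertises. A small sketch comparing the two version strings (the function names here are made up for illustration):

```python
def parse_version(text):
    """Turn a dotted version string like '12.4' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def driver_supports_toolkit(driver_cuda, toolkit_cuda):
    """The driver's reported CUDA version must be >= the installed toolkit's."""
    return parse_version(driver_cuda) >= parse_version(toolkit_cuda)

# Values from the nvidia-smi and nvcc output above: both report 12.4.
print(driver_supports_toolkit("12.4", "12.4"))  # True: versions match
print(driver_supports_toolkit("12.4", "12.6"))  # False: toolkit newer than driver
```

In the output above both report 12.4, so the driver and toolkit are consistent with each other.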

I have installed CUDA 12.4 and a cuDNN build compatible with this CUDA version, but TensorFlow is still not able to locate the GPU.
```
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-05-02 20:54:42.118245: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-05-02 20:54:42.119384: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-05-02 20:54:42.142393: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-05-02 20:54:42.142677: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-02 20:54:42.563018: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-05-02 20:54:44.052979: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at linux/Documentation/ABI/testing/sysfs-bus-pci at v6.0 · torvalds/linux · GitHub
2024-05-02 20:54:44.072726: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at Install TensorFlow with pip for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
```
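The log above already names the problem: "Could not find cuda drivers ... GPU will not be used" followed by "Cannot dlopen some GPU libraries" means the pip-installed TensorFlow cannot load the CUDA/cuDNN libraries, even though the system toolkit is present. A sketch of reading such a log programmatically, as a way of separating the fatal messages from the benign ones (the marker strings are taken verbatim from the output above; the helper name is made up):

```python
# Markers TensorFlow prints when GPU setup is incomplete, taken from the log above,
# mapped to what each one usually indicates.
FAILURE_MARKERS = {
    "Could not find cuda drivers": "CUDA runtime libraries not visible to TensorFlow",
    "Cannot dlopen some GPU libraries": "cuDNN/CUDA libraries missing from the loader path",
    "Could not find TensorRT": "TensorRT not installed (optional; safe to ignore)",
}

def triage_tf_log(log_text):
    """Return the explanation for every failure marker present in a TF startup log."""
    return [reason for marker, reason in FAILURE_MARKERS.items() if marker in log_text]

sample = (
    "Could not find cuda drivers on your machine, GPU will not be used.\n"
    "TF-TRT Warning: Could not find TensorRT\n"
    "Cannot dlopen some GPU libraries.\n"
)
for reason in triage_tf_log(sample):
    print(reason)
```

In practice, for TensorFlow 2.14 and later on Linux, the usual fix is to let pip install matching CUDA libraries alongside TensorFlow with `pip install tensorflow[and-cuda]`, rather than relying on TensorFlow finding the system-wide toolkit; the "Install TensorFlow with pip" guide the log points to covers this.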