Does GTX 1050ti or 1650 for notebook support tensorflow-gpu

I am planning to buy a laptop with an Nvidia GeForce GTX 1050 Ti or 1650 GPU for deep learning with tensorflow-gpu, but neither card appears in the supported list of CUDA-enabled devices.
Some people in the NVIDIA community say that these cards support CUDA. Can you please tell me whether these laptop cards support tensorflow-gpu or not?
Link to the official list of CUDA enabled devices: https://developer.nvidia.com/cuda-gpus


Both the GTX 1050 Ti and GTX 1650 support CUDA, and either is new enough to be supported by TensorFlow. The 1050 Ti has compute capability (CC) 6.1 and the 1650 has CC 7.5; TensorFlow currently requires CC 3.5 or higher. If you are planning to run training (rather than just inference), you will want to make sure the frame buffer (GPU memory) is large enough to support your models of interest.
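If you already have TensorFlow installed, you can query the compute capability of any GPU it detects directly, rather than looking it up in a table. A minimal sketch using the TF 2.x API (it prints nothing if no GPU is visible to TensorFlow):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means no usable GPU/driver.
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    # get_device_details returns a dict that may include 'compute_capability'
    # as a (major, minor) tuple, e.g. (7, 5) for a GTX 1650.
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name, details.get('compute_capability'))
```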


Thanks for the reply and clarification. Actually, I asked this question of an NVIDIA customer support representative and he said that these cards don’t support tensorflow-gpu. So, according to you, I can install tensorflow-gpu on a laptop with a GTX 1050 Ti or 1650.
Please clarify further, and if possible, please send me links to check the compute capability (CC) of NVIDIA GPU cards.
Thanks.

The 1050 Ti and 1650 have limited memory capacities (~4GB, I believe) and so will only be appropriate for some DL workloads; for that reason we do not recommend these GPUs for deep learning applications in general. Also, laptops are generally not designed to run intensive training workloads 24/7 for weeks on end.

That said, if your training task is reasonably small, these GPUs will certainly run TensorFlow.
Unfortunately, https://developer.nvidia.com/cuda-gpus needs to be updated. In the meantime, a list of compute capabilities is available at https://en.wikipedia.org/wiki/CUDA#GPUs_supported


So, will a GTX 1660 Ti or RTX 2060 suffice for larger workloads?

The 1660 Ti and 2060, with 6GB of memory, will certainly be more flexible in addressing DL workloads than the 4GB 1050 Ti/1650. As points of reference, professional-grade, server-class accelerators generally pack 16-32GB of memory, while high-end desktop parts like the 2080 or 1080 Ti provide 11-12GB. Memory requirements are highly model-dependent. You will want to look at typical model sizes in your area of interest (or look at what hardware platforms the reference models of interest to you have been trained on).
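As a back-of-the-envelope illustration of why memory requirements are model-dependent: training with an optimizer like Adam keeps roughly four float32 copies of every parameter in GPU memory (weights, gradients, and two moment buffers), before even counting activations. A rough sketch (the helper name and the 4-copy assumption are illustrative, not an exact accounting):

```python
def rough_training_memory_gb(num_params, bytes_per_value=4, copies=4):
    """Very rough lower bound on training memory: weights + gradients
    + two Adam moment buffers, all float32. Ignores activations,
    batch size, and framework overhead, which often dominate."""
    return num_params * bytes_per_value * copies / 1024**3

# A 25M-parameter model (roughly ResNet-50 scale) needs on the order of:
print(round(rough_training_memory_gb(25_000_000), 2))  # prints 0.37 (GB)
```

The activation memory that this ignores grows with batch size, which is why a model whose weights fit easily in 4GB can still run out of memory during training.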

Hello

I am trying to run a simple DL model on a GeForce GTX 1650.

Is there a tutorial to achieve this?

Thanks in advance

Hi @hardolfo7,

You shouldn’t need to change your TF Python scripts to start making use of GPUs. Since you are running on a laptop, I assume your GPU may also be used for rendering. In that case you may want to enable the allow_growth option to keep TF from claiming too much of your GPU’s memory by default. See https://www.tensorflow.org/guide/gpu
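In TF 2.x, the allow_growth behavior is enabled per device with `tf.config.experimental.set_memory_growth`. A minimal sketch (it is a no-op if no GPU is visible, and must run before any GPU has been initialized):

```python
import tensorflow as tf

# Ask TF to allocate GPU memory on demand instead of grabbing
# (nearly) all of it up front, leaving room for the display/renderer.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```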

The TensorFlow pip packages for TF 2.1+ and 1.15 come with GPU support built in. If, however, you are running TF 2.0 or an older 1.x release, you will want to install the tensorflow-gpu package instead.
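In concrete terms (the version pins below are only examples matching the releases mentioned above):

```shell
# TF 2.1+ (and 1.15): a single package, GPU support included
pip install tensorflow

# TF 2.0 or an older 1.x release: GPU support lives in a separate package
pip install "tensorflow-gpu==2.0.*"
```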

In order for TF to make use of your GPU you will also need to install the CUDA toolkit and the cuDNN library. The versions you need depend on your TF version. Here are version lists for Linux and Windows packages.

If running Docker containers is an option, you can simplify the installation process by using a TensorFlow image from NVIDIA’s GPU Cloud registry. These provide TF prepackaged with the matching cuDNN and CUDA toolkit.
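For example, with the NVIDIA Container Toolkit installed, pulling and running an NGC TensorFlow image looks roughly like this (the image tag is only an example; check the NGC catalog for current tags):

```shell
# Start an interactive TF container with all GPUs exposed to it.
# The CUDA toolkit and cuDNN ship inside the image, so only the
# NVIDIA driver needs to be installed on the host.
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:21.02-tf2-py3
```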

I made an env following the instructions in this video.

Works perfectly, Thanks

I have a laptop with an Nvidia GeForce GTX 1050 Ti, and I couldn’t get TensorFlow to work with my GPU. After several tries, I achieved it. How? Well, first you need to create a new environment with a Python version equal to 3.6; next, you need to install the tensorflow-gpu 1.19 version. And I recommend that you follow the instructions contained in

Setting up TensorFlow (GPU) on Windows 10 | by Peter Jang | Towards Data Science

but with the versions that I mentioned at first.
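The steps above can be sketched with conda as follows (the env name is hypothetical, and the tensorflow-gpu pin shown is the final 1.x line; substitute the exact version the guide above specifies):

```shell
# Create an isolated environment with Python 3.6 (env name is an example)
conda create -n tf-gpu python=3.6
conda activate tf-gpu

# Install a GPU-enabled 1.x release; use the version from the guide above
pip install "tensorflow-gpu==1.15.*"
```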
I attached an image where we can see the correct behavior with this NVIDIA graphics card.