I am planning to buy a laptop with an Nvidia GeForce GTX 1050 Ti or 1650 GPU for deep learning with tensorflow-gpu, but neither of them is listed in the supported list of CUDA-enabled devices.
Some people in the NVIDIA community say that these cards support CUDA. Can you please tell me whether these laptop cards support tensorflow-gpu or not?
Link to the official list of CUDA enabled devices:
Both the GTX 1050 Ti and GTX 1650 support CUDA, and either is new enough to be supported by TensorFlow. The 1050 Ti has compute capability (CC) 6.1 and the 1650 has CC 7.5; TensorFlow currently requires a minimum CC of 3.5. If you are planning to run training (rather than just inference), you will also want to make sure the GPU memory (frame buffer) is large enough to support your models of interest.
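If it is useful later, once TensorFlow is installed you can read the compute capability it reports for a GPU directly from Python (this sketch assumes TF 2.4 or newer, where get_device_details is available):

```python
# Print the name and compute capability of each GPU TensorFlow can see.
# get_device_details requires TF 2.4+; compute capability is a (major, minor) tuple.
import tensorflow as tf

for gpu in tf.config.list_physical_devices("GPU"):
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name, details.get("device_name"), details.get("compute_capability"))
```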
Thanks for the reply and clarification. Actually, I asked this question of an NVIDIA customer support representative and he said that these cards don't support tensorflow-gpu. So, according to you, I can install tensorflow-gpu on a laptop with a GTX 1050 Ti or 1650.
Please clarify further and, if possible, send me links where I can check the compute capability (CC) of NVIDIA GPU cards.
Thanks.
The 1050 Ti and 1650 have limited memory capacities (~4 GB, I believe) and so will only be appropriate for some DL workloads; we therefore do not recommend these GPUs for deep learning applications in general. Also, laptops are not generally designed to run intensive training workloads 24/7 for weeks on end.
That said, if your training task is reasonably small, these GPUs will certainly run TensorFlow.
Unfortunately, CUDA GPUs - Compute Capability | NVIDIA Developer needs to be updated. In the meantime, a list of compute capabilities is available at CUDA - Wikipedia.
So, will a GTX 1660 Ti or RTX 2060 suffice for larger workloads?
The 1660 Ti and 2060, with 6 GB of memory, will certainly be more flexible in addressing DL workloads than the 4 GB 1050 Ti/1650. As points of reference, professional-grade, server-class accelerators generally pack 16-32 GB of memory, while high-end desktop parts, like the 2080 Ti or 1080 Ti, provide 11-12 GB. Memory requirements are highly model-dependent, so you will want to look at typical model sizes in your area of interest (or look at what hardware platforms the reference models of interest to you have been trained on).
Hello
I am trying to run a simple DL model on a GeForce GTX 1650.
Is there a tutorial to achieve this?
Thanks in advance
Hi @hardolfo7,
You shouldn't need to change your TF Python scripts to start making use of GPUs. Since you are running on a laptop, I assume your GPU may also be used for rendering; in that case you may want to enable the allow_growth option to keep TF from claiming too much of your GPU's memory by default. See Use a GPU | TensorFlow Core.
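A minimal sketch of turning that on with the TF 2.x API (TF 1.x uses the allow_growth flag on ConfigProto instead):

```python
# Ask TensorFlow to grow its GPU memory allocation as needed rather than
# reserving (nearly) all of it at startup. Must run before any GPU is first used.
import tensorflow as tf

for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```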
The tensorflow pip packages for TF 2.1+ and 1.15 come with GPU support built in. If, however, you are running TF 2.0 or an older 1.x release, you will want to install the tensorflow-gpu package instead.
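A quick way to confirm which package and version you ended up with and whether it was built with CUDA support (assuming a TF 2.x install; on 1.x you would check tf.test.is_gpu_available() instead):

```python
# Report the installed TF version, whether the build has CUDA support,
# and which GPUs TensorFlow can see.
import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```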
In order for TF to make use of your GPU you will also need to install the CUDA toolkit and CUDNN library. The versions you need depend on your TF version. Here are version lists for Linux and Windows packages.
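For TF 2.3 and newer you can also read the exact CUDA toolkit and cuDNN versions a wheel was built against from Python, which is a handy cross-check against those tables:

```python
# Print the CUDA/cuDNN versions this TensorFlow build expects (TF 2.3+ only).
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print("CUDA:", info.get("cuda_version"))
print("cuDNN:", info.get("cudnn_version"))
```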
If running Docker containers is an option, you can simplify the installation process by using a TensorFlow image from NVIDIA's GPU Cloud (NGC) registry. These provide TF prepackaged with the latest cuDNN and CUDA toolkit.
I have a laptop with an Nvidia GeForce GTX 1050 Ti and I couldn't get TensorFlow to work with my GPU, so after several tries I achieved it. How? Well, first you need to create a new environment with a Python version equal to 3.6; next, you need to install the tensorflow-gpu 1.19 version. I recommend that you follow the instructions contained in
Setting up TensorFlow (GPU) on Windows 10 | by Peter Jang | Towards Data Science
but with the versions that I mentioned above.
I attached an image where we can see the correct behavior with this NVIDIA graphics card.
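For anyone repeating these steps, this is roughly the check behind that screenshot, using the TF 1.x API that matches the tensorflow-gpu package described above:

```python
# Verify the GPU is visible to TensorFlow 1.x in the new environment.
import tensorflow as tf

print(tf.test.is_gpu_available())   # True if the GPU is usable
print(tf.test.gpu_device_name())    # e.g. '/device:GPU:0'
```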
Hello,
I am attempting to use my NVIDIA GTX 1050 Ti in TensorFlow; however, TensorFlow does not appear to be recognizing my GPU despite my installing the appropriate versions of CUDA and cuDNN. Is there a solution to this issue?
Hi @ibrahimsabri.belimi1997, can you provide details of your environment (e.g., Windows or Linux version, NVIDIA driver version installed, which toolkit/cuDNN version you are using), where you are installing TensorFlow from (NGC or Docker Hub container, pip package, building from source?), along with the output of running nvidia-smi on your system?
I would recommend using the NGC Docker containers if you are not already, as these come with all library dependencies pre-installed.
Does the GTX 1650 have a compute capability of 7.5? In the following post it was mentioned as 6.5.
When I try to run an onnxruntime inference session with TensorrtExecutionProvider, I get the following error.
2024-07-05 15:27:08.422792965 [E:onnxruntime:Default, tensorrt_execution_provider.h:84 log] [2024-07-05 09:57:08 ERROR] IPluginRegistry::getCreator: Error Code 4: API Usage Error (Cannot find plugin: LinearClassifier, version: 1, namespace:.)
2024-07-05 15:27:08.431726903 [E:onnxruntime:Default, tensorrt_execution_provider.h:84 log] [2024-07-05 09:57:08 ERROR] IPluginRegistry::getCreator: Error Code 4: API Usage Error (Cannot find plugin: Normalizer, version: 1, namespace:.)
2024-07-05 15:27:08.440084013 [E:onnxruntime:Default, tensorrt_execution_provider.h:84 log] [2024-07-05 09:57:08 ERROR] IPluginRegistry::getCreator: Error Code 4: API Usage Error (Cannot find plugin: ZipMap, version: 1, namespace:.)
2024-07-05 15:27:08.447611542 [E:onnxruntime:, inference_session.cc:2044 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:2149 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: float_input has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs
TL;DR:
TensorrtExecutionProvider is not able to provide the plugins LinearClassifier, Normalizer & ZipMap. A minimal sketch of how I create the session is included after my specs below.
Code: Onnxruntime Tutorial - Python API docs
My specification:
- OS: Ubuntu 22.04 x86_64
- GPU: Nvidia GeForce GTX 1650
- Driver version: 555.42.06
- Cuda version: 12.5.r12.5
- CuDNN version: 9.2.1
- TensorRT version: 10.2.0 (built from tarball)
- Onnxruntime version: 1.18.1
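For completeness, the minimal sketch of how I create the session (the model path here is a placeholder for my actual file, and the provider order follows the linked tutorial):

```python
# Create an ONNX Runtime session preferring the TensorRT EP, with CUDA/CPU fallback.
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder for my actual model file
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
```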
How do you even run epochs on 4 GB of VRAM? Did you do image detection or not?
For reference: I have a project where I detect garbage using a CNN. The images are 224x224 with 3 RGB channels. My workspace is a GTX 1050 4 GB Mobile with 16 GB of RAM.
Whenever I call model.fit() it always raises ResourceExhaustedError, and I also see my VRAM usage spiking whenever I call model.fit(). Please help!
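For context, a stripped-down sketch of the kind of training call I am running (the dummy data and layer sizes here are placeholders, not my exact model):

```python
# Small CNN on 224x224 RGB images; on my 4 GB GTX 1050 the real version of this
# fit() call raises ResourceExhaustedError, and lowering batch_size is what I am
# experimenting with.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 4  # placeholder number of garbage categories

x = np.random.rand(128, 224, 224, 3).astype("float32")  # stand-in images
y = np.random.randint(0, NUM_CLASSES, size=(128,))       # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x, y, batch_size=8, epochs=1)
```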