Error: Could not get cuda device count (cudaErrorInitializationError) Failed to parse group property

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla T4 GPU
• DeepStream Version: 6.4
• TensorRT Version: 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only): 535.104.12

I am using an NVIDIA Tesla T4 GPU; which torch version is compatible with it? When I run ‘nvidia-smi’ it shows the correct output, but when I try to print the GPU details within a Python script using ‘torch’, it says:
"UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at …/c10/cuda/CUDAFunctions.cpp:109.)
return torch._C._cuda_getDeviceCount() > 0

[2024-02-22 17:25:55,692: WARNING/ForkPoolWorker-2] CUDA is not available. Running on CPU."

I also have another machine with an NVIDIA Tesla P100; which torch version is compatible with it?
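
In case it helps, this is essentially the check I am running (a minimal sketch, standard torch API calls only):

```python
# Minimal CUDA diagnostic sketch: compares what the installed wheel was
# built against with what the runtime actually sees.
import torch

print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)         # CUDA version the wheel was compiled for
print("CUDA available: ", torch.cuda.is_available())  # False here despite nvidia-smi working

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        # Name and compute capability of each visible GPU, e.g. (7, 5) on a T4
        print(i, torch.cuda.get_device_name(i), torch.cuda.get_device_capability(i))
```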

There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one. Thanks

The compatibility of PyTorch with the NVIDIA Tesla T4 and Tesla P100 depends on the CUDA version supported by the NVIDIA driver installed on your system, not on the GPUs themselves: the Tesla P100 has compute capability 6.0 (Pascal), the Tesla T4 has 7.5 (Turing), and current PyTorch binaries support compute capability 6.0 and higher, so both cards work with all recent PyTorch releases. The key is to install a PyTorch build whose CUDA version is no newer than the CUDA version your driver supports, which ‘nvidia-smi’ reports in its header.
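
As a concrete example (the version numbers are illustrative; check the install selector on pytorch.org for the current matrix): driver 535.104.12 reports CUDA 12.2 in ‘nvidia-smi’, so a CUDA 12.1 build of PyTorch will initialize on both the T4 and the P100:

```bash
# Pick the CUDA variant that is <= the "CUDA Version" shown by nvidia-smi.
pip install torch --index-url https://download.pytorch.org/whl/cu121
# For an older CUDA 11.8 driver stack:
pip install torch --index-url https://download.pytorch.org/whl/cu118
```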

Alternatively, you can use the prebuilt PyTorch container from NVIDIA NGC (PyTorch | NVIDIA NGC), which bundles a matching CUDA toolkit and PyTorch so you do not need to install them yourself. Note that the host still needs the NVIDIA GPU driver and the NVIDIA Container Toolkit.
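
For instance (the image tag is illustrative; see the NGC catalog for current releases), something like this starts a container with a matching PyTorch/CUDA pair and verifies that the GPU is visible, assuming the NVIDIA Container Toolkit is installed on the host:

```bash
# Run the NGC PyTorch container and check CUDA from inside it
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:24.01-py3 \
    python -c "import torch; print(torch.cuda.is_available())"
```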

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.