run sample_mnist ERROR: Cuda initialization failure with error 38.

Hi,

I got the error “Cuda initialization failure with error 38” when I tried to run sample_mnist using DLA. What does this error mean? Do I need a GPU installed in my PC in order to run this sample with TensorRT?

Thanks!

ubuntu 18.04
CUDA 10.0 toolkit
TensorRT 5.0

/usr/src/tensorrt/bin$ dpkg -l | grep TensorRT
ii graphsurgeon-tf 5.0.0-1+cuda10.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.0.0-1+cuda10.0 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.0.0-1+cuda10.0 amd64 TensorRT samples and documentation
ii libnvinfer5 5.0.0-1+cuda10.0 amd64 TensorRT runtime libraries
ii python-libnvinfer 5.0.0-1+cuda10.0 amd64 Python bindings for TensorRT
ii python-libnvinfer-dev 5.0.0-1+cuda10.0 amd64 Python development package for TensorRT
ii tensorrt 5.0.0.10-1+cuda10.0 amd64 Meta package of TensorRT
ii uff-converter-tf 5.0.0-1+cuda10.0 amd64 UFF converter for TensorRT package

/usr/src/tensorrt/bin$ sudo ./sample_mnist --useDLA=1
Building and running a GPU inference engine for MNIST
ERROR: Cuda initialization failure with error 38. Please check cuda installation: Installation Guide Linux :: CUDA Toolkit Documentation.

Xiaocheng

Hello,

CUDA Runtime API error 38 means no CUDA-capable device is detected.

For a full list of CUDA-supported GPUs, please see CUDA GPUs - Compute Capability | NVIDIA Developer
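If it helps, here is a hypothetical diagnostic session you can run to see whether the system can detect a GPU at all; each step may legitimately find nothing on a machine without an NVIDIA card, which is exactly the condition error 38 reports:

```shell
# Is an NVIDIA GPU present on the PCI bus?
lspci 2>/dev/null | grep -i nvidia || echo "no NVIDIA device on the PCI bus"
# Is the NVIDIA driver installed and able to talk to the card?
nvidia-smi 2>/dev/null || echo "NVIDIA driver not loaded or not installed"
# Have the CUDA device nodes been created?
ls /dev/nvidia* 2>/dev/null || echo "no CUDA device nodes present"
```

If all three checks come back empty, the CUDA runtime has nothing to initialize, and any TensorRT sample will fail the same way.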

Thanks for the quick reply. If I have a CUDA-enabled GPU (such as a GeForce RTX 20-series card) installed in my PC, will I be able to run the TensorRT sample_mnist using DLA?

Yes, assuming your hardware and software meet the TensorRT requirements:

Please see https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#gettingstarted

Thank you!

One thing to be aware of: on Linux, your user needs to belong to the video group to “see” the GPU.
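A quick, non-destructive way to check this (a sketch; the exact group name can vary by distribution, but video is the usual one on Ubuntu):

```shell
# List the groups your user belongs to; "video" should appear among them:
id -nG
# If it is missing, add yourself to the group (takes effect only after
# logging out and back in):
#   sudo usermod -aG video "$USER"
```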