Hi,
I got the error “Cuda initialization failure with error 38” when I tried to run sample_mnist using DLA. What does this error mean? Do I need a GPU installed in my PC in order to run this TensorRT sample?
Thanks!
ubuntu 18.04
CUDA 10.0 toolkit
TensorRT 5.0
/usr/src/tensorrt/bin$ dpkg -l | grep TensorRT
ii graphsurgeon-tf 5.0.0-1+cuda10.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.0.0-1+cuda10.0 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.0.0-1+cuda10.0 amd64 TensorRT samples and documentation
ii libnvinfer5 5.0.0-1+cuda10.0 amd64 TensorRT runtime libraries
ii python-libnvinfer 5.0.0-1+cuda10.0 amd64 Python bindings for TensorRT
ii python-libnvinfer-dev 5.0.0-1+cuda10.0 amd64 Python development package for TensorRT
ii tensorrt 5.0.0.10-1+cuda10.0 amd64 Meta package of TensorRT
ii uff-converter-tf 5.0.0-1+cuda10.0 amd64 UFF converter for TensorRT package
/usr/src/tensorrt/bin$ sudo ./sample_mnist --useDLA=1
Building and running a GPU inference engine for MNIST
ERROR: Cuda initialization failure with error 38. Please check cuda installation: Installation Guide Linux :: CUDA Toolkit Documentation.
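If I understand correctly, in CUDA 10.0 error code 38 corresponds to cudaErrorNoDevice, i.e. the runtime could not find any CUDA-capable GPU. A minimal sketch to check whether the CUDA runtime sees a device (this uses ctypes to load libcudart; the library names tried are assumptions and may differ on other installs):

```python
import ctypes

def cuda_device_count():
    """Query the CUDA runtime for visible GPUs via libcudart.

    Returns a (count, status_message) tuple. Handles the cases where
    libcudart is missing or no CUDA-capable device is present.
    """
    # Candidate library names (assumed; adjust for your CUDA version).
    for name in ("libcudart.so", "libcudart.so.10.0"):
        try:
            lib = ctypes.CDLL(name)
            break
        except OSError:
            continue
    else:
        return 0, "libcudart not found"

    count = ctypes.c_int(0)
    err = lib.cudaGetDeviceCount(ctypes.byref(count))
    if err == 38:  # cudaErrorNoDevice in CUDA 10.0
        return 0, "no CUDA-capable device detected (error 38)"
    if err != 0:
        return 0, "cudaGetDeviceCount returned error %d" % err
    return count.value, "ok"

if __name__ == "__main__":
    n, msg = cuda_device_count()
    print("devices:", n, "-", msg)
```

If this reports no device, the same failure would be expected from any TensorRT sample, since building an engine requires a working CUDA context even when targeting DLA.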
Xiaocheng