ERROR: CUDA initialization failure with error 222 with C++

Description

I installed TensorRT and built the MNIST sample TensorRT-7.2.3.4/samples/sampleMNIST/sampleMNIST.cpp with GNU Make 4.1:

make TRT_LIB_DIR=[absolute-path-to]/TensorRT-7.2.3.4/lib CUDA_INSTALL_DIR=/usr/local/cuda CUDNN_INSTALL_DIR=/usr/local/cuda

and ran its executable:

TensorRT-7.2.3.4/bin/sample_mnist

This error occurred:

CUDA initialization failure with error 222. Please check your CUDA installation: CUDA Installation Guide for Linux

I have another project compiled and run with PyTorch C++, and it works fine.

Here are the trtexec logs:

&&&& RUNNING TensorRT.trtexec # ./trtexec --verbose --onnx=resnet50.onnx
[05/19/2021-04:14:51] [I] === Model Options ===
[05/19/2021-04:14:51] [I] Format: ONNX
[05/19/2021-04:14:51] [I] Model: resnet50.onnx
[05/19/2021-04:14:51] [I] Output:
[05/19/2021-04:14:51] [I] === Build Options ===
[05/19/2021-04:14:51] [I] Max batch: explicit
[05/19/2021-04:14:51] [I] Workspace: 16 MiB
[05/19/2021-04:14:51] [I] minTiming: 1
[05/19/2021-04:14:51] [I] avgTiming: 8
[05/19/2021-04:14:51] [I] Precision: FP32
[05/19/2021-04:14:51] [I] Calibration:
[05/19/2021-04:14:51] [I] Refit: Disabled
[05/19/2021-04:14:51] [I] Safe mode: Disabled
[05/19/2021-04:14:51] [I] Save engine:
[05/19/2021-04:14:51] [I] Load engine:
[05/19/2021-04:14:51] [I] Builder Cache: Enabled
[05/19/2021-04:14:51] [I] NVTX verbosity: 0
[05/19/2021-04:14:51] [I] Tactic sources: Using default tactic sources
[05/19/2021-04:14:51] [I] Input(s)s format: fp32:CHW
[05/19/2021-04:14:51] [I] Output(s)s format: fp32:CHW
[05/19/2021-04:14:51] [I] Input build shapes: model
[05/19/2021-04:14:51] [I] Input calibration shapes: model
[05/19/2021-04:14:51] [I] === System Options ===
[05/19/2021-04:14:51] [I] Device: 0
[05/19/2021-04:14:51] [I] DLACore:
[05/19/2021-04:14:51] [I] Plugins:
[05/19/2021-04:14:51] [I] === Inference Options ===
[05/19/2021-04:14:51] [I] Batch: Explicit
[05/19/2021-04:14:51] [I] Input inference shapes: model
[05/19/2021-04:14:51] [I] Iterations: 10
[05/19/2021-04:14:51] [I] Duration: 3s (+ 200ms warm up)
[05/19/2021-04:14:51] [I] Sleep time: 0ms
[05/19/2021-04:14:51] [I] Streams: 1
[05/19/2021-04:14:51] [I] ExposeDMA: Disabled
[05/19/2021-04:14:51] [I] Data transfers: Enabled
[05/19/2021-04:14:51] [I] Spin-wait: Disabled
[05/19/2021-04:14:51] [I] Multithreading: Disabled
[05/19/2021-04:14:51] [I] CUDA Graph: Disabled
[05/19/2021-04:14:51] [I] Separate profiling: Disabled
[05/19/2021-04:14:51] [I] Skip inference: Disabled
[05/19/2021-04:14:51] [I] Inputs:
[05/19/2021-04:14:51] [I] === Reporting Options ===
[05/19/2021-04:14:51] [I] Verbose: Enabled
[05/19/2021-04:14:51] [I] Averages: 10 inferences
[05/19/2021-04:14:51] [I] Percentile: 99
[05/19/2021-04:14:51] [I] Dump refittable layers:Disabled
[05/19/2021-04:14:51] [I] Dump output: Disabled
[05/19/2021-04:14:51] [I] Profile: Disabled
[05/19/2021-04:14:51] [I] Export timing to JSON file:
[05/19/2021-04:14:51] [I] Export output to JSON file:
[05/19/2021-04:14:51] [I] Export profile to JSON file:
[05/19/2021-04:14:51] [I]
[05/19/2021-04:14:51] [I] === Device Information ===
[05/19/2021-04:14:51] [I] Selected Device: Tesla T4
[05/19/2021-04:14:51] [I] Compute Capability: 7.5
[05/19/2021-04:14:51] [I] SMs: 40
[05/19/2021-04:14:51] [I] Compute Clock Rate: 1.59 GHz
[05/19/2021-04:14:51] [I] Device Global Memory: 15109 MiB
[05/19/2021-04:14:51] [I] Shared Memory per SM: 64 KiB
[05/19/2021-04:14:51] [I] Memory Bus Width: 256 bits (ECC enabled)
[05/19/2021-04:14:51] [I] Memory Clock Rate: 5.001 GHz
[05/19/2021-04:14:51] [I]
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::Proposal version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::Split version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[05/19/2021-04:14:51] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[05/19/2021-04:14:51] [E] [TRT] CUDA initialization failure with error 222. Please check your CUDA installation: CUDA Installation Guide for Linux
[05/19/2021-04:14:51] [E] Builder creation failed
[05/19/2021-04:14:51] [E] Engine creation failed
[05/19/2021-04:14:51] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --verbose --onnx=resnet50.onnx

LD_LIBRARY_PATH:

/usr/local/cuda/lib64:[absolute-path-to]/TensorRT-7.2.3.4/lib

Environment

TensorRT Version: TensorRT-7.2.3.4
GPU Type: NVIDIA Tesla T4
Nvidia Driver Version: 450.119.03
CUDA Version: 11.1.1
CUDNN Version: 8.1.1
Operating System + Version: Ubuntu 18.04 LTS x86_64
Baremetal or Container (if container which image + tag): Baremetal
PyTorch C++: 1.8.1 CUDA 11.1
g++: 7.5.0
cmake: 3.10.2

Steps To Reproduce

I installed CUDA Toolkit 11.1.1 with Runfile:

sudo sh cuda_11.1.1_455.32.00_linux.run --silent --toolkit --toolkitpath=/usr/local/cuda-11.1.1 --override
sudo ln -s /usr/local/cuda-11.1.1 /usr/local/cuda

CUDNN 8.1.1:

tar -zvxf cudnn-11.2-linux-x64-v8.1.1.33.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda-11.1.1/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-11.1.1/lib64
sudo chmod a+r /usr/local/cuda-11.1.1/include/cudnn*.h /usr/local/cuda-11.1.1/lib64/libcudnn*

TensorRT-7.2.3.4:

tar -zvxf TensorRT-7.2.3.4.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.1.tar.gz
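For reference, the installed toolkit and driver versions can be confirmed with something like the following (paths assume the runfile install above):

```shell
# Verify the toolkit version the /usr/local/cuda symlink points at
/usr/local/cuda/bin/nvcc --version

# Verify the installed driver version; it must support the toolkit's
# CUDA version (CUDA 11.1 requires driver 455.23 or newer on Linux)
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```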

Hi @caughtbypolice,

We request you to please make sure CUDA is installed correctly. Can you please check and confirm whether other CUDA applications work fine?

Please check out the TensorRT NGC container to avoid system dependencies.
https://ngc.nvidia.com/containers/nvidia:tensorrt
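As a quick sanity check, assuming a standard runfile install (which places the demo suite under /usr/local/cuda/extras/demo_suite), you can run:

```shell
# deviceQuery exercises the same CUDA runtime initialization path
# that TensorRT uses; it should list the Tesla T4 and report PASS
/usr/local/cuda/extras/demo_suite/deviceQuery
```

If this fails with the same error code, the problem is in the CUDA installation rather than in TensorRT.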

Thank you.

I’m not sure, but I’m still able to train and run inference with PyTorch and PyTorch C++.
https://www.overleaf.com/project/60a33c766967b21bea6950ec/file/60a49d9cb54bfe9ad6510398

Hi @caughtbypolice,

Could you please check GPU utilization and confirm using nvidia-smi? Please make sure free GPU memory is available, then try to run sample_mnist.
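For example, per-GPU memory usage can be queried with:

```shell
# Show used and free memory for each GPU; sample_mnist needs some
# free device memory to initialize
nvidia-smi --query-gpu=index,memory.used,memory.free --format=csv
```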

Thank you.

Here. In the first one I ran PyTorch and PyTorch C++. In the second one, PyTorch and sample_mnist; the sample_mnist process broke right after that.
https://www.overleaf.com/read/fdhkrqbpcsck
Thank you.

I upgraded the driver version from 450 to 460. It’s working now. Thanks. (The runfile was installed with --toolkit only, so the original 450 driver stayed in place; CUDA 11.1 requires driver 455 or newer, which is why initialization failed.)