TensorRT Python Runtime

Description

Python Runtime Error

Environment

TensorRT Version: tensorrt/7.0.0.11-cuda-10.2
GPU Type: Tesla V100-SXM2
Nvidia Driver Version: NVIDIA-SMI 450.102.04
CUDA Version: cuda/10.2.89
CUDNN Version: cudnn/8.0.0
Operating System + Version: NAME="SuSE" VERSION="15.0"
Python Version (if applicable): Python 3.8.5
TensorFlow Version (if applicable): tensorflow 2.2.0 gpu_py38hb782248_0
PyTorch Version (if applicable): torch 1.7.1
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Running :
import tensorrt
print(tensorrt.__version__)
assert tensorrt.Builder(tensorrt.Logger())
results in the following error:
7.2.2.3
*** Error in `python': double free or corruption (!prev): 0x0000555558dc04d0 ***
Aborted
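A crash like this at import or exit time is often a mismatch between the CUDA version the TensorRT wheel was built against and the CUDA toolkit on the machine. A minimal sanity-check sketch (the version mapping below is an assumption for illustration; NVIDIA's support matrix is authoritative):

```python
# Sketch: check that the installed TensorRT version is known to pair with
# the local CUDA toolkit before constructing a Builder. The mapping is
# illustrative only; consult the TensorRT support matrix for real pairs.
TRT_TO_CUDA = {
    "7.0": {"10.2"},                   # TRT 7.0 wheels target CUDA 10.2
    "7.2": {"10.2", "11.0", "11.1"},   # assumed pairs for TRT 7.2.x
}

def compatible(trt_version: str, cuda_version: str) -> bool:
    """True if the TensorRT major.minor is known to pair with CUDA major.minor."""
    trt_mm = ".".join(trt_version.split(".")[:2])
    cuda_mm = ".".join(cuda_version.split(".")[:2])
    return cuda_mm in TRT_TO_CUDA.get(trt_mm, set())

print(compatible("7.2.2.3", "10.2.89"))  # True: 10.2 is in the 7.2 set
print(compatible("7.2.3.4", "11.2"))     # False: 7.2 wheels predate CUDA 11.2
```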

Hi @alinutzal,

Could you please provide more details about the problem, along with the scripts/commands and the model needed to reproduce the issue?

Thank you.

I also get the following:

import tensorrt as trt
assert trt.Builder(trt.Logger())
[TensorRT] ERROR: CUDA initialization failure with error 222. Please check your CUDA installation: CUDA Installation Guide for Linux
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: pybind11::init(): factory function returned nullptr
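For what it's worth, CUDA error 222 appears to be cudaErrorUnsupportedPtxVersion, which usually means the installed driver is older than the CUDA toolkit the library was built for. The driver and runtime versions can be queried directly via ctypes (a sketch; it degrades gracefully when libcudart is absent):

```python
import ctypes
from ctypes.util import find_library

def cuda_versions():
    """Query the CUDA runtime library for driver and runtime versions.

    Returns (driver, runtime) as ints like 10020 for CUDA 10.2, using the
    real CUDA C APIs cudaDriverGetVersion / cudaRuntimeGetVersion, or None
    if libcudart is not available on this machine.
    """
    path = find_library("cudart")
    if path is None:
        return None
    lib = ctypes.CDLL(path)
    drv, rt = ctypes.c_int(0), ctypes.c_int(0)
    lib.cudaDriverGetVersion(ctypes.byref(drv))
    lib.cudaRuntimeGetVersion(ctypes.byref(rt))
    return drv.value, rt.value

info = cuda_versions()
if info is None:
    print("libcudart not found")
else:
    drv, rt = info
    print(f"driver supports CUDA {drv}, runtime is {rt}")
    if drv < rt:
        print("driver older than runtime: a likely cause of error 222")
```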

Hi @alinutzal,

Are you still facing this issue?

Hello,
I’m facing the same problem.

Steps to reproduce:
import tensorrt
print(tensorrt.__version__)
7.2.3.4
exit()
double free or corruption (!prev)
Aborted (core dumped)

My setup is:
Ubuntu 18.04
Python 3.6.9
GeForce GTX 1080
Driver Version: 460.32.03
CUDA 11.2
TensorRT was installed by pip based on the following link instructions:
TensorRT PIP install
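As a side note, the driver requirement itself looks satisfied in this setup: CUDA 11.2 needs a Linux driver of roughly 460.27.03 or newer (the exact minimum is in the CUDA release notes; the value below is an assumption), so 460.32.03 should pass. A quick comparison sketch:

```python
# Sketch: compare an installed NVIDIA driver version against a minimum
# requirement. 460.27.03 as the CUDA 11.2 minimum is an assumption here;
# the CUDA Toolkit release notes are authoritative.
def parse(v: str):
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(x) for x in v.split("."))

def driver_ok(installed: str, minimum: str) -> bool:
    return parse(installed) >= parse(minimum)

print(driver_ok("460.32.03", "460.27.03"))   # True: meets the assumed minimum
print(driver_ok("450.102.04", "460.27.03"))  # False: too old for CUDA 11.2
```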

We were only able to get it to work with CUDA 10.2 and TensorRT 7.0.0.11.

[TensorRT] ERROR: CUDA initialization failure with error 222. Please check your CUDA installation:  http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Traceback (most recent call last):
  File "tensorrt_convert2trt.py", line 54, in <module>
    main(args)
  File "tensorrt_convert2trt.py", line 35, in main
    model_trt = torch2trt(model, [input_], max_workspace_size=1 << 30)
  File "/home/gioipv/workspaces/miniconda3/envs/eye_state/lib/python3.7/site-packages/torch2trt-0.3.0-py3.7.egg/torch2trt/torch2trt.py", line 519, in torch2trt
    builder = trt.Builder(logger)
TypeError: pybind11::init(): factory function returned nullptr

Hello, I am a newbie.
In my case, I think the problem was that my user was not in the docker group (TensorRT runs in a Docker container on NVIDIA's stack).
I am on AWS and the GPU is a Tesla T4.
After adding my user account to the docker group with sudo usermod -aG docker $USER, I still have to activate the group change with newgrp docker every time before using TensorRT. Rebooting the AWS instance would probably make the change stick, but this is my server and I can't reboot it right now, so for the moment this is my fix.
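The group check described above can be scripted so that the usermod/newgrp step is only suggested when it is actually needed (a sketch; the commands that change the system are left commented out):

```shell
# Check whether the current user is already in the docker group.
if id -nG | grep -qw docker; then
    echo "docker group: OK"
else
    echo "docker group: missing"
    # sudo usermod -aG docker "$USER"   # add the current user to the group
    # newgrp docker                     # apply the change without logging out
fi
```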