RuntimeError: cannot get YoloLayer_TRT plugin creator

I’m trying to run the YOLOv4 demo (Demo #5) from jkjung-avt’s tensorrt_demos GitHub repo on AWS EC2.

I created an EC2 VM with an NVIDIA GPU (AMI: Amazon Linux 2 AMI with NVIDIA TESLA GPU Driver),

which reports: NVIDIA-SMI 450.119.01, Driver Version: 450.119.01, CUDA Version: 11.0.

On this EC2 instance I pulled the official TensorRT container and started a shell in it with:

sudo docker run --gpus all -it -v /home/ec2-user/player-detection:/home nvcr.io/nvidia/tensorrt:20.02-py3 bash

I did the following steps:

  1. Ran python3 -m pip install --upgrade setuptools pip && python3 -m pip install nvidia-pyindex && pip install nvidia-tensorrt.
  2. Inside the yolo/ folder, I ran:
    pip3 install -r requirements.txt.
  3. pip3 install onnx==1.9.0.
  4. Inside the plugins/ folder, I ran make.
  5. Inside the yolo/ folder, I ran ./download_yolo.sh && python3 yolo_to_onnx.py -m yolov4 && python3 onnx_to_tensorrt.py -m yolov4.

I got the following error for the python3 onnx_to_tensorrt.py -m yolov4 command:

 "RuntimeError: cannot get YoloLayer_TRT plugin creator"

From reading onnx to trt failed.. "cannot get YoloLayer_TRT plugin creator" · Issue #476 · jkjung-avt/tensorrt_demos · GitHub, it seems that the problem is related to dynamic libraries.

I checked which libraries libyolo_layer.so links against, and got:

$ ldd libyolo_layer.so 
	linux-vdso.so.1 (0x00007fff142a4000)
	libnvinfer.so.7 => /usr/lib/x86_64-linux-gnu/libnvinfer.so.7 (0x00007f9673734000)
	libcudart.so.11.0 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudart.so.11.0 (0x00007f96734af000)
	libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f9673126000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f9672f0e000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9672b1d000)
	libcudnn.so.8 => /usr/lib/x86_64-linux-gnu/libcudnn.so.8 (0x00007f96728f4000)
	libmyelin.so.1 => /usr/lib/x86_64-linux-gnu/libmyelin.so.1 (0x00007f9672074000)
	libnvrtc.so.11.1 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnvrtc.so.11.1 (0x00007f966feac000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f966fca4000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f966faa0000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f966f702000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f9699135000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f966f4e3000)
	libcublas.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcublas.so.11 (0x00007f9668008000)
	libcublasLt.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcublasLt.so.11 (0x00007f965a23e000)

It seems that I’m missing some, and when I printed all the registered plugins, I didn’t see YoloLayer_TRT.
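This is roughly how I printed them (a quick sketch, assuming the TensorRT Python bindings inside the container):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(TRT_LOGGER, '')

    # Dump every plugin creator the registry currently knows about;
    # 'YoloLayer_TRT' never shows up in this list for me.
    for c in trt.get_plugin_registry().plugin_creator_list:
        print(c.name, c.plugin_version)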

Any idea how to solve it?

TensorRT’s built-in plugins do not include YoloLayer_TRT:
TensorRT/plugin at main · NVIDIA/TensorRT (github.com)
The author developed the custom “yolo_layer” plugin, which lives under tensorrt_demos/plugins at master · jkjung-avt/tensorrt_demos (github.com).
You can read through the whole README
jkjung-avt/tensorrt_demos: TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet (github.com)
or reach out to the author.
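Note that the creator only appears after the shared library built under plugins/ has actually been loaded into the process, because the plugin registers itself from inside that library. Here is a rough check; the path is an assumption, adjust it to wherever your make step put libyolo_layer.so:

    import ctypes
    import tensorrt as trt

    # Load the custom plugin library first so its registration code runs.
    # The path below is a guess; point it at your build output.
    ctypes.CDLL('./plugins/libyolo_layer.so')

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(TRT_LOGGER, '')

    creator = trt.get_plugin_registry().get_plugin_creator('YoloLayer_TRT', '1')
    print('found YoloLayer_TRT:', creator is not None)
    # If the CDLL call itself fails (undefined CUDA/TensorRT symbols), the problem is a
    # build/runtime version mismatch rather than a missing plugin.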

Thanks, but maybe it’s GPU related? I know it works for others (e.g. on an RTX 2070S). I’m trying to understand why it works in some environments and not in others.

It’s not GPU related.
