Environment
Nvidia card: Jetson AGX Xavier
TensorRT Version: 8.0.1.6
CUDA Version: 10.2.300
CUDNN Version: 8.2.1.32
Operating System + Version: Ubuntu 18.04
Do you have any idea how to solve this problem?
Thanks
Yes, the engine was generated on the same machine with the same TensorRT version.
I used the scripts in the following repository to convert my YOLO model to TRT:
https://github.com/jkjung-avt/tensorrt_demos/tree/master/yolo
and the following repository for object detection:
https://github.com/indra4837/yolov4_trt_ros
The error occurs in the following script:
trt_yolo_v4.py (5.9 KB)
The previous script calls the following script for inference:
yolo_with_plugins.py (12.3 KB)
Dear @SivaRamaKrishnaNV
Do you have any idea about this error?
Could it be a matter of multithreading, i.e. the TRT engine and the inference running in different threads?
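If the engine is created in one thread and inference runs in another, the CUDA context is not automatically current in the second thread; with pycuda (which both linked repos use) the usual fix is to push the context inside the worker thread before inference and pop it afterwards. Below is a minimal, runnable sketch of that pattern: `FakeCudaContext` is a hypothetical stand-in so the structure can be seen without a GPU; on the Jetson you would use the real context from `pycuda.driver.Device(0).make_context()`.

```python
import threading

class FakeCudaContext:
    """Hypothetical stand-in for pycuda.driver.Context, to illustrate the pattern."""
    def __init__(self):
        self._tls = threading.local()  # context currency is per-thread

    def push(self):  # pycuda equivalent: ctx.push()
        self._tls.current = True

    def pop(self):   # pycuda equivalent: ctx.pop()
        self._tls.current = False

    def is_current(self):
        return getattr(self._tls, "current", False)

results = {}

def inference_worker(ctx):
    # The context must be made current *inside* the thread that calls TensorRT.
    ctx.push()
    try:
        results["ctx_current_in_worker"] = ctx.is_current()
        # ... run the TensorRT inference here ...
    finally:
        ctx.pop()  # always pop, or the context stack in this thread leaks

ctx = FakeCudaContext()
t = threading.Thread(target=inference_worker, args=(ctx,))
t.start()
t.join()
```

The key point is that `push()`/`pop()` happen in the worker thread itself, not in the thread that built the engine.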
Dear @smaiah.sarah,
The error indicates an issue with the CUDA context.
Is it possible to check loading the engine with trtexec, to confirm there is no issue with the engine itself?
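For reference, a sketch of that trtexec check: the engine filename and plugin path here are assumptions (the plugin .so is the one built by the jkjung-avt repo), and on JetPack trtexec typically lives under `/usr/src/tensorrt/bin`.

```shell
# trtexec ships with TensorRT; on JetPack it is typically at this path.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
ENGINE=yolov4-416.engine            # hypothetical engine filename; use yours
PLUGIN=./plugins/libyolo_layer.so   # YOLO plugin library from the tensorrt_demos repo

# --plugins preloads the custom-layer library so deserialization can succeed.
CMD="$TRTEXEC --loadEngine=$ENGINE --plugins=$PLUGIN"
echo "$CMD"   # run this command on the Jetson; a clean exit means the engine is fine
```

If trtexec fails without `--plugins` but succeeds with it, the engine itself is fine and the problem is plugin loading in the Python process.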
Dear @smaiah.sarah,
The error in trtexec indicates that the model has unsupported layers which are implemented as plugins.
Are you running it as a ROS node?
I would recommend getting the TRT model preparation/inference code from the repo and testing it on the target, to confirm it works correctly without any issues.
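Since the engine contains plugin layers, the plugin library must be loaded into the process before the engine is deserialized; the tensorrt_demos scripts do this via ctypes. A minimal sketch of that step, assuming the plugin path used by the jkjung-avt repo (the helper name `load_plugin_library` is illustrative):

```python
import ctypes
import os

# Path where the jkjung-avt tensorrt_demos build places the plugin; adjust if needed.
PLUGIN_LIB = "./plugins/libyolo_layer.so"

def load_plugin_library(path=PLUGIN_LIB):
    """Load the YOLO plugin .so so TensorRT can deserialize engines that use it.

    Must be called before deserializing the engine with the TensorRT runtime.
    """
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"Plugin library not found: {path} -- build it first "
            "(see the plugins/ directory of the tensorrt_demos repo)")
    return ctypes.CDLL(path)
```

If the .so is missing or loaded after deserialization, the runtime cannot resolve the custom YOLO layers and engine loading fails, which matches the trtexec result above.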
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.