Inference error while using TensorRT engine on Jetson Nano

Hello,
I am getting the following error while running inference on a TensorRT engine.
The engine file is for the object detection model ‘ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8’.

Error:
[TensorRT] ERROR: 2: [pluginV2DynamicExtRunner.cpp::execute::115] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed.)

Environment

TensorRT version: 8.0.1.6
ONNX version: 1.10.2
JetPack version: 4.6

Relevant Files

I have uploaded the ONNX file and the TRT engine file for the model, along with the script I am using for inference.
To convert the ONNX model to a TRT engine, I followed the code given in this post: https://developer.nvidia.com/blog/speeding-up-deep-learning-inference-using-tensorflow-onnx-and-tensorrt/
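
For reference, the build step I use is essentially the pattern from that post; a simplified sketch (the workspace size is a guess that fits the Nano, and the file names match the attachments below):

```python
# Simplified ONNX -> TensorRT build step (TensorRT 8.x Python API),
# following the pattern from the blog post linked above.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the ONNX model and surface any parser errors.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX file")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; the Nano has limited memory

    # Build and serialize the engine to disk.
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

build_engine("model.onnx", "model.trt")
```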

model.onnx (10.4 MB)
inference.py (5.2 KB)
model.trt (12.4 MB)
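
For context, here is a minimal sketch of the inference flow (simplified, not the exact contents of inference.py; the placeholder input and buffer handling are illustrative). Since the error log points at a plugin layer, I call trt.init_libnvinfer_plugins before deserializing the engine so TensorRT’s built-in plugins are registered:

```python
# Minimal TensorRT 8.x inference sketch using pycuda.
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates the CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Register TensorRT's built-in plugins before deserializing the engine;
# the failing layer in the log is a plugin (pluginV2DynamicExtRunner).
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding.
stream = cuda.Stream()
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Assuming binding 0 is the input: real code copies the preprocessed image here.
host_bufs[0][:] = 0
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
stream.synchronize()
```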

Please have a look.
Thank you.

Hi,
This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we recommend raising it on the respective platform via the link below.

Thanks!


I’m also having the same issue. Is there any useful solution for this?

You can follow up on this question; I opened a new issue there.

Good luck