Error Code 10: Internal Error (Could not find any implementation for node PWN(/model.0/act/Sigmoid).)

Problem Statement :

I am trying to convert a YOLOv7 ONNX model [yolov7-384x640-b1-silu.onnx - Google Drive] to a TensorRT engine using tensorrt.Builder with int8 calibration. I am doing this on the two machines given below.

RTX3090 :

  • trt : 8.6.1
  • CUDA : 11.4

Jetson Orin Nano :

  • trt : 8.5.2-1+cuda11.4

Here are the steps that I am following.

Approach 1:

  1. Convert the official weights file to ONNX using the export script from [yolov7/ at main · WongKinYiu/yolov7 · GitHub]. I am running it with the following arguments.

    python --weights --grid --simplify --imgsz 384 640 --batch-size 1

  2. Then I am running the following script from [TensorRT-For-YOLO-Series/ at main · Linaom1214/TensorRT-For-YOLO-Series · GitHub] to do the TensorRT conversion with int8 calibration.

    python -o yolov7.onnx -e yolov7.trt --end2end -p int8 --calib_input /path/to/calib/image/dir/
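One thing worth double-checking with --calib_input is that the calibration images are preprocessed to the exact network input shape (384x640), the same way inference frames are. A small sketch of the letterbox geometry a YOLOv7-style preprocessor would apply (plain Python; the function name is mine, not from the linked script):

```python
def letterbox_geometry(src_h, src_w, dst_h=384, dst_w=640):
    """Compute the resize and padding a YOLOv7-style letterbox would apply."""
    r = min(dst_h / src_h, dst_w / src_w)        # scale preserving aspect ratio
    new_h, new_w = round(src_h * r), round(src_w * r)
    pad_h, pad_w = dst_h - new_h, dst_w - new_w  # leftover pixels become padding
    return new_h, new_w, pad_h // 2, pad_w // 2  # pad split evenly top/bottom, left/right

# A 720x1280 frame scales by 0.5 to 360x640, leaving 24 px of vertical padding.
print(letterbox_geometry(720, 1280))  # (360, 640, 12, 0)
```

If the calibration batches do not match the engine's input dimensions, calibration can fail or silently produce poor scales.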


Approach 2:

  1. Convert to yolov7.onnx using the method mentioned above.

  2. Convert yolov7.onnx to yolov7.engine using trtexec WITHOUT int8 calibration.

    /usr/src/tensorrt/bin/trtexec --onnx=yolov7.onnx --saveEngine=yolov7.engine --fp16 --int8
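One difference between the two approaches that may explain the asymmetry: as far as I know, when trtexec is given --int8 without a calibration cache it assigns placeholder dynamic ranges instead of running calibration, so this build exercises a different path (and gives different accuracy) than Approach 1. A cache written by a calibrator can be supplied explicitly (calibration.cache is a hypothetical file name):

```shell
# Reuse a calibration cache (e.g. one written by the Python calibrator) so the
# int8 engine gets real dynamic ranges instead of trtexec's placeholder values.
/usr/src/tensorrt/bin/trtexec --onnx=yolov7.onnx --saveEngine=yolov7.engine \
    --fp16 --int8 --calib=calibration.cache
```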


Observations:

  1. Both Approach 1 and Approach 2 work well on my 3090 machine.
  2. On the Jetson Orin Nano, Approach 1 only works if we don't do int8 calibration, i.e. it works fine for fp16 and fp32 engines.
  3. Approach 2 works fine on both machines.
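For context on why the int8 path can fail where fp16 succeeds: calibration chooses a dynamic range (amax) per tensor, after which the builder must find an int8 kernel implementation for every node; the error in Approach 1 means no such implementation was found for the Sigmoid. A minimal, illustrative sketch of the symmetric quantization implied by a calibration scale (plain Python, not TensorRT code; the function name is mine):

```python
def int8_quantize(x, amax):
    """Symmetric int8 quantization with scale amax/127 (illustrative only)."""
    scale = amax / 127.0
    q = max(-127, min(127, round(x / scale)))  # clamp to the symmetric int8 range
    return q, q * scale                        # (quantized value, dequantized approximation)

# With a calibrated dynamic range of 1.0, 0.5 maps to 64 and back to ~0.504.
print(int8_quantize(0.5, amax=1.0))
```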


I am getting the error below [check log.txt for the full log] when running Approach 1 on the Jetson Orin Nano with this script:

python -o yolov7.onnx -e yolov7.trt --end2end -p int8 --calib_input /path/to/calib/image/dir/

[02/08/2024-12:25:30] [TRT] [E] 10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node PWN(/model.0/act/Sigmoid).)
[02/08/2024-12:25:30] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

My question is: why am I getting "Could not find any implementation for node PWN(/model.0/act/Sigmoid)" when it clearly works in fp16 mode, and also in int8 mode via trtexec?
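Since the error names a single node, one experiment worth trying (assuming a trtexec recent enough to support the precision-constraint flags, i.e. 8.4+) is to pin that Sigmoid to fp16 so the int8 builder does not have to find an int8 kernel for it:

```shell
# Pin the failing activation to fp16 while the rest of the network builds in
# int8 (layer name taken from the error message above).
/usr/src/tensorrt/bin/trtexec --onnx=yolov7.onnx --saveEngine=yolov7.engine \
    --fp16 --int8 \
    --precisionConstraints=obey \
    --layerPrecisions="/model.0/act/Sigmoid":fp16
```

The equivalent in the Python builder script would be setting layer.precision = trt.float16 on that layer and enabling trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS on the builder config.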

log.txt (387.7 KB)


Since the TensorRT versions are different, would you mind giving TensorRT 8.6 a try?
It is available in JetPack 6 DP.


I updated to JetPack 6 and tried with TensorRT 8.6, but I'm getting the same error.

The funny thing is that the same code works on an 8 GB Orin Nano.
Is it because of the memory limitation of the 4 GB module? I should mention that I'm not setting the workspace memory, since it uses the maximum available memory by default.

But the conversion works with trtexec on the 4 GB module, which means the issue is not caused by a memory limitation.

I hope you can provide a fix soon; I have been stuck on this issue for quite some time. And this is not an error specific to YOLOv7: I am getting the same error for yolov8m/l/xl models as well.



Have you checked with the script owner?
Since the conversion works well with trtexec, the script might need some configuration changes to make the conversion work on the Orin Nano.