Hi,
We were able to run your model with JetPack 4.6.3.
Please try the command below and test it again on your side.
$ /usr/src/tensorrt/bin/trtexec --onnx=yolov8n.onnx
&&&& RUNNING TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=yolov8n.onnx
...
[03/15/2023-15:20:07] [I]
[03/15/2023-15:20:07] [I] === Trace details ===
[03/15/2023-15:20:07] [I] Trace averages of 10 runs:
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3103 ms - Host latency: 12.5592 ms (end to end 12.5684 ms, enqueue 3.4141 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3089 ms - Host latency: 12.5581 ms (end to end 12.5707 ms, enqueue 3.2359 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3176 ms - Host latency: 12.5658 ms (end to end 12.5755 ms, enqueue 3.17767 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3133 ms - Host latency: 12.5615 ms (end to end 12.5723 ms, enqueue 3.15106 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3174 ms - Host latency: 12.5663 ms (end to end 12.5772 ms, enqueue 3.2137 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.313 ms - Host latency: 12.5623 ms (end to end 12.5723 ms, enqueue 3.09373 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3168 ms - Host latency: 12.5658 ms (end to end 12.5759 ms, enqueue 3.10371 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3071 ms - Host latency: 12.5553 ms (end to end 12.5655 ms, enqueue 3.09354 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3154 ms - Host latency: 12.5646 ms (end to end 12.5739 ms, enqueue 3.12062 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.317 ms - Host latency: 12.5661 ms (end to end 12.5771 ms, enqueue 3.16703 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3064 ms - Host latency: 12.5543 ms (end to end 12.5644 ms, enqueue 3.13046 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3166 ms - Host latency: 12.5645 ms (end to end 12.5752 ms, enqueue 3.08904 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3103 ms - Host latency: 12.5584 ms (end to end 12.5689 ms, enqueue 3.10193 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3223 ms - Host latency: 12.5711 ms (end to end 12.5802 ms, enqueue 3.06229 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3124 ms - Host latency: 12.5607 ms (end to end 12.5708 ms, enqueue 3.06226 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3215 ms - Host latency: 12.5708 ms (end to end 12.5816 ms, enqueue 2.94758 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3249 ms - Host latency: 12.5744 ms (end to end 12.585 ms, enqueue 3.04656 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3104 ms - Host latency: 12.5584 ms (end to end 12.5685 ms, enqueue 3.04551 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3115 ms - Host latency: 12.5603 ms (end to end 12.5721 ms, enqueue 3.01611 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3172 ms - Host latency: 12.5652 ms (end to end 12.5762 ms, enqueue 3.00647 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3176 ms - Host latency: 12.5664 ms (end to end 12.5785 ms, enqueue 2.98127 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3139 ms - Host latency: 12.5638 ms (end to end 12.575 ms, enqueue 2.95164 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3098 ms - Host latency: 12.5583 ms (end to end 12.5689 ms, enqueue 3.04917 ms)
[03/15/2023-15:20:07] [I] Average on 10 runs - GPU latency: 12.3193 ms - Host latency: 12.5671 ms (end to end 12.5771 ms, enqueue 2.9856 ms)
[03/15/2023-15:20:07] [I]
[03/15/2023-15:20:07] [I] === Performance summary ===
[03/15/2023-15:20:07] [I] Throughput: 79.5293 qps
[03/15/2023-15:20:07] [I] Latency: min = 12.521 ms, max = 12.6267 ms, mean = 12.5634 ms, median = 12.5608 ms, percentile(99%) = 12.6165 ms
[03/15/2023-15:20:07] [I] End-to-End Host Latency: min = 12.536 ms, max = 12.6343 ms, mean = 12.5739 ms, median = 12.5721 ms, percentile(99%) = 12.6284 ms
[03/15/2023-15:20:07] [I] Enqueue Time: min = 1.94434 ms, max = 3.63049 ms, mean = 3.09275 ms, median = 3.07898 ms, percentile(99%) = 3.46201 ms
[03/15/2023-15:20:07] [I] H2D Latency: min = 0.146362 ms, max = 0.155029 ms, mean = 0.14786 ms, median = 0.147888 ms, percentile(99%) = 0.149414 ms
[03/15/2023-15:20:07] [I] GPU Compute Time: min = 12.2688 ms, max = 12.3779 ms, mean = 12.3148 ms, median = 12.3128 ms, percentile(99%) = 12.3662 ms
[03/15/2023-15:20:07] [I] D2H Latency: min = 0.0856934 ms, max = 0.105591 ms, mean = 0.100774 ms, median = 0.100952 ms, percentile(99%) = 0.104614 ms
[03/15/2023-15:20:07] [I] Total Host Walltime: 3.03033 s
[03/15/2023-15:20:07] [I] Total GPU Compute Time: 2.96787 s
[03/15/2023-15:20:07] [I] Explanations of the performance metrics are printed in the verbose logs.
[03/15/2023-15:20:07] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=yolov8n.onnx
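As a note, running trtexec with only --onnx rebuilds the TensorRT engine every time. If the benchmark above also passes on your side, you can serialize the engine once and reuse it (the engine file name below is just an example):

$ /usr/src/tensorrt/bin/trtexec --onnx=yolov8n.onnx --saveEngine=yolov8n.engine
$ /usr/src/tensorrt/bin/trtexec --loadEngine=yolov8n.engine

On Jetson, FP16 mode usually gives better throughput as well; please verify the accuracy on your own data:

$ /usr/src/tensorrt/bin/trtexec --onnx=yolov8n.onnx --fp16 --saveEngine=yolov8n_fp16.engine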
Thanks.