Hi,
Please share the ONNX model and the script, if you have not already, so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import onnx

filename = "yourONNXmodel"  # replace with the path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command, as shown below.
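For example, a minimal invocation (assuming your exported model is saved as model.onnx; adjust the path to your file):

trtexec --onnx=model.onnx --verbose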
In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!
I succeeded in exporting a YOLOv5 model to ONNX and running inference, but on the Jetson Nano the RAM usage is 1.1 GB when inferring on frames, regardless of resolution. Is it normal for the Nano to have such high RAM usage?
Are there any benchmarks for RAM usage?
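In case it helps to narrow this down, here is a minimal sketch for checking how much RAM the inference process itself holds at each stage (this assumes inference via onnxruntime with psutil installed; "model.onnx" and the 1x3x640x640 input shape are placeholders, so swap in your actual runtime and export):

import numpy as np
import onnxruntime as ort
import psutil

def rss_mb():
    # Resident set size of the current process, in MiB
    return psutil.Process().memory_info().rss / (1024 * 1024)

print(f"baseline: {rss_mb():.0f} MiB")
sess = ort.InferenceSession("model.onnx")  # placeholder path
print(f"after model load: {rss_mb():.0f} MiB")

# Dummy input matching a typical YOLOv5 export; adjust to your model
inp_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
sess.run(None, {inp_name: dummy})
print(f"after first inference: {rss_mb():.0f} MiB")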