TensorRT Version: 10.3
GPU Type: Jetson
Nvidia Driver Version:
CUDA Version: 8.0
Operating System + Version: Jetson Nano
Baremetal or Container (if container which image + tag): Jetpack 4.6
I installed the ONNX Runtime Python library, but I also want to run ONNX Runtime through its C++ API.
So how can I build the ONNX Runtime C++ API? (There are no documents about this problem…)
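For reference, this is roughly the kind of C++ usage I am trying to get working (just a minimal sketch; the model path is a placeholder):

#include <onnxruntime_cxx_api.h>

int main() {
    // Create the ONNX Runtime environment and default session options
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "example");
    Ort::SessionOptions options;
    // Load the model -- this is the step that needs libonnxruntime.so and the C/C++ headers
    Ort::Session session(env, "model.onnx", options);
    return 0;
}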
Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import onnx

# Placeholder: path to your ONNX model
filename = "yourONNXmodel.onnx"

model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command (see the example below): https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
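For example, an invocation along these lines (the model filename is a placeholder):

trtexec --onnx=yourONNXmodel.onnx --verbose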
Thanks!
Hi,
I have the same problem: I am trying to load an ONNX model on Jetson. I followed this answer and found libonnxruntime_providers_cuda.so, libonnxruntime_providers_shared.so, and libonnxruntime_providers_tensorrt.so as you said. However, I am still missing the appropriate libonnxruntime.so and libonnxruntime.so.1.12.0, as well as onnxruntime_c_api.h.
Where can I find them?
P.S.: On the ONNX Runtime release page I found [onnxruntime-linux-aarch64-1.11.0.tgz], which works on the Jetson Nano with all the files included, but it is CPU-only and very slow. Any combination of this release and the files from the answer causes a core dump error.
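I assume libonnxruntime.so and the headers have to be built from source on the device. Based on the ONNX Runtime build documentation, I expect the invocation looks roughly like the following (the CUDA/cuDNN/TensorRT paths are my guesses for a JetPack install, so please correct me if the flags are wrong):

./build.sh --config Release --update --build --parallel --build_shared_lib \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
    --use_tensorrt --tensorrt_home /usr/lib/aarch64-linux-gnu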