Can a Jetson Nano running JetPack 4.6 install onnxruntime 1.10+? If yes, please help me. Thanks!
Please refer to Cannot install onnxruntime on jetson nano - Jetson & Embedded Systems / Jetson Nano - NVIDIA Developer Forums to see if that helps.
I use C++; this is my code:
std::vector<std::string> availableProviders = Ort::GetAvailableProviders();
auto cudaAvailable = std::find(availableProviders.begin(), availableProviders.end(), "CUDAExecutionProvider");
OrtCUDAProviderOptions cudaOption;
LOG(INFO) << "=============== Model info ===============";
LOG(INFO) << "Onnxruntime Version:" << ORT_API_VERSION;
if (isGPU && (cudaAvailable == availableProviders.end()))
{
LOG(WARN) << "GPU is not supported by your ONNXRuntime build. Fallback to CPU.";
LOG(INFO) << "Inference device: CPU";
}
else if (isGPU && (cudaAvailable != availableProviders.end()))
{
LOG(INFO) << "Inference device: GPU";
sessionOptions.AppendExecutionProvider_CUDA(cudaOption);
}
else
{
LOG(INFO) << "Inference device: CPU";
}
The output is:
[INFO ] =============== Model info ===============
[INFO ] Onnxruntime Version:10
[WARN ] GPU is not supported by your ONNXRuntime build. Fallback to CPU.
[INFO ] Inference device: CPU
From what I found, it seems that this onnxruntime version is not compatible with my CUDA (the prebuilt GPU packages require CUDA 11+, while the Jetson Nano does not support CUDA 11+). Is that right?
@nguyen.thai.son1 according to this page, the last version of onnxruntime with official support for CUDA 10.2 (which is the version of CUDA in JetPack 4.6 for Jetson Nano) was onnxruntime 1.6:
I'm not sure whether that means you can or cannot compile it yourself against other CUDA versions, though. How have you installed onnxruntime? I build it from source in the Dockerfile for the l4t-ml container here:
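For reference, a source build with the CUDA execution provider enabled usually looks like the sketch below. The CUDA and cuDNN paths are the JetPack defaults and may differ on your system, and a build on the Nano itself needs swap and patience:

```shell
# Hedged sketch: build onnxruntime with the CUDA execution provider on Jetson.
# Paths below are JetPack defaults; adjust --cuda_home/--cudnn_home for your install.
git clone --recursive https://github.com/microsoft/onnxruntime
cd onnxruntime
./build.sh --config Release --update --build --parallel --build_wheel \
    --use_cuda \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/lib/aarch64-linux-gnu
```

Whether the resulting 1.10 build actually links against CUDA 10.2 is exactly the open question above; the flags only control what the build attempts.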
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.