Hello,
I wanted to try out the inference operator in the newly released Holoscan version (2.7.0), but the operator runs into the same problem: ONNX Runtime with CUDA is not supported on ARM.
[error] [inference.cpp:230] Error in Inference Operator, Sub-module->Onnxruntime with CUDA not supported on aarch64.
[error] [gxf_wrapper.cpp:57] Exception occurred when starting operator: 'Inference' - Error in Inference Operator, Sub-module->Onnxruntime with CUDA not supported on aarch64.
Config:

inference:
  backend: "onnxrt"
  enable_fp16: false
  parallel_inference: false
  infer_on_cpu: false
  input_on_cuda: true
  output_on_cuda: true
  transmit_on_cuda: true
  is_engine_path: false
  pre_processor_map:
    "yolo_detect": ["INPUT__0"]
  inference_map:
    "yolo_detect": ["num_dets", "bboxes", "scores", "labels"]
Any help is greatly appreciated.
Best regards,
Farid