Onnxruntime with CUDA not supported on aarch64

Hello,

I wanted to try out the inference operator in the newly released Holoscan version (2.7.0), but the operator runs into the same problem, namely that onnxruntime with CUDA is not supported on ARM.

[error] [inference.cpp:230] Error in Inference Operator, Sub-module->Onnxruntime with CUDA not supported on aarch64.
[error] [gxf_wrapper.cpp:57] Exception occurred when starting operator: 'Inference' - Error in Inference Operator, Sub-module->Onnxruntime with CUDA not supported on aarch64.

config

inference:
  backend: "onnxrt"
  enable_fp16: false
  parallel_inference: false
  infer_on_cpu: false
  input_on_cuda: true
  output_on_cuda: true
  transmit_on_cuda: true
  is_engine_path: false
  pre_processor_map: 
    "yolo_detect": ["INPUT__0"]
  inference_map:
    "yolo_detect": ["num_dets", "bboxes", "scores", "labels"]

Any help is greatly appreciated.
Best regards,
Farid

Hi Farid,
Could you check whether version 2.7.0 is really the one being used?
The error string you see is not present in 2.7.0.
In 2.7, the line throwing this error in the InferenceOp is 225, not 230 as in the message you get ([error] [inference.cpp:230] Error in Inference Operator, Sub-module->Onnxruntime with CUDA not supported on aarch64.).
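For example, something along these lines should print the installed SDK version (assuming you installed the Python wheel; the __version__ attribute is an assumption on my side, and pip show holoscan is an alternative):

# Print the version of the installed Holoscan Python package.
import holoscan
print(holoscan.__version__)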
Thanks,
Andreas