Tensor Interoperability with MultiAIInferenceOp

We want to pass an input tensor from a custom Holoscan operator into MultiAIInferenceOp in Holoscan v0.5.0.

However, we get the following error:

2023-06-12 00:02:12.516 ERROR /workspace/holoscan-sdk/modules/holoinfer/src/utils/infer_utils.cpp@25: Error in Multi AI Inference Codelet, Sub-module->Data_per_tensor, Tensor output_tensor not found

It appears that MultiAIInferenceOp is not able to locate the tensor “output_tensor”.

In our custom operator, do we need to structure or name our output tensor so that MultiAIInferenceOp can find it?

We based our Python code on the Holoscan documentation and examples on tensor interoperability. We have the following code in the compute() method of our custom operator (ImageProcessingOp):

        out_message = Entity(context)
        output_tensor = hs.as_tensor(cp_array)

        out_message.add(output_tensor)
        op_output.emit(out_message, "output_tensor")

We also have the following in our .yaml file:

inference:  # MultiAIInferenceOp
  backend: "trt"
  pre_processor_map: 
    "byom_model": ["output_tensor"]
  inference_map: 
    "byom_model": "output"
  in_tensor_names: ["output_tensor"]
  out_tensor_names: ["output"]

We also labeled the input/output ports between the custom operator (ImageProcessingOp) and the MultiAIInferenceOp:

        self.add_flow(image_processing, inference, {("output_tensor", "receivers")})
        self.add_flow(inference, postprocessor, {("transmitter", "in_tensor")})

Here is the full error message for reference:

2023-06-12 00:02:12.118 WARN  /workspace/holoscan-sdk/include/holoscan/utils/cuda_stream_handler.hpp@220: Parameter `cuda_stream_pool` is not set, using the default CUDA stream for CUDA operations.
message received (count: 1)
<class 'holoscan.core._core.Tensor'>
2023-06-12 00:02:12.516 ERROR /workspace/holoscan-sdk/modules/holoinfer/src/utils/infer_utils.cpp@25: Error in Multi AI Inference Codelet, Sub-module->Data_per_tensor, Tensor output_tensor not found

[2023-06-12 00:02:12.516] [holoscan] [error] [gxf_wrapper.cpp:68] Exception occurred for operator: 'inference' - Error in Multi AI Inference Codelet, Sub-module->Tick, Inference execution, Message->Error in Multi AI Inference Codelet, Sub-module->Tick, Data extraction
2023-06-12 00:02:12.516 ERROR gxf/std/entity_executor.cpp@525: Failed to tick codelet inference in entity: inference code: GXF_FAILURE
2023-06-12 00:02:12.516 ERROR gxf/std/entity_executor.cpp@556: Entity [inference] must be in Lifecycle::kStarted or Lifecycle::kIdle stage before stopping. Current state is Ticking
2023-06-12 00:02:12.516 WARN  gxf/std/greedy_scheduler.cpp@235: Error while executing entity 26 named 'inference': GXF_FAILURE
2023-06-12 00:02:12.517 ERROR gxf/std/entity_executor.cpp@556: Entity [inference] must be in Lifecycle::kStarted or Lifecycle::kIdle stage before stopping. Current state is Ticking
2023-06-12 00:02:12.605 INFO  gxf/std/greedy_scheduler.cpp@367: Scheduler finished.
2023-06-12 00:02:12.605 ERROR gxf/std/program.cpp@497: wait failed. Deactivating...
2023-06-12 00:02:12.606 ERROR gxf/core/runtime.cpp@1251: Graph wait failed with error: GXF_FAILURE
2023-06-12 00:02:12.606 PANIC /workspace/holoscan-sdk/src/core/executors/gxf/gxf_executor.cpp@296: GXF operation failed: GXF_FAILURE
#01 holoscan::gxf::GXFExecutor::run(holoscan::Graph&) /opt/nvidia/holoscan/python/lib/holoscan/graphs/../../../../lib/libholoscan_core.so.0(_ZN8holoscan3gxf11GXFExecutor3runERNS_5GraphE+0x280c) [0x7fc28bd76b7c]
#02 /opt/nvidia/holoscan/python/lib/holoscan/core/_core.cpython-38-x86_64-linux-gnu.so(+0xae5d0) [0x7fc2895a65d0]
#03 /opt/nvidia/holoscan/python/lib/holoscan/core/_core.cpython-38-x86_64-linux-gnu.so(+0x8cb6c) [0x7fc289584b6c]
#04 /opt/nvidia/holoscan/python/lib/holoscan/core/_core.cpython-38-x86_64-linux-gnu.so(+0xb699f) [0x7fc2895ae99f]
#05 python(PyCFunction_Call+0x59) [0x5f5e79]
#06 python(_PyObject_MakeTpCall+0x296) [0x5f6a46]
#07 python() [0x50b4a7]
#08 python(_PyEval_EvalFrameDefault+0x5706) [0x5703e6]
#09 python(_PyEval_EvalCodeWithName+0x26a) [0x5696da]
#10 python(PyEval_EvalCode+0x27) [0x68db17]
#11 python() [0x67eeb1]
#12 python() [0x67ef2f]
#13 python() [0x67efd1]
#14 python(PyRun_SimpleFileExFlags+0x197) [0x67f377]
#15 python(Py_RunMain+0x212) [0x6b7902]
#16 python(Py_BytesMain+0x2d) [0x6b7c8d]
#17 /usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fc2a93dc0b3]
#18 python(_start+0x2e) [0x5fb12e]

We have successfully passed tensors between custom operators, and even between a built-in Holoscan operator (e.g., FormatConverterOp) and a custom operator, but have been unable to do this with MultiAIInferenceOp.

Thank you for any assistance or guidance you can provide!

Hello, I have a guess as to what may be happening.

When you call op_output.emit(out_message, "output_tensor") (and, I assume, spec.output("output_tensor") in setup()), "output_tensor" specifies the output port name, which is different from the tensor name. Please see the example under "You can add multiple tensors to a single holoscan.gxf.Entity object by calling the add() method multiple times with a unique name for each tensor, as in the example below:" in Creating Operators - NVIDIA Docs.
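
For reference, that docs pattern looks roughly like this. This is a minimal sketch; the operator, port, and tensor names (NamedTensorsOp, "out", "image", "coords") are placeholders, not anything from your app:

import cupy as cp
import holoscan as hs
from holoscan.core import Operator, OperatorSpec
from holoscan.gxf import Entity

class NamedTensorsOp(Operator):
    def setup(self, spec: OperatorSpec):
        # "out" is the *port* name; it is what emit() and add_flow() refer to.
        spec.output("out")

    def compute(self, op_input, op_output, context):
        out_message = Entity(context)
        # Each add() call attaches a tensor under its own unique name;
        # downstream operators look tensors up by these names.
        out_message.add(hs.as_tensor(cp.zeros((256, 256, 3), dtype=cp.uint8)), "image")
        out_message.add(hs.as_tensor(cp.arange(4, dtype=cp.float32)), "coords")
        # The second argument to emit() is the port name, not a tensor name.
        op_output.emit(out_message, "out")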

In the MultiAI Ultrasound example, we see that plax_cham_pre_proc is the output tensor name from the preprocessor (FormatConverter): holohub/multiai_ultrasound.yaml at main · nvidia-holoscan/holohub · GitHub. When connecting the operators we use port names rather than tensor names; in this case the outgoing port name from the FormatConverter is "": holohub/multiai_ultrasound.py at main · nvidia-holoscan/holohub · GitHub. The downstream operator then references the tensor name plax_cham_pre_proc again: holohub/multiai_ultrasound.yaml at main · nvidia-holoscan/holohub · GitHub.

Given the info above, could you try adding a unique name to the tensor when adding it to the output message in your custom op?

out_message.add(output_tensor, "your_unique_name")

and reference that name in the MultiAIInferenceOp spec in your YAML:

inference:  # MultiAIInferenceOp
  backend: "trt"
  pre_processor_map: 
    "byom_model": ["your_unique_name"]
  inference_map: 
    "byom_model": "output"
  in_tensor_names: ["your_unique_name"]
  out_tensor_names: ["output"]
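
Putting it together, the compute() body in your ImageProcessingOp would only need the add() call changed; emit() keeps the port name, which is what add_flow() matches against. A sketch, assuming your existing cp_array and port names:

out_message = Entity(context)
output_tensor = hs.as_tensor(cp_array)

# The name given here is the *tensor* name that pre_processor_map and
# in_tensor_names refer to.
out_message.add(output_tensor, "your_unique_name")

# Still the *port* name here, so your existing
# add_flow(image_processing, inference, {("output_tensor", "receivers")})
# does not need to change.
op_output.emit(out_message, "output_tensor")

(The tensor name could even be the same string as the port name; what matters is that the name passed to add() matches what the inference op expects.)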

Thank you very much for your help and the detailed response! We were able to name the tensor using the approach you suggested and get it working.

Great to hear that it’s working!
