We want to pass an input tensor from a custom Holoscan operator into MultiAIInferenceOp in Holoscan v0.5.0.
However, we get the following error:
2023-06-12 00:02:12.516 ERROR /workspace/holoscan-sdk/modules/holoinfer/src/utils/infer_utils.cpp@25: Error in Multi AI Inference Codelet, Sub-module->Data_per_tensor, Tensor output_tensor not found
It appears that MultiAIInferenceOp is unable to locate the tensor "output_tensor".
Do we need to structure or name the output tensor in our custom operator in a particular way, for example by passing a name to out_message.add(), so that MultiAIInferenceOp can find it?
We based our Python code on the Holoscan documentation and the tensor interoperability examples. The following code is in the compute() method of our custom operator (ImageProcessingOp):
out_message = Entity(context)
output_tensor = hs.as_tensor(cp_array)
out_message.add(output_tensor)
op_output.emit(out_message, "output_tensor")
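For context, here is a condensed sketch of how that snippet sits in the operator. The imports and port declarations follow the SDK's tensor interop example, and the CuPy array shape is a placeholder for the output of our actual processing:

import cupy as cp
import holoscan as hs
from holoscan.core import Operator, OperatorSpec
from holoscan.gxf import Entity

class ImageProcessingOp(Operator):
    def setup(self, spec: OperatorSpec):
        spec.input("input_tensor")
        # "output_tensor" here is the *port* name referenced by emit() and add_flow()
        spec.output("output_tensor")

    def compute(self, op_input, op_output, context):
        in_message = op_input.receive("input_tensor")
        # ... our image processing is elided; it yields a CuPy array ...
        cp_array = cp.zeros((512, 512, 3), dtype=cp.float32)  # placeholder shape
        out_message = Entity(context)
        output_tensor = hs.as_tensor(cp_array)
        out_message.add(output_tensor)  # tensor is added without an explicit name
        op_output.emit(out_message, "output_tensor")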
We also have the following in our .yaml file:
inference:  # MultiAIInferenceOp parameters
  backend: "trt"
  pre_processor_map:
    "byom_model": ["output_tensor"]
  inference_map:
    "byom_model": "output"
  in_tensor_names: ["output_tensor"]
  out_tensor_names: ["output"]
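In compose(), we hand this block to MultiAIInferenceOp via self.kwargs("inference"). A condensed sketch; the allocator choice and the model path shown are placeholders for our actual setup:

from holoscan.operators import MultiAIInferenceOp
from holoscan.resources import UnboundedAllocator

inference = MultiAIInferenceOp(
    self,
    name="inference",
    allocator=UnboundedAllocator(self, name="pool"),
    model_path_map={"byom_model": "/path/to/model.onnx"},  # placeholder path
    **self.kwargs("inference"),  # pulls in the YAML block above
)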
We also specified the port pairs when connecting our custom operator (ImageProcessingOp) to MultiAIInferenceOp, and MultiAIInferenceOp to our postprocessor:
self.add_flow(image_processing, inference, {("output_tensor", "receivers")})
self.add_flow(inference, postprocessor, {("transmitter", "in_tensor")})
Here is the full error message for reference:
2023-06-12 00:02:12.118 WARN /workspace/holoscan-sdk/include/holoscan/utils/cuda_stream_handler.hpp@220: Parameter `cuda_stream_pool` is not set, using the default CUDA stream for CUDA operations.
message received (count: 1)
<class 'holoscan.core._core.Tensor'>
2023-06-12 00:02:12.516 ERROR /workspace/holoscan-sdk/modules/holoinfer/src/utils/infer_utils.cpp@25: Error in Multi AI Inference Codelet, Sub-module->Data_per_tensor, Tensor output_tensor not found
[2023-06-12 00:02:12.516] [holoscan] [error] [gxf_wrapper.cpp:68] Exception occurred for operator: 'inference' - Error in Multi AI Inference Codelet, Sub-module->Tick, Inference execution, Message->Error in Multi AI Inference Codelet, Sub-module->Tick, Data extraction
2023-06-12 00:02:12.516 ERROR gxf/std/entity_executor.cpp@525: Failed to tick codelet inference in entity: inference code: GXF_FAILURE
2023-06-12 00:02:12.516 ERROR gxf/std/entity_executor.cpp@556: Entity [inference] must be in Lifecycle::kStarted or Lifecycle::kIdle stage before stopping. Current state is Ticking
2023-06-12 00:02:12.516 WARN gxf/std/greedy_scheduler.cpp@235: Error while executing entity 26 named 'inference': GXF_FAILURE
2023-06-12 00:02:12.517 ERROR gxf/std/entity_executor.cpp@556: Entity [inference] must be in Lifecycle::kStarted or Lifecycle::kIdle stage before stopping. Current state is Ticking
2023-06-12 00:02:12.605 INFO gxf/std/greedy_scheduler.cpp@367: Scheduler finished.
2023-06-12 00:02:12.605 ERROR gxf/std/program.cpp@497: wait failed. Deactivating...
2023-06-12 00:02:12.606 ERROR gxf/core/runtime.cpp@1251: Graph wait failed with error: GXF_FAILURE
2023-06-12 00:02:12.606 PANIC /workspace/holoscan-sdk/src/core/executors/gxf/gxf_executor.cpp@296: GXF operation failed: GXF_FAILURE
#01 holoscan::gxf::GXFExecutor::run(holoscan::Graph&) /opt/nvidia/holoscan/python/lib/holoscan/graphs/../../../../lib/libholoscan_core.so.0(_ZN8holoscan3gxf11GXFExecutor3runERNS_5GraphE+0x280c) [0x7fc28bd76b7c]
#02 /opt/nvidia/holoscan/python/lib/holoscan/core/_core.cpython-38-x86_64-linux-gnu.so(+0xae5d0) [0x7fc2895a65d0]
#03 /opt/nvidia/holoscan/python/lib/holoscan/core/_core.cpython-38-x86_64-linux-gnu.so(+0x8cb6c) [0x7fc289584b6c]
#04 /opt/nvidia/holoscan/python/lib/holoscan/core/_core.cpython-38-x86_64-linux-gnu.so(+0xb699f) [0x7fc2895ae99f]
#05 python(PyCFunction_Call+0x59) [0x5f5e79]
#06 python(_PyObject_MakeTpCall+0x296) [0x5f6a46]
#07 python() [0x50b4a7]
#08 python(_PyEval_EvalFrameDefault+0x5706) [0x5703e6]
#09 python(_PyEval_EvalCodeWithName+0x26a) [0x5696da]
#10 python(PyEval_EvalCode+0x27) [0x68db17]
#11 python() [0x67eeb1]
#12 python() [0x67ef2f]
#13 python() [0x67efd1]
#14 python(PyRun_SimpleFileExFlags+0x197) [0x67f377]
#15 python(Py_RunMain+0x212) [0x6b7902]
#16 python(Py_BytesMain+0x2d) [0x6b7c8d]
#17 /usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7fc2a93dc0b3]
#18 python(_start+0x2e) [0x5fb12e]
We have successfully passed tensors between custom operators, and between built-in Holoscan operators (e.g., FormatConverterOp) and a custom operator, but we have been unable to do the same with MultiAIInferenceOp.
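For reference, here is a condensed sketch of one of the connections that does work for us (port names abbreviated from our app; FormatConverterOp's output port is "tensor" in v0.5.0, if we are reading the docs correctly):

# Works for us: FormatConverterOp -> our custom operator.
self.add_flow(format_converter, image_processing, {("tensor", "input_tensor")})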
Thank you for any assistance or guidance you can provide!