Hello, I have a guess as to what may be happening.
When you do op_output.emit(out_message, "output_tensor") (and, I assume, spec.output("output_tensor") in setup()), "output_tensor" specifies the output port name, which is different from the tensor name. Please see the example under "You can add multiple tensors to a single holoscan.gxf.Entity object by calling the add() method multiple times with a unique name for each tensor, as in the example below:" in Creating Operators - NVIDIA Docs.
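For reference, that pattern looks roughly like this (a minimal sketch based on the docs; MyOp, the cupy arrays, and the tensor names "tensor_a"/"tensor_b" are placeholders I made up, and hs.as_tensor is how the tensor interop example wraps a cupy array, so adjust to your SDK version):

import cupy as cp
import holoscan as hs
from holoscan.core import Operator, OperatorSpec
from holoscan.gxf import Entity

class MyOp(Operator):
    def setup(self, spec: OperatorSpec):
        # "output_tensor" here names the output PORT, not any tensor
        spec.output("output_tensor")

    def compute(self, op_input, op_output, context):
        out_message = Entity(context)
        # each add() call attaches a tensor under its own unique TENSOR name
        out_message.add(hs.as_tensor(cp.zeros((1, 3), dtype=cp.float32)), "tensor_a")
        out_message.add(hs.as_tensor(cp.ones((1, 3), dtype=cp.float32)), "tensor_b")
        # emit the entity on the PORT named "output_tensor"
        op_output.emit(out_message, "output_tensor")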
In the MultiAI Ultrasound example, plax_cham_pre_proc is the output tensor name from the preprocessor (FormatConverter): holohub/multiai_ultrasound.yaml at main · nvidia-holoscan/holohub · GitHub. When connecting the operators, we are not using tensor names but port names; in this case the outgoing port name from the FormatConverter is "" (holohub/multiai_ultrasound.py at main · nvidia-holoscan/holohub · GitHub). The downstream operator then references the tensor name plax_cham_pre_proc again: holohub/multiai_ultrasound.yaml at main · nvidia-holoscan/holohub · GitHub.
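In code, that connection is expressed purely with port names, something like this (a sketch; the variable names are made up, and the ("", "receivers") pair assumes the FormatConverter's default unnamed output port feeds the inference op's "receivers" input):

# tensor names never appear in add_flow, only (output port, input port) pairs
self.add_flow(plax_cham_pre, multi_ai_inference, {("", "receivers")})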
Given the info above, could you try adding a unique name to the tensor when adding it to the output message in your custom op?
out_message.add(output_tensor, "your_unique_name")
and reference that name in the MultiAIInference spec in the YAML:
inference:  # MultiAIInference
  backend: "trt"
  pre_processor_map:
    "byom_model": ["your_unique_name"]
  inference_map:
    "byom_model": "output"
  in_tensor_names: ["your_unique_name"]
  out_tensor_names: ["output"]
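If you build the app in Python rather than purely from YAML, the same block is typically picked up via self.kwargs (a sketch with assumed names; other arguments such as the model path setup are omitted, and depending on your SDK version the operator may be named InferenceOp instead of MultiAIInferenceOp):

from holoscan.operators import MultiAIInferenceOp
from holoscan.resources import UnboundedAllocator

inference = MultiAIInferenceOp(
    self,
    name="inference",
    allocator=UnboundedAllocator(self, name="allocator"),
    # pulls backend, the maps, and the tensor name lists from the YAML block above
    **self.kwargs("inference"),
)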