ToolTrackingPostprocessorOp: expected tensors - is 'probs' mandatory?

Hi,
I am trying to use ToolTrackingPostprocessorOp to perform object tracking. My inference model outputs detection box coordinates and scores. When I try to add the flow from TensorRTInferenceOp to ToolTrackingPostprocessorOp, I get the error below. Does my model need to output a tensor named 'probs'?
Also, can I use TensorRTInferenceOp instead of LSTMTensorRTInferenceOp?
```
[holoscan] [error] [entity.hpp:106] Unable to find component from the name 'probs' (error code: 24)
[holoscan] [error] [gxf_wrapper.cpp:68] Exception occurred for operator: 'tool_tracking_postprocessor' - Tensor 'probs' not found in message.
```

Hi @Nakamise,
Yes, it looks like your tensor does not have the name that is configured elsewhere in the application. Are you using an application from HoloHub, or is this a custom application?
Check the expected tensor names in the compute() method of the operator and in the YAML file of the application. You can also share them here if these are custom files.
If you are using a different model, check that the tensor names of the model are correct. You can use https://netron.app/ to inspect your ONNX files.

Thank you for the response @Mofir

Yes, my model does not output a tensor named "probs".
I am using a custom application that performs detection and overlays the frame with bounding boxes (the bounding box is created by an operator that takes the inference coordinates).

For inference I use TensorRTInferenceOp, and I thought to use ToolTrackingPostprocessorOp for tracking the detected object.
I looked at the ToolTrackingPostprocessorOp compute() method; it requires tensors named probs, scaled_coords and binary_masks.

It looks like I cannot use TensorRTInferenceOp, as my model cannot output the tensors below that ToolTrackingPostprocessorOp (Holoscan's operator) expects:

- probs
- scaled_coords
- binary_masks

Are there any methods to use my model tensor outputs for object tracking?

Are there ways to override the ToolTrackingPostprocessorOp expected tensor names and perform object tracking?

For reference, the compute() method of ToolTrackingPostprocessorOp:

```cpp
void ToolTrackingPostprocessorOp::compute(InputContext& op_input, OutputContext& op_output,
                                          ExecutionContext& context) {
  // The type of `in_message` is 'holoscan::gxf::Entity'.
  auto in_message = op_input.receive<gxf::Entity>("in").value();

  auto maybe_tensor = in_message.get("probs");
  if (!maybe_tensor) { throw std::runtime_error("Tensor 'probs' not found in message."); }
  auto probs_tensor = maybe_tensor;

  // get the CUDA stream from the input message
  gxf_result_t stream_handler_result =
      cuda_stream_handler_.fromMessage(context.context(), in_message);
  if (stream_handler_result != GXF_SUCCESS) {
    throw std::runtime_error("Failed to get the CUDA stream from incoming messages");
  }

  std::vector<float> probs(probs_tensor->size());
  CUDA_TRY(cudaMemcpyAsync(probs.data(),
                           probs_tensor->data(),
                           probs_tensor->nbytes(),
                           cudaMemcpyDeviceToHost,
                           cuda_stream_handler_.getCudaStream(context.context())));

  maybe_tensor = in_message.get("scaled_coords");
  if (!maybe_tensor) { throw std::runtime_error("Tensor 'scaled_coords' not found in message."); }
  auto scaled_coords_tensor = maybe_tensor;

  std::vector<float> scaled_coords(scaled_coords_tensor->size());
  CUDA_TRY(cudaMemcpyAsync(scaled_coords.data(),
                           scaled_coords_tensor->data(),
                           scaled_coords_tensor->nbytes(),
                           cudaMemcpyDeviceToHost,
                           cuda_stream_handler_.getCudaStream(context.context())));

  maybe_tensor = in_message.get("binary_masks");
  if (!maybe_tensor) { throw std::runtime_error("Tensor 'binary_masks' not found in message."); }
  auto binary_masks_tensor = maybe_tensor;
  // ...
```

Based on the class names it seems you are using an older version of the Holoscan SDK.
Please consider upgrading; this will give you more verbose error messages and better opportunities to inspect the inputs and outputs.

Consider using InferenceOp (see Class InferenceOp - NVIDIA Docs), which allows using TensorRT (trt) as the backend.
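As a rough sketch, an InferenceOp configuration with the TensorRT backend might look like the fragment below. The model key "tool_detector", the tensor names, and the model path are placeholders you would replace with your own:

```yaml
inference:
  backend: "trt"                       # use TensorRT as the inference backend
  model_path_map:
    "tool_detector": "path/to/model.onnx"
  pre_processor_map:
    "tool_detector": ["preprocessed"]  # input tensor name(s) received upstream
  inference_map:
    "tool_detector": ["boxes", "scores"]  # tensor names attached to the outputs
```

The names under inference_map are the tensor names InferenceOp attaches to its outputs, so this is also the place to align them with what the downstream operator expects.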

Also see the tool tracking application on HoloHub: holohub/applications/endoscopy_tool_tracking_distributed at main · nvidia-holoscan/holohub · GitHub
It seems similar to your use case.

Adding an additional operator for the conversion of your outputs is of course possible.
Make sure to name the tensors that you output from your inference operator in accordance with what the next downstream operator expects. Check the YAML file or the instantiation of the inference operator to see the configured tensor names.
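The core job of such a conversion operator is just to re-emit the same tensors under the names the downstream operator looks up. A minimal, self-contained sketch of that renaming step is below; the message is modeled as a plain map from tensor name to data, which is an assumption for illustration only. In a real Holoscan operator this logic would sit in compute() between receiving the input message and emitting the output entity, operating on the actual tensor objects instead of copies.

```cpp
#include <map>
#include <string>
#include <vector>

// Re-emit tensors under new names. Tensors whose names appear in
// `rename_map` are renamed; all others pass through unchanged.
std::map<std::string, std::vector<float>> rename_tensors(
    const std::map<std::string, std::vector<float>>& message,
    const std::map<std::string, std::string>& rename_map) {
  std::map<std::string, std::vector<float>> out;
  for (const auto& [name, data] : message) {
    auto it = rename_map.find(name);
    out.emplace(it != rename_map.end() ? it->second : name, data);
  }
  return out;
}
```

For example, a hypothetical mapping of "scores" to "probs" and "boxes" to "scaled_coords" would let the postprocessor find the tensors it looks up by name. Note that ToolTrackingPostprocessorOp also expects binary_masks, so a pure rename only works if your model produces all three outputs the operator needs.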