I am using the object detection pipeline, and it works fine for DetectNetV2 models. However, I have my own trained YOLO model that I want to deploy using the same pipeline.
Based on the response I have received, it is clear that I only need to write/change the decoder for my own model.
The source code provided for the decoder at the aforementioned link loads some binaries, so I don't have access to the implementation.
Are there any resources or documentation that can help with writing our own decoder for different AI models?
Thanks in advance
I would also be very interested in this. We clearly need better documentation!
You can take a look at the DetectNet decoder from Isaac ROS EA3 (v0.9.3) here, from before we upgraded to NITROS for streaming image data through pipelines. Your decoder node will need to take in a TensorList message from running inference on your model (Triton or TensorRT through isaac_ros_dnn_inference) and then transform it back into a set of bounding boxes.
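To make the "TensorList in, bounding boxes out" step concrete, here is a minimal sketch of the decoding logic such a node would wrap. This is not the actual isaac_ros message schema: the function name, the flat `(N, 5 + num_classes)` layout (`[cx, cy, w, h, objectness, class scores...]`), and the thresholds are all assumptions for illustration; a real node would pull the raw array out of the TensorList message and publish the results as a detection message.

```python
# Hypothetical decoder core: flat YOLO-style output tensor -> detections.
# The (N, 5 + num_classes) layout is an assumption, not the Isaac ROS schema.
import numpy as np

def _iou(box, boxes):
    """IoU of one corner-format box against an array of corner-format boxes."""
    ix1 = np.maximum(box[0], boxes[:, 0])
    iy1 = np.maximum(box[1], boxes[:, 1])
    ix2 = np.minimum(box[2], boxes[:, 2])
    iy2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def decode_yolo_output(raw, conf_threshold=0.5, iou_threshold=0.45):
    """Decode rows of [cx, cy, w, h, objectness, class scores...] into
    a list of (corner_box, confidence, class_id) tuples."""
    scores = raw[:, 4:5] * raw[:, 5:]          # objectness * class probability
    class_ids = scores.argmax(axis=1)
    confidences = scores.max(axis=1)
    keep = confidences >= conf_threshold       # drop low-confidence rows
    boxes = raw[keep, :4]
    confidences, class_ids = confidences[keep], class_ids[keep]
    # convert center-format (cx, cy, w, h) to corner-format (x1, y1, x2, y2)
    half_w, half_h = boxes[:, 2] / 2, boxes[:, 3] / 2
    corners = np.stack([boxes[:, 0] - half_w, boxes[:, 1] - half_h,
                        boxes[:, 0] + half_w, boxes[:, 1] + half_h], axis=1)
    # greedy per-class non-maximum suppression, highest confidence first
    order = confidences.argsort()[::-1]
    selected = []
    while order.size:
        i = order[0]
        selected.append(i)
        rest = order[1:]
        same_class = class_ids[rest] == class_ids[i]
        overlaps = _iou(corners[i], corners[rest]) > iou_threshold
        order = rest[~(same_class & overlaps)]
    return [(corners[i], float(confidences[i]), int(class_ids[i]))
            for i in selected]
```

The same structure applies to other model families: only the tensor layout and the post-processing (here, confidence filtering plus NMS) change between decoders.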
So, if I write my own decoder, will it work with the latest (DP 1.1) NITROS, or will I need to use the same EA3 release for both?
I ask because, since the NITROS release, we are also using type adaptation and loading graphs.
Thanks for your response
Yes, it should work fine with DP 1.1 if you process the TensorList message in your node. The EA3 code is just a useful reference for implementing your own.
Thanks a lot for all the replies. Closing this issue now.