NVIDIA Isaac ROS with YOLOv8

Hey all, I’m using the YOLOv5 Isaac ROS infrastructure to run a YOLOv8 (yolov8n.pt) model.
The first attempt didn’t work because the YOLOv8 ONNX export uses dynamic input shapes, so I used the ONNX Runtime Python tooling to fix the input to my desired shape.
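For reference, this is roughly the kind of script I used to freeze the dynamic dimensions (the 1x3x640x640 shape and file names are just my assumptions based on the default YOLOv8 export; ONNX Runtime’s make_dynamic_shape_fixed tool does essentially the same thing):

```python
import onnx

model = onnx.load("yolov8n.onnx")  # exported from yolov8n.pt with dynamic axes

# Replace the symbolic (dynamic) dimensions on the graph input with fixed values.
# Assumed layout: 1x3x640x640, the default YOLOv8 ONNX export shape.
fixed_dims = [1, 3, 640, 640]
graph_input = model.graph.input[0]
for dim, value in zip(graph_input.type.tensor_type.shape.dim, fixed_dims):
    dim.ClearField("dim_param")  # drop the symbolic name, e.g. "batch"
    dim.dim_value = value

onnx.checker.check_model(model)
onnx.save(model, "yolov8n_nodyn.onnx")
```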
Now I get this error:

application...
[component_container_mt-1] Could not open file /workspaces/isaac_ros-dev/src/yolov8n_nodyn.onnx
[component_container_mt-1] Could not open file /workspaces/isaac_ros-dev/src/yolov8n_nodyn.onnx
[component_container_mt-1] 2023-09-07 10:51:47.370 ERROR gxf/tensor_rt/tensor_rt_inference.cpp@143: TRT ERROR: ModelImporter.cpp:735: Failed to parse ONNX model from file: /workspaces/isaac_ros-dev/src/yolov8n_nodyn.onnx
[component_container_mt-1] 2023-09-07 10:51:47.370 ERROR gxf/tensor_rt/tensor_rt_inference.cpp@464: Failed to parse ONNX file /workspaces/isaac_ros-dev/src/yolov8n_nodyn.onnx
[component_container_mt-1] 2023-09-07 10:51:47.425 ERROR gxf/tensor_rt/tensor_rt_inference.cpp@277: Failed to create engine plan for model /workspaces/isaac_ros-dev/src/yolov8n_nodyn.onnx.
[component_container_mt-1] 2023-09-07 10:51:47.425 ERROR gxf/std/entity_executor.cpp@200: Entity with 8 not found!
[component_container_mt-1] [ERROR] [1694083907.425673984] [tensor_rt]: [NitrosPublisher] Vault ("vault/vault", eid=8) was stopped. The graph may have been terminated due to an error.
[component_container_mt-1] terminate called after throwing an instance of 'std::runtime_error'
[component_container_mt-1]   what():  [NitrosPublisher] Vault ("vault/vault", eid=8) was stopped. The graph may have been terminated due to an error.
[ERROR] [component_container_mt-1]: process has died [pid 13248, exit code -6, cmd '/opt/ros/humble/install/lib/rclcpp_components/component_container_mt --ros-args -r __node:=tensor_rt_container -r __ns:=/isaac_ros_tensor_rt'].

Any thoughts?

Have you checked that the ONNX file itself is still readable? It looks like the TensorRT inference node is failing to read the ONNX file altogether, so it cannot generate the TensorRT model plan file. You could either run trtexec on the target platform to generate a model plan file, verify that everything is fine there, and then use that engine plan file directly, or try the Triton node, which can run inference through ONNX Runtime itself. By the way, an example for YOLOv8 is also coming in our next release, due next month.
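If it helps, here is a minimal sketch of what trtexec (and the TensorRT node) does under the hood, using the TensorRT Python bindings; it will also print the exact parser errors. This assumes the TensorRT 8.5+ Python API, and the file paths are placeholders:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file; on failure, print why the parser rejected it.
with open("/workspaces/isaac_ros-dev/src/yolov8n_nodyn.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

# Build and save a serialized engine plan that the TensorRT node can load directly.
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB workspace
plan = builder.build_serialized_network(network, config)
with open("yolov8n_nodyn.plan", "wb") as f:
    f.write(plan)
```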

Great to hear you’re releasing YOLOv8 with Isaac ROS!
I’ve tried running the ONNX file with ONNX Runtime inference and it works perfectly, so the problem isn’t with the ONNX file itself.
The method of compilation I used is running the ONNX file through Isaac:
Compiling onnx file
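For completeness, the ONNX Runtime sanity check I mentioned was along these lines (the CPU provider and the dummy 1x3x640x640 input are just what I assumed for the test):

```python
import numpy as np
import onnxruntime as ort

# Load the fixed-shape model and run one dummy inference to confirm it is valid.
sess = ort.InferenceSession("yolov8n_nodyn.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)  # expect a fully static shape, e.g. [1, 3, 640, 640]

dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {inp.name: dummy})
print("outputs:", [o.shape for o in outputs])
```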