Hi everyone,
I’m trying to run the isaac_ros_yolov8 demo on a Jetson Orin Nano and I’m hitting a persistent failure during the TensorRT engine build. The “Rebuilding CUDA engine” step runs for several minutes, then the process dies with exit code -11 (segmentation fault) and a cryptic TensorRT error about an invalid tensor name.
Any help in resolving this would be greatly appreciated.
System Details
Hardware: Jetson Orin Nano
Software: JetPack 6.2.1
ROS: ROS 2 Humble
Problem Description
When I run the following command, the node attempts to build the .plan engine from the .onnx file:
```bash
ros2 launch isaac_ros_yolov8 yolov8_tensor_rt.launch.py model_file_path:=./isaac_ros_assets/models/yolov8/yolov8n.onnx …
```
The log shows the following message and hangs for 3-4 minutes:

```
WARN ./gxf/extensions/tensor_rt/tensor_rt_inference.cpp@284: Rebuilding CUDA engine …
```
After that time, the process fails and the component container crashes.
Troubleshooting Steps Taken
I have attempted to solve this in several ways without success:
Verified Tensor Names: I used Netron to confirm that the input and output tensors are named `images` and `output0`, which are the expected names.
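For reference, the same check can be done from the command line. This is a sketch assuming the `onnx` Python package is installed and the model path matches the one in the launch command:

```shell
# Print the graph-level input/output names recorded in the ONNX file
python3 -c "
import onnx
m = onnx.load('./isaac_ros_assets/models/yolov8/yolov8n.onnx')
print('inputs: ', [i.name for i in m.graph.input])
print('outputs:', [o.name for o in m.graph.output])
"
```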
Re-exported ONNX Model: I re-exported the yolov8n.pt model with ultralytics using conservative, high-compatibility parameters: opset=13 and dynamic=False.
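For completeness, the export I ran is equivalent to the following (a sketch using the ultralytics `yolo` CLI; the Python API accepts the same keyword arguments):

```shell
# Export yolov8n.pt to ONNX with a fixed batch dimension and opset 13
yolo export model=yolov8n.pt format=onnx opset=13 dynamic=False
```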
Increased Swap: I increased the swap file to 8 GB to rule out out-of-memory issues. I monitored usage with jtop during compilation, and RAM usage never exceeded 7 GB, so memory does not seem to be the cause.
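In case anyone wants to reproduce the setup, the swap file was created the standard way (sketch; the path is illustrative):

```shell
# Create, format, and enable an 8 GB swap file
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show   # verify it is active
```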
Despite all of this, the error remains identical.
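One further isolation step I can think of is building the engine directly with trtexec, bypassing the ROS node entirely (trtexec ships with TensorRT in JetPack, typically under /usr/src/tensorrt/bin). If this reproduces the crash, the problem is in TensorRT itself rather than in isaac_ros:

```shell
# Build the engine outside ROS to see whether trtexec reproduces the failure
/usr/src/tensorrt/bin/trtexec \
  --onnx=./isaac_ros_assets/models/yolov8/yolov8n.onnx \
  --saveEngine=./isaac_ros_assets/models/yolov8/yolov8n.plan
```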
Latest Error Log
This is the error that consistently appears after trying all previous solutions:
```
[component_container_mt-1] 2025-07-20 22:17:18.622 WARN ./gxf/extensions/tensor_rt/tensor_rt_inference.cpp@284: Rebuilding CUDA engine ./isaac_ros_assets/models/yolov8/yolov8n.plan (forced by config). Note: this process may take up to several minutes.
[component_container_mt-1] 2025-07-20 22:20:48.129 ERROR ./gxf/extensions/tensor_rt/tensor_rt_inference.cpp@154: TRT ERROR: ICudaEngine::getTensorDataType: Error Code 3: Internal Error (Given invalid tensor name: . Get valid tensor names with getIOTensorName())
[component_container_mt-1] 2025-07-20 22:20:48.129 ERROR ./gxf/extensions/tensor_rt/tensor_rt_inference.cpp@154: TRT ERROR: ICudaEngine::getTensorShape: Error Code 3: Internal Error (Given invalid tensor name: . Get valid tensor names with getIOTensorName())
[component_container_mt-1] 2025-07-20 22:20:48.132 ERROR ./gxf/extensions/tensor_rt/tensor_rt_inference.cpp@543: Failed to retrieve Tensor images
[component_container_mt-1] 2025-07-20 22:20:48.132 ERROR gxf/std/entity_executor.cpp@596: Failed to tick codelet in entity: MRFDFLVSQR_inference code: GXF_FAILURE
[component_container_mt-1] [WARN] [1753071648.630574083] [tensor_rt]: [NitrosNode] The heartbeat entity (eid=17) was stopped. The graph may have been terminated.
[ERROR] [component_container_mt-1]: process has died [pid 130768, exit code -11, cmd …
```
Questions
1. Given that memory and the ONNX export parameters appear correct, what could be causing this internal TensorRT builder error?
2. Is there a known incompatibility between YOLOv8 models exported by ultralytics and the TensorRT version shipped with JetPack 6.2.1?
3. The ultralytics library runs a tool called onnxslim during export. Could it be modifying the graph in a way that causes issues with TensorRT on Jetson?
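On that last point, I believe graph slimming can be disabled at export time. If so, a re-export like the following would isolate it (a sketch; I’m assuming the `simplify` flag is what enables onnxslim in current ultralytics releases):

```shell
# Re-export without running the graph simplifier, to rule out onnxslim
yolo export model=yolov8n.pt format=onnx opset=13 dynamic=False simplify=False
```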
Thank you for your time and help.