How to use my own RT-DETR model for FoundationPose?

Hi,

I have already run the tutorials for isaac_ros_rtdetr and isaac_ros_foundationpose using a RealSense D435 camera and can successfully detect objects with the example code.
Later I used my customized dataset to train an RT-DETR model (the original PyTorch version, not the Ultralytics version). I confirmed it runs with the isaac_ros_rtdetr example code: with the help of a previous post (enabling normalization and adjusting confidence_threshold), I can detect my trained target object.

Right now I can run the isaac_ros_foundationpose example code normally with my CAD file and RT-DETR model (no obvious error messages):

ros2 launch isaac_ros_examples isaac_ros_examples.launch.py \
 launch_fragments:=realsense_mono_rect_depth,foundationpose \
 mesh_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/my_object.obj \
 score_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan \
 refine_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan \
 rt_detr_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/my_rtdetr.plan

But I can't get any detection results. I checked the topic /detections_output and got nothing (RViz shows the video stream normally).
I tried enabling normalization in isaac_ros_foundationpose_core.launch.py (setting the image_to_tensor_node scale parameter to True), but still got no detection results (I build the foundationpose package from source). Since the same model works in the isaac_ros_rtdetr example code and detects objects there, I don't think the model file is the problem.
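For reference, the change I tried looks roughly like this. This is a minimal sketch only: the exact node name, package, and plugin strings inside isaac_ros_foundationpose_core.launch.py may differ between Isaac ROS releases, so treat those identifiers as assumptions.

```python
# Sketch of the image_to_tensor_node entry in a ROS 2 launch file,
# assuming the node exposes a boolean 'scale' parameter that enables
# normalization of pixel values. Package and plugin names are assumed.
from launch_ros.descriptions import ComposableNode

image_to_tensor_node = ComposableNode(
    name='image_to_tensor_node',
    package='isaac_ros_tensor_proc',  # assumed package name
    plugin='nvidia::isaac_ros::dnn_inference::ImageToTensorNode',  # assumed plugin
    parameters=[{
        'scale': True,  # enable normalization for the custom RT-DETR model
    }],
)
```

This node description would then be added to the composable node container that the launch file already creates.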

Please help me solve this issue, thank you.


Hi @s5078345

Welcome to the Isaac ROS forum.

The best way to make a new model is to follow our tutorial Tutorial to create your own 3D object mesh for FoundationPose — isaac_ros_docs documentation

Different cameras can produce different types of mesh. We suggest using an iPhone 12 Pro or a similar model for optimal results.

Best,
Raffaello

Did you try changing the confidence_threshold parameter for RT-DETR?

@ros_nitros Thanks for your help! The original isaac_ros_foundationpose_core.launch.py has no confidence_threshold parameter to adjust. I referred to isaac_ros_rtdetr.launch.py and manually added confidence_threshold to rtdetr_decoder_node, and now I get output on the topics /detections_output and /output.
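For anyone hitting the same issue, the addition was roughly like the sketch below, modeled on how isaac_ros_rtdetr.launch.py declares its decoder node. The package and plugin strings here are assumptions and may differ by release; the threshold value is just an example to tune for your model.

```python
# Sketch of the rtdetr_decoder node entry with an explicit
# confidence_threshold parameter. Package and plugin names are assumed.
from launch_ros.descriptions import ComposableNode

rtdetr_decoder_node = ComposableNode(
    name='rtdetr_decoder',
    package='isaac_ros_rtdetr',  # assumed package name
    plugin='nvidia::isaac_ros::rtdetr::RtDetrDecoderNode',  # assumed plugin
    parameters=[{
        # Lower this if your custom model's detection scores fall
        # below the default cutoff and nothing is published.
        'confidence_threshold': 0.3,
    }],
)
```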

@Raffaello Right now I get unstable output for pose estimation (especially the predicted orientation). I'm not sure whether pose estimation is inherently hard for my task, because my target object is rather small (2.5 cm x 1.5 cm x 1.5 cm) and not similar to the objects in the SyntheticaDETR model dataset.
My target object is 3D-printed from the CAD file, so I think the CAD file is accurate.

I vaguely remember from other posts on this forum that there is currently no public open-source method to fine-tune the FoundationPose models (the refine model and score model).

Are there any suggestions on how to improve accuracy for pose estimation?
