Hi,
I have already run the isaac_ros_rtdetr and isaac_ros_foundationpose tutorials with a RealSense D435 camera, and I can successfully detect objects using the example code.
Later I trained an RT-DETR model (the original PyTorch version, not the Ultralytics one) on my customized dataset. With help from a previous post (enabling normalization and adjusting confidence_threshold), I confirmed it runs in the isaac_ros_rtdetr example code and detects my trained target object.
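For context, this is roughly the kind of change that post describes: lowering the decoder's detection threshold in the launch file. This is a hedged sketch of a launch fragment, not my exact file; the plugin name and parameter spelling should be verified against your installed isaac_ros_rtdetr release.

```python
from launch_ros.descriptions import ComposableNode

# Sketch: RT-DETR decoder node with a lowered confidence threshold so a
# custom-trained model's lower-scoring detections are not filtered out.
# Plugin/parameter names are assumptions based on my install; check your release.
rtdetr_decoder_node = ComposableNode(
    name='rtdetr_decoder',
    package='isaac_ros_rtdetr',
    plugin='nvidia::isaac_ros::rtdetr::RtDetrDecoderNode',  # assumed plugin name
    parameters=[{
        'confidence_threshold': 0.3,  # default is higher; tune for your model
    }],
)
```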
Right now I can run the isaac_ros_foundationpose example code with my CAD file and RT-DETR model with no obvious error message:
ros2 launch isaac_ros_examples isaac_ros_examples.launch.py \
launch_fragments:=realsense_mono_rect_depth,foundationpose \
mesh_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/my_object.obj \
score_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/score_trt_engine.plan \
refine_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/foundationpose/refine_trt_engine.plan \
rt_detr_engine_file_path:=${ISAAC_ROS_WS}/isaac_ros_assets/models/synthetica_detr/my_rtdetr.plan
But I can’t get any detection results: echoing the topic /detections_output returns nothing, while RViz shows the video stream normally.
I tried enabling normalization in isaac_ros_foundationpose_core.launch.py (setting the image_to_tensor_node scale parameter to True; I build the foundationpose package from source), but I still get no detection results. Since the same model works in the isaac_ros_rtdetr example code, I don't think the model file is the problem.
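This is a sketch of the edit I made, shown as a launch fragment. The package and plugin names are my assumptions from reading the launch file in my source checkout; please verify them against your release before pointing out anything that differs.

```python
from launch_ros.descriptions import ComposableNode

# Sketch: image-to-tensor conversion with pixel scaling enabled, so the
# image is normalized to [0, 1] before RT-DETR inference, matching what
# worked for me in the standalone isaac_ros_rtdetr example.
# Plugin path is an assumption; confirm it in your launch file.
image_to_tensor_node = ComposableNode(
    name='image_to_tensor',
    package='isaac_ros_tensor_proc',
    plugin='nvidia::isaac_ros::dnn_inference::ImageToTensorNode',  # assumed
    parameters=[{
        'scale': True,          # the change I made: normalize pixel values
        'tensor_name': 'image',
    }],
)
```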
Please help me solve this issue, thank you.