Hi Team,
I am currently working with the Carter robot, and I have animated a human in the scene using the omni.anim.people extension.
My robot traverses the environment freely, and my objective is to visually detect the animated human and draw a bounding box around it. To achieve this, I generated synthetic data of the human and used it to train a YOLOv4 model.
In an isolated test environment, the YOLOv4 model performs as expected, detecting the human and drawing a bounding box around it.
However, my goal is to replicate this result in the Isaac Sim viewport while the robot is running under ROS/ROS 2 Navigation. I want to run real-time inference with my YOLOv4 model inside Isaac Sim, so that the model detects the animated human and accurately places a bounding box around it during play mode.
I am using ROS 2 Humble with Isaac Sim 2023.1 on Ubuntu 22.04.
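To make the question concrete, here is a minimal sketch of the kind of node I am trying to build. It assumes Isaac Sim publishes the simulated camera feed on an image topic named `/rgb` and that the YOLOv4 weights are loaded through OpenCV's DNN module; the topic name, file paths, and thresholds are placeholders for my setup, not definitive values:

```python
# Sketch of a ROS 2 node that runs YOLOv4 inference on the camera feed
# published by Isaac Sim. Topic name, weight/config paths, and thresholds
# are assumptions -- adapt them to your scene and workspace.

def yolo_to_pixel_box(cx, cy, w, h, img_w, img_h):
    """Convert one normalized YOLO detection (center x/y, width, height)
    to integer pixel corners (x1, y1, x2, y2)."""
    x1 = int((cx - w / 2) * img_w)
    y1 = int((cy - h / 2) * img_h)
    x2 = int((cx + w / 2) * img_w)
    y2 = int((cy + h / 2) * img_h)
    return x1, y1, x2, y2

def main():
    # ROS 2 / OpenCV imports kept inside main() so the helper above
    # stays dependency-free.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge
    import cv2

    class YoloViewer(Node):
        def __init__(self):
            super().__init__("yolo_viewer")
            self.bridge = CvBridge()
            # Hypothetical file names -- replace with the trained model.
            self.net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
            # "/rgb" is the assumed Isaac Sim camera topic.
            self.sub = self.create_subscription(Image, "/rgb", self.on_image, 10)

        def on_image(self, msg):
            frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            h, w = frame.shape[:2]
            blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                         swapRB=True, crop=False)
            self.net.setInput(blob)
            outs = self.net.forward(self.net.getUnconnectedOutLayersNames())
            for out in outs:
                for det in out:
                    # det = [cx, cy, w, h, objectness, class scores...]
                    if det[5:].max() > 0.5:  # confidence threshold
                        x1, y1, x2, y2 = yolo_to_pixel_box(*det[:4], w, h)
                        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.imshow("detections", frame)
            cv2.waitKey(1)

    rclpy.init()
    rclpy.spin(YoloViewer())

# Call main() from a ROS 2 workspace with the Isaac Sim bridge running.
```

Is this the recommended way to get real-time detections overlaid on the simulated camera stream, or is there a more direct way to do the inference inside Isaac Sim itself?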
Please help.