Running inference on the Carter robot

Hi Team,

I am currently working with the Carter robot, and I have animated a human in the scene using the omni.anim.people extension.

My robot freely traverses the environment, and my objective is to visually identify the animated human and draw a bounding box around it. To achieve this, I have successfully generated synthetic data of the human and used it to train a YOLOv4 model.

In an isolated test environment, the YOLOv4 model performs as expected, detecting the human and drawing a bounding box around it.

However, my goal is to replicate this result within the Isaac Sim viewport while running the robot with ROS/ROS 2 Navigation. I am looking to implement real-time inference with my YOLOv4 model in Isaac Sim, so that the model detects the animated human and accurately places a bounding box around it during play mode.
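For context, here is a minimal sketch of the kind of node I have in mind, assuming I load my trained darknet weights through OpenCV's DNN module. The topic names and file paths are placeholders for my setup:

```python
# Sketch of a ROS 2 node: subscribe to the Isaac Sim camera topic,
# run a YOLOv4 darknet model via OpenCV's DNN module, and republish
# the image with bounding boxes drawn around detected humans.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import cv2


class YoloV4Detector(Node):
    def __init__(self):
        super().__init__('yolov4_detector')
        # Topic names are placeholders; match your ROS2 Camera Helper setup.
        self.sub = self.create_subscription(Image, '/rgb', self.on_image, 10)
        self.pub = self.create_publisher(Image, '/rgb_annotated', 10)
        self.bridge = CvBridge()
        # Paths to the trained darknet config/weights are placeholders.
        net = cv2.dnn.readNetFromDarknet('yolov4.cfg', 'yolov4.weights')
        self.model = cv2.dnn_DetectionModel(net)
        self.model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    def on_image(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        class_ids, scores, boxes = self.model.detect(
            frame, confThreshold=0.5, nmsThreshold=0.4)
        for box, score in zip(boxes, scores):
            # Assuming a single-class model trained on the human class.
            x, y, w, h = box
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f'human {score:.2f}', (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        self.pub.publish(self.bridge.cv2_to_imgmsg(frame, encoding='bgr8'))


def main():
    rclpy.init()
    rclpy.spin(YoloV4Detector())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```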

I am using ROS 2 Humble with Isaac Sim 2023.1 on Ubuntu 22.04.

Please help.

Can any NVIDIA moderator help me with this…

Hi Arjun,

First, I wanted to point out a new Replicator tutorial for human detection in Isaac Sim, added in last week's bug-fix release: 10.9. Agent Simulation Synthetic Data Generation — Omniverse IsaacSim latest documentation
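As a rough sketch of what that tutorial covers, capturing RGB frames together with tight 2D bounding box labels via Replicator looks something like this. The output directory, resolution, and camera pose are placeholders; see the tutorial for the full agent-simulation workflow:

```python
# Minimal Replicator sketch: render RGB frames with tight 2D bounding
# box annotations from a camera in the scene.
import omni.replicator.core as rep

camera = rep.create.camera(position=(0, 0, 2))  # placeholder pose
render_product = rep.create.render_product(camera, (1280, 720))

writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(
    output_dir="/tmp/human_sdg",    # placeholder output directory
    rgb=True,
    bounding_box_2d_tight=True,     # 2D boxes for training YOLO
)
writer.attach([render_product])

# Trigger capture on every rendered frame for 100 frames.
with rep.trigger.on_frame(num_frames=100):
    pass  # scene randomization (poses, lights, etc.) would go here

rep.orchestrator.run()
```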

Also, for running inference with Isaac Sim, this tutorial from Isaac ROS may be helpful: Isaac ROS Object Detection — isaac_ros_docs documentation

Hi @arjun.mangal, for interfacing Isaac Sim with Isaac ROS, you can follow this tutorial. It will walk through the steps needed for a human detection use case in Isaac Sim.
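Once that pipeline is up, a quick way to sanity-check the output on the ROS 2 side is to subscribe to the detection array it publishes. A minimal sketch, assuming the detections arrive on a '/detections' topic as vision_msgs/Detection2DArray (check your launch file for the actual topic name):

```python
# Sketch of a ROS 2 listener for a Detection2DArray stream produced by
# an Isaac ROS object detection pipeline. The '/detections' topic name
# is an assumption.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray


class DetectionListener(Node):
    def __init__(self):
        super().__init__('detection_listener')
        self.sub = self.create_subscription(
            Detection2DArray, '/detections', self.on_detections, 10)

    def on_detections(self, msg):
        for det in msg.detections:
            # In Humble's vision_msgs, each result wraps an ObjectHypothesis.
            for result in det.results:
                self.get_logger().info(
                    f'class={result.hypothesis.class_id} '
                    f'score={result.hypothesis.score:.2f}')


def main():
    rclpy.init()
    rclpy.spin(DetectionListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```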

Hope this helps!