Drone autonomous exploration with Jetson Orin, 3D LiDAR and Intel RealSense depth camera

Hey, I’m a Bachelor’s student currently working on a drone project. The goal is for the drone to autonomously explore a building (a single floor only) and find objects of interest (via 3D object detection/pose estimation). I recently learned about NVIDIA Isaac ROS and was wondering if it could be helpful for the project.

Our hardware setup:
  • Jetson Xavier NX
  • Velodyne VLP16 Lidar
  • Intel Realsense D455
  • Xsens MTi-610 high-resolution IMU
  • Pixhawk Flight Controller (using MAVLink for communication)

I saw that Isaac ROS includes both Visual SLAM and 3D object detection/pose estimation. Is there a way to include the LiDAR information in the SLAM? Also, is there a plan to add support for a pose estimation algorithm that makes use of depth information from a depth camera like the Intel RealSense?

Any help would be greatly appreciated.


I haven’t got an answer yet unfortunately. I’m working on a similar project, but using multiple agents to find the targeted object. Can I ask which autopilot firmware you are using?
What you’re asking for is also something I’m looking for, so your post is saved; if I get an answer, I’ll come back and let you know.

Hey @rbouderrah, thanks for the reply. We are currently using PX4 but are considering switching to ArduPilot. We are communicating with the drone via MAVROS. If I make any progress, I will update this post to let you know.
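One side note on the MAVROS/PX4 combination: ROS uses an ENU world frame while PX4 works in NED, and MAVROS converts between the two for you. The position part of that conversion is simple enough to sketch in plain Python (no ROS needed; the function names here are our own, not a MAVROS API):

```python
# Sketch of the ENU <-> NED position conversion that MAVROS performs
# internally when bridging ROS (ENU) and PX4 (NED) coordinate frames.

def enu_to_ned(x_e, y_e, z_e):
    """ROS ENU (x east, y north, z up) -> PX4 NED (x north, y east, z down)."""
    return (y_e, x_e, -z_e)

def ned_to_enu(x_n, y_n, z_n):
    """PX4 NED -> ROS ENU; the swap-and-negate mapping is its own inverse."""
    return (y_n, x_n, -z_n)

# A point 3 m east, 4 m north, 2 m up in ENU:
print(enu_to_ned(3.0, 4.0, 2.0))  # (4.0, 3.0, -2.0)
```

Keeping this convention straight saves a lot of debugging when comparing SLAM output against the flight controller’s local position.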

Hi @Gi_T

Let me help you with Isaac ROS Visual SLAM.

Currently, Isaac ROS Visual SLAM 2.1.0 doesn’t support any LiDAR inputs, but we are working to add this feature.
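Until native LiDAR support lands, one common workaround is to run a separate LiDAR odometry/SLAM node (e.g. for the VLP-16) alongside Visual SLAM and fuse the two pose estimates with a filter node such as robot_localization. The one-dimensional core of that fusion is inverse-variance weighting; a toy sketch (all numbers made up for illustration, not a replacement for a real EKF):

```python
# Toy illustration of fusing two independent scalar estimates
# (e.g. visual odometry vs. lidar odometry along one axis) by
# inverse-variance weighting -- the per-axis idea behind EKF-based
# fusion nodes such as robot_localization.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted mean of two scalar estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is more certain than either input
    return fused, fused_var

# Visual SLAM says x = 2.0 m (variance 0.04); lidar says x = 2.2 m (variance 0.01).
x, v = fuse(2.0, 0.04, 2.2, 0.01)
print(round(x, 3), round(v, 3))  # 2.16 0.008
```

Note how the fused estimate leans toward the lower-variance (lidar) measurement; a real filter additionally propagates uncertainty over time and handles the full 6-DoF state.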

As for pose estimation, we are working to add depth support as well, but right now it only uses a camera image.
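In the meantime, a common interim approach for getting a 3D object position is to take the 2D detection’s pixel, look up its depth in the D455’s aligned depth image, and back-project through the camera intrinsics (the RealSense SDK exposes this as rs2_deproject_pixel_to_point). A minimal pinhole-model sketch, with hypothetical intrinsic values:

```python
# Pinhole back-projection: pixel (u, v) plus metric depth -> 3D point
# in the camera frame. Intrinsics (fx, fy, cx, cy) below are made-up
# example values; read the real ones from the camera_info topic.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth depth_m to camera-frame (x, y, z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A detection at the optical center, 1.5 m away:
print(deproject(424.0, 240.0, 1.5, 615.0, 615.0, 424.0, 240.0))  # (0.0, 0.0, 1.5)
```

This gives a 3D position (not a full 6-DoF pose), but it is often enough for flying toward an object of interest.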
For any other issue, let me know.

Meanwhile, I suggest following all of our robotics events during GTC. We made a list of sessions that you should not miss: NVIDIA GTC 2024 - robotics sessions that you must don't miss!