How to convert a RealSense point cloud for use as a second lidar

I have tried to find a way to do this but have not succeeded.

Hi Max,
You can do this using the rgbd_processing DepthImageFlattening component.

First add “rgbd_processing” to the “modules” list in your app graph, and to the modules attribute in your BUILD file.
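For reference, a minimal “modules” section of the app JSON would look something like this (your app will likely list other modules as well):

```json
{
  "modules": [
    "rgbd_processing"
  ]
}
```

In the BUILD file, the corresponding entry goes in the modules attribute, assuming you are using the standard isaac_app macro.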

Then add the following node:-

  {
    "name": "image_flattening",
    "components": [
      {
        "name": "message_ledger",
        "type": "isaac::alice::MessageLedger"
      },
      {
        "name": "isaac.rgbd_processing.DepthImageFlattening",
        "type": "isaac::rgbd_processing::DepthImageFlattening"
      },
      {
        "name": "isaac.alice.Throttle",
        "type": "isaac::alice::Throttle"
      }
    ]
  },

Connect up the edges like this:-

  {
    "source": "camera/realsense/depth",
    "target": "image_flattening/isaac.alice.Throttle/input"
  },
  {
    "source": "image_flattening/isaac.alice.Throttle/output",
    "target": "image_flattening/isaac.rgbd_processing.DepthImageFlattening/depth"
  },
  {
    "source": "image_flattening/isaac.rgbd_processing.DepthImageFlattening/flatscan",
    "target": "navigation.subgraph/interface/flatscan_2_for_obstacles"
  },

And finally add this to the config:-

"image_flattening": {
  "isaac.rgbd_processing.DepthImageFlattening": {
    "min_distance": 0.1,
    "max_distance": 10.0,
    "height_min": 0.08,
    "height_max": 0.20,
    "ground_frame": "robot",
    "camera_frame": "camera",
    "range_delta": 0.015,
    "sector_delta": 0.008
  },
  "isaac.alice.Throttle": {
    "data_channel": "input",
    "output_channel": "output",
    "minimum_interval": 0.1,
    "use_signal_channel": false
  }
},

Hope you can get that working.

Hi stephen.amos,
First, thank you for your help.
It works!
However, in Sight it does not show the beam endpoints from
navigation.localization.viewers/FlatscanViewer/beam_endpoints

This is my configuration.

    {
      "source": "realsense_camera/RealsenseCamera/depth",
      "target": "realsense_image_flattening/Throttle/input"
    },
    {
      "source": "realsense_image_flattening/Throttle/output",
      "target": "realsense_image_flattening/DepthImageFlattening/depth"
    },
    {
      "source": "realsense_image_flattening/DepthImageFlattening/flatscan",
      "target": "interface/subgraph/realsense_flatscan"
    },

    {
      "source": "main_hardware.interface/subgraph/realsense_flatscan",
      "target": "navigation.subgraph/interface/flatscan_for_localization"
    },
    {
      "source": "main_hardware.interface/subgraph/realsense_flatscan",
      "target": "navigation.subgraph/interface/flatscan_for_obstacles"
    }

  "realsense_image_flattening": {
    "DepthImageFlattening": {
      "min_distance": 0.2,
      "max_distance": 10.0,
      "height_min": 0.08,
      "height_max": 0.20,
      "ground_frame": "robot",
      "camera_frame": "realsense_camera",
      "range_delta": 0.015,
      "sector_delta": 0.008
    },
    "Throttle": {
      "data_channel": "input",
      "output_channel": "output",
      "minimum_interval": 0.1,
      "use_signal_channel": false
    }
  },

Hi Max,
Glad you got that working. You should be able to get the visualisation of the beam lines by adding a FlatscanViewer.

Isaac has a number of special viewer components that can be used to visualise certain message protos in Isaac Sight. The lidar and the DepthImageFlattening both output FlatScanProto messages, so both can be viewed in Sight using the FlatscanViewer component like this:-

Make sure that you have the “viewers” module added in your node graph, and also in your BUILD file.
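As a sketch, with the viewer added your “modules” list would now include both of these, alongside whatever else your app already loads:

```json
{
  "modules": [
    "rgbd_processing",
    "viewers"
  ]
}
```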

Inside the existing image_flattening node, add the viewer component.

  {
    "name": "image_flattening",
    "components": [
      {
        "name": "message_ledger",
        "type": "isaac::alice::MessageLedger"
      },
      {
        "name": "isaac.rgbd_processing.DepthImageFlattening",
        "type": "isaac::rgbd_processing::DepthImageFlattening"
      },
      {
        "name": "isaac.alice.Throttle",
        "type": "isaac::alice::Throttle"
      },
      {
        "name": "FlatscanViewer",
        "type": "isaac::viewers::FlatscanViewer"
      }
    ]
  },

Add a new edge to pipe the flatscan output to the viewer.

  {
    "source": "navigation.subgraph/interface/flatscan_for_localization",
    "target": "image_flattening/FlatscanViewer/flatscan"
  },

You will also need to give the viewer the reference frame for publishing the beam lines, so add the following to the image_flattening config.

"image_flattening": {
  "isaac.rgbd_processing.DepthImageFlattening": {
    "min_distance": 0.1,
    "max_distance": 10.0,
    "height_min": 0.08,
    "height_max": 0.20,
    "ground_frame": "robot",
    "camera_frame": "realsense_camera",
    "range_delta": 0.015,
    "sector_delta": 0.008
  },
  "isaac.alice.Throttle": {
    "data_channel": "input",
    "output_channel": "output",
    "minimum_interval": 0.1,
    "use_signal_channel": false
  },
  "FlatscanViewer": {
    "flatscan_frame": "robot"
  }
},

When you re-run your app now you should see a new channel appear in the Channels list under image_flattening called FlatscanViewer. You may need to check the box to enable the stream in sight.

You can now see the output from the viewer by right-clicking on the Map viewer and selecting Settings. Then, in the Select Channel dropdown, find beam_endpoints and select it. Click Add Channel and then Update to go back to the viewer, where you should now see the beam endpoints. You can repeat this and add the beam_lines channel to also view the beam lines.

Hope you can get that working now. I still consider myself a novice with Isaac; it can be a steep learning curve, and there is a lot that is not documented. I have noticed a lot more activity on the forums now, and that’s really good for everyone, so I’m happy I can help.

Hi stephen.amos,
Thank you for the solution.

May I ask one more question?
If I want to use DepthImageFlattening as a second lidar, what should I do?
FlatscanViewer2 seems to be empty, and sometimes this error occurs:
2020-07-07 15:24:46.331 PANIC engine/alice/components/PoseTree.cpp@84: Cannot set the transformation lidar_2_lattice_T_lidar_2 at time 1.605370


This is my configuration

    {
      "source": "main_hardware.interface/subgraph/realsense_flatscan",
      "target": "delivery.interface/subgraph/flatscan_2"
    },
    {
      "source": "interface/subgraph/flatscan_2",
      "target": "navigation.subgraph/interface/flatscan_2_for_obstacles"
    },
    {
      "source": "interface/subgraph/flatscan_2",
      "target": "navigation.subgraph/interface/flatscan_2_for_localization"
    }

"navigation.localization.global_localization": {
    "GridSearchLocalizer": {
      "tick_period": "250ms",
      "use_second_flatscan": true
    }
},

Hi Max,
Looks like your two flatscan inputs are set up fine. In Isaac 2020 the navigation system was extended to handle two lidars. The error you are getting appears because, normally, each lidar device gets a pose in the pose tree (you can see the pose tree in Sight); the pose lets Isaac know how the lidar is oriented and positioned relative to the robot’s body. The error message is saying that it cannot find this pose for the second lidar, obviously because it is in fact the camera and not a second lidar.

What it’s looking for is a node called lidar_2, with a pose in the pose tree between lidar_2 and lidar_2_lattice. The lidar_2_lattice frame should already have been generated by the local planner, so it should be visible in Sight. At this point I am guessing a bit more as to how you can fix this, but I would try renaming your realsense_camera node to lidar_2. This may still not work, as it depends where in the navigation stack lidar_2 gets wired up to lidar_2_lattice, so you may have to use a PoseInitializer to create the pose yourself if it’s not being generated.
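If you do end up needing a PoseInitializer, a sketch of it would look like the following. Note the frame names and pose values here are my assumptions: I’m guessing the missing link is the camera’s mount pose robot_T_lidar_2, and the identity pose is a placeholder you would replace with the camera’s actual position and orientation on your robot (the pose format is [qw, qx, qy, qz, x, y, z]).

```json
{
  "name": "lidar_2_pose",
  "components": [
    {
      "name": "PoseInitializer",
      "type": "isaac::alice::PoseInitializer"
    }
  ]
}
```

And in the config:

```json
"lidar_2_pose": {
  "PoseInitializer": {
    "lhs_frame": "robot",
    "rhs_frame": "lidar_2",
    "pose": [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  }
}
```

Once lidar_2 is connected to the pose tree this way, the lattice transform from the error message should be settable by the navigation stack.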

I would say that, so far as I understand the way the navigation stack works, it was probably not originally set up to cater for your scenario; however, I cannot see any reason why it should not be possible to get it working. I would also think about whether you actually need both inputs for localisation, as this will probably have an impact on performance.

That’s probably as far as I can help with that one, but I hope you can work that out, or someone else can help further.