Hey, is there any way to get a vectorized tensor of readings from the RTX Lidars on cloned robots?
For context, I am trying to create a reinforcement learning task based on the OmniIsaacGymEnvs repo, and I would like to train the RL model directly on the point cloud from the lidar.
For that I have attached an RTX Lidar to the robot, and now I need to read the lidar point cloud from all robots and build a tensor for the observations in the training loop. Currently, I'm getting the data with an annotator as in this tutorial, iterating over each cloned robot and concatenating the results into a tensor. This is undesirable in RL tasks since it is rather slow. Is there any other interface or wrapper to directly get a vectorized tensor, like the positions and orientations from an ArticulationView?
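For reference, this is roughly what my current per-robot loop looks like (a minimal sketch with NumPy and hypothetical names; the actual annotator API calls are omitted, and `batch_pointclouds`, `max_points`, and the fixed padding size are my own assumptions, since each lidar can return a different number of points per frame):

```python
import numpy as np

# Hypothetical stand-in for annotator.get_data() results: one (N_i, 3)
# point cloud per cloned robot, with varying point counts N_i.
clouds = [np.random.rand(n, 3).astype(np.float32) for n in (980, 1024, 1001)]

def batch_pointclouds(clouds, max_points):
    """Pad/truncate each (N_i, 3) cloud to (max_points, 3) and stack
    them into one (num_robots, max_points, 3) observation tensor."""
    batch = np.zeros((len(clouds), max_points, 3), dtype=np.float32)
    for i, pc in enumerate(clouds):
        n = min(len(pc), max_points)
        batch[i, :n] = pc[:n]  # zero-padded beyond n points
    return batch

obs = batch_pointclouds(clouds, max_points=1024)
print(obs.shape)  # (3, 1024, 3)
```

The per-robot Python loop (plus the per-frame annotator reads behind it) is the part I'd like to replace with a single vectorized call, if such an interface exists.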