Isaac Lab camera sensor with manager-based environment or direct workflow?

I’m going through the Isaac Lab tutorials and modifying them slightly to try to create an RL task with a Jetbot. Should I be using the direct workflow (Creating a Direct Workflow RL Environment — Isaac Lab documentation) if I want observations to include camera data, or is there a way to use the manager-based approach? Looking at the mdp submodule (omni.isaac.lab.envs.mdp — Isaac Lab documentation), there didn’t seem to be any camera-related functions, and in the adding-sensors tutorial (Adding sensors on a robot — Isaac Lab documentation) they didn’t seem to use the manager-based workflow of creating an ObservationCfg-type class.

Looking for guidance on what best practices would be, thanks.
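
For reference, here is roughly what I imagined a custom manager-based observation term might look like. This is only a sketch of my guess: the sensor name "camera", the data layout, and the function itself are my own assumptions, not anything I found in the mdp module.

```python
import torch
from omni.isaac.lab.envs import ManagerBasedRLEnv


def camera_rgb(env: ManagerBasedRLEnv) -> torch.Tensor:
    """Flattened RGB image from a camera sensor (hypothetically named "camera" in the scene cfg)."""
    # Look the sensor up by the name it was given in the scene config.
    camera = env.scene["camera"]
    # Assumption: "rgb" output is a (num_envs, H, W, C) tensor, where C may include alpha.
    rgba = camera.data.output["rgb"]
    rgb = rgba[..., :3]  # keep only the RGB channels
    return rgb.reshape(rgb.shape[0], -1).float()  # flatten per environment
```

If something like this works, I assume it would be wired in with an ObservationTermCfg(func=camera_rgb) entry in the policy observation group, like any other custom term.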


+1 on this!


Just to follow up, I found this direct-workflow example of a cartpole environment that uses a camera (IsaacLab/source/extensions/omni.isaac.lab_tasks/omni/isaac/lab_tasks/direct/cartpole/cartpole_camera_env.py at main · isaac-sim/IsaacLab · GitHub). I haven’t tried to adapt it yet, but I’m putting it out there in case it’s helpful, and I’ll update if it’s a success.
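
For anyone skimming, the camera-relevant pieces of that example look roughly like the following. This is paraphrased from memory, so the exact parameter values and attribute names in the repo may differ.

```python
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.sensors import TiledCameraCfg

# In the env cfg: one tiled camera per cloned environment, rendered in a batch.
tiled_camera: TiledCameraCfg = TiledCameraCfg(
    prim_path="/World/envs/env_.*/Camera",  # matches every environment clone
    data_types=["rgb"],
    spawn=sim_utils.PinholeCameraCfg(focal_length=24.0, clipping_range=(0.1, 20.0)),
    width=100,
    height=100,
)

# In the env class: observations come straight from the rendered batch.
def _get_observations(self) -> dict:
    # data.output["rgb"] is a (num_envs, H, W, C) tensor
    return {"policy": self._tiled_camera.data.output["rgb"].clone()}
```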

This did end up working, and you can see the example here: JetbotDeepRL/jetbot/jetbotenv.py at main · ih/JetbotDeepRL · GitHub

I’d say the main differences from the Cartpole example are using a CameraCfg instead of a TiledCameraCfg and slicing out the alpha channel in _get_observations.
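
Concretely, the alpha slicing in my _get_observations looks something like this (a sketch; the self._camera attribute name follows my own env, and the 4-channel RGBA layout is what I saw with the standard Camera sensor on my version):

```python
def _get_observations(self) -> dict:
    # The standard Camera sensor returned RGBA for the "rgb" data type in my
    # version, so I drop the alpha channel before handing the batch to the policy.
    rgba = self._camera.data.output["rgb"]  # (num_envs, H, W, 4)
    rgb = rgba[..., :3]                     # slice out the alpha channel
    return {"policy": rgb.clone()}
```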

