I am trying to run a task in Isaac Lab with a TiledCamera object with RGB data.
The images I obtain from calling camera_object.data.output['rgb'] are all "too dark" and don't match what the Isaac Sim GUI shows as the camera output.
It seems as if there is some image post-processing going on.
I tried to make my problem clearer with a small example. It is based on tutorials/04_sensors/run_usd_camera.py, but with a TiledCamera instead of a Camera object.
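Roughly, the TiledCamera part of that example looks like the sketch below; the prim path, resolution, and pinhole parameters are placeholders rather than my exact values:

```python
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.sensors import TiledCamera, TiledCameraCfg

# Tiled camera configuration (placeholder path / resolution / intrinsics)
tiled_camera_cfg = TiledCameraCfg(
    prim_path="/World/envs/env_.*/Camera",
    height=480,
    width=640,
    data_types=["rgb"],
    spawn=sim_utils.PinholeCameraCfg(
        focal_length=24.0,
        focus_distance=400.0,
        horizontal_aperture=20.955,
        clipping_range=(0.1, 1.0e5),
    ),
)
camera = TiledCamera(cfg=tiled_camera_cfg)

# ... the simulation is stepped and camera.update(dt) is called each step ...

rgb = camera.data.output["rgb"]  # (num_cameras, height, width, 3) tensor, much darker than the GUI view
```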
And this is the image obtained from camera.data.output['rgb']:
I tried making the images brighter by increasing the light intensity of the scene, but with little effect.
Maybe I’m doing something wrong, but I don’t see any other options I can set in the TiledCameraCfg.
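For reference, the lighting change I tried looked roughly like this (the dome light type and the intensity value are just examples, not a recommendation):

```python
import omni.isaac.lab.sim as sim_utils

# Spawn a dome light with a much higher intensity than the tutorial default
light_cfg = sim_utils.DomeLightCfg(intensity=10000.0, color=(0.75, 0.75, 0.75))
light_cfg.func("/World/Light", light_cfg)
```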
I am facing a similar issue. Did you find any solution? @lukas61
I tried both the TiledCamera and the Camera class. The Camera class returns "RGBA" with 4 channels, whereas the TiledCamera only returns "RGB" with 3 channels. I guess it's a bug there: the tiled camera should also return RGBA, but somehow the A channel is sliced off.
Here are the images from the TiledCamera as well as the Camera. For the Camera I used cv2.cvtColor(image[0].numpy(), cv2.COLOR_RGBA2RGB) to convert and then save, and it works perfectly fine, whereas I think the image from the TiledCamera is missing the alpha channel.
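For completeness, the saving code looks roughly like this; the variable names camera / tiled_camera are just placeholders for the two sensor instances, and the .cpu() calls are only needed if the tensors live on the GPU:

```python
import cv2

# Standard Camera: output["rgb"] comes back with 4 channels (RGBA)
rgba = camera.data.output["rgb"][0].cpu().numpy()        # (H, W, 4)
rgb = cv2.cvtColor(rgba, cv2.COLOR_RGBA2RGB)             # drop the alpha channel
cv2.imwrite("camera.png", cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))  # OpenCV expects BGR

# TiledCamera: output["rgb"] already has only 3 channels, and the saved image is too dark
tiled_rgb = tiled_camera.data.output["rgb"][0].cpu().numpy()     # (H, W, 3)
cv2.imwrite("tiled_camera.png", cv2.cvtColor(tiled_rgb, cv2.COLOR_RGB2BGR))
```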
Unfortunately, I have not found a satisfactory solution so far. I switched to the Camera class, but the simulation is so much slower with that one that it is not really usable for me.
Your theory that the TiledCamera image is missing the alpha channel is interesting. I found this in the official IsaacLab docs:
"
Attention
Please note that the fidelity of RGB images may be lower than the standard camera sensor due to the tiled rendering process. Various ray tracing effects such as reflections, refractions, and shadows may not be accurately captured in the RGB images. We are currently working on improving the fidelity of the RGB images.
"
Maybe the development of the TiledCamera has not progressed far enough yet. I could not find any other information regarding this issue.
I’m stuck at this point. I want to use a large number of environments with two cameras each, but the memory isn’t sufficient. It appears that initializing the cameras uses a single thread.