Hi all, I’ve looked around forums and can’t really find anything on the topic. Is there a way to make a monochrome camera in Isaac sim, potentially for use with synthetic data generation?
Thank You!
Hi Kraig,
I don’t think the simulator uses any spectral properties of materials, so all light reflects the same regardless of its real-life wavelength. The only way I can see to truly model a monochrome sensor is to use the spectral response curve of the camera: try to reproduce the same wavelengths of light in the simulator, then apply the spectral efficiency as an estimated weighting on the pixel intensities of a captured image, while still ignoring how different wavelengths interact with different materials.
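Something like this rough sketch could approximate that (the response weights below are made up; read real values off your sensor’s spectral response curve, and rgb is assumed to be an (H, W, 3) uint8 capture from the simulator):

import numpy as np

# Hypothetical relative sensor response sampled near the R/G/B peaks
# (roughly 600/550/450 nm); take real values from the camera datasheet
response = np.array([0.9, 1.0, 0.7], dtype=np.float32)
response /= response.sum()  # normalise so intensities stay in 0-255

# rgb: (H, W, 3) uint8 image captured in the simulator under white light
mono = (rgb.astype(np.float32) @ response).astype(np.uint8)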
Maybe just try using a white light, take a greyscale capture, and compare it to your actual setup to see how it differs.
Good luck :)
Hi @kraig9, sorry for my late reply.
Isaac Sim doesn’t have a built-in “monochrome camera” mode, but there are a few practical approaches to get single-channel greyscale output, depending on your workflow:
1. Convert in your capture loop. The rgb annotator outputs RGBA (uint8, shape H×W×4), which you can convert to greyscale with a standard luminance formula:
import numpy as np
# After getting RGBA data from the annotator
rgba = rgb_annotator.get_data() # shape (H, W, 4), uint8
grey = np.dot(rgba[:, :, :3].astype(np.float32), [0.2989, 0.5870, 0.1140]).astype(np.uint8)
# grey is now shape (H, W), single-channel
This uses the ITU-R BT.601 luminance weights, a standard approximation of perceived brightness. A real monochrome sensor’s spectral response won’t match these weights exactly, but they’re a reasonable default for most purposes.
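If you don’t already have the annotator wired up, a minimal setup sketch looks like this (the camera prim path and resolution are placeholders for your own scene):

import omni.replicator.core as rep

# Placeholder camera prim path and resolution; use your own
render_product = rep.create.render_product("/World/Camera", (1280, 720))
rgb_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
rgb_annotator.attach(render_product)
rep.orchestrator.step()  # render a frame so get_data() has data to return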
2. Use a Replicator augmentation. If you’re using Omni Replicator for synthetic data generation, you can register a custom augmentation that converts RGB to greyscale directly in the pipeline. This works with both NumPy (CPU) and Warp (GPU).
NumPy version:
import omni.replicator.core as rep
import numpy as np
def rgb_to_greyscale_np(data_in):
    grey = np.dot(data_in[:, :, :3].astype(np.float32), [0.2989, 0.5870, 0.1140])
    result = np.stack([grey, grey, grey, data_in[:, :, 3].astype(np.float32)], axis=-1)
    return result.astype(np.uint8)
rep.AnnotatorRegistry.register_augmentation(
    "rgb_to_greyscale",
    rep.annotators.Augmentation.from_function(rgb_to_greyscale_np)
)
# Create an annotator with the augmentation applied
grey_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
grey_augm = rep.AnnotatorRegistry.get_augmentation("rgb_to_greyscale")
grey_annotator.augment(grey_augm)
grey_annotator.attach(render_product)
Warp (GPU) version for better performance:
import warp as wp
@wp.kernel
def rgb_to_greyscale_wp(data_in: wp.array3d(dtype=wp.uint8), data_out: wp.array3d(dtype=wp.uint8)):
    i, j = wp.tid()
    r = wp.float32(data_in[i, j, 0])
    g = wp.float32(data_in[i, j, 1])
    b = wp.float32(data_in[i, j, 2])
    grey = wp.uint8(0.2989 * r + 0.5870 * g + 0.1140 * b)
    data_out[i, j, 0] = grey
    data_out[i, j, 1] = grey
    data_out[i, j, 2] = grey
    data_out[i, j, 3] = data_in[i, j, 3]
grey_augm = rep.annotators.Augmentation.from_function(rgb_to_greyscale_wp)
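The resulting augmentation then plugs into the same augment/attach flow shown above for the NumPy version:

grey_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
grey_annotator.augment(grey_augm)
grey_annotator.attach(render_product)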
3. Save greyscale files with a custom Writer. If you want the greyscale images saved directly to disk (e.g., for training data), you can register a custom Writer that captures the rgb annotator data and writes single-channel PNG files using PIL:
from PIL import Image
# In your writer's write() method:
grey_img = Image.fromarray(rgba_data[:, :, :3]).convert("L")
grey_img.save(f"frame_{i:04d}.png")
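Putting that together, a minimal writer sketch could look like this (the class name, output directory, and frame-naming scheme are placeholders of mine, not an official API beyond the Writer base class):

import os
import omni.replicator.core as rep
from PIL import Image

class GreyscaleWriter(rep.Writer):
    def __init__(self, output_dir="_out_greyscale"):
        self.annotators = ["rgb"]  # request RGBA frames from the rgb annotator
        self._output_dir = output_dir
        self._frame_id = 0
        os.makedirs(output_dir, exist_ok=True)

    def write(self, data):
        # data["rgb"] is (H, W, 4) uint8; drop alpha and convert to single channel
        grey_img = Image.fromarray(data["rgb"][:, :, :3]).convert("L")
        grey_img.save(os.path.join(self._output_dir, f"frame_{self._frame_id:04d}.png"))
        self._frame_id += 1

rep.WriterRegistry.register(GreyscaleWriter)
writer = rep.WriterRegistry.get("GreyscaleWriter")
writer.initialize(output_dir="_out_greyscale")
writer.attach([render_product])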
As Ricardo mentioned, Isaac Sim’s RTX renderer doesn’t model spectral wavelength behavior — all materials reflect a single broadband “white” illumination. So the greyscale you get from a luminance conversion of the RGB output is a reasonable approximation for most machine vision use cases, but it won’t capture wavelength-dependent effects like IR sensitivity differences between materials. If you need that level of fidelity, you’d need to manually adjust material albedos to match the spectral response curve of your specific sensor, which is a much more involved process.
For most SDG workflows (object detection, segmentation, pose estimation training), the luminance-based conversion above should be perfectly sufficient.
Hope this helps!
I will also create an internal ticket (feature request) about this!