Capturing a Camera Frame and sending it to another Node?

Hi,

I’m trying to implement a node which emulates the image detection code on my robot. I’d like to be able to input a CameraPrim and somehow get an RGB image that I can pass to another custom node which contains my image detector. The output from that detector node will be the [x,y] coordinates of the thing it detected, which will be passed to nodes further down the line.

I’ve figured out that I probably need to use a Render Product Path node and feed it to something else, but I’m unclear on how to convert that path into an in-memory image that I can pass to OpenCV (import cv2) or numpy.

I’ve looked at the source for OgnROS2CameraHelper.py, but I’m struggling to figure out where the frame buffer is.


The helper node activates replicator node templates that capture the data and pass it along.

https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_replicator.html

Here is one example (from omni.isaac.core_nodes) of registering a node template that converts RGBA to RGB; its output is later consumed by the ROS2 image publisher:

# Assumes the imports used in the Isaac Sim sources, e.g. omni.syntheticdata,
# omni.syntheticdata._syntheticdata as sd, and a `sensors` helper exposing
# get_synthetic_data().

# Render variable corresponding to the RGB sensor type
rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
template_name = rv + "IsaacConvertRGBAToRGB"

# Register an on-demand node template that takes the raw RGBA buffer exported
# by the render variable and converts it to RGB
template = sensors.get_synthetic_data().register_node_template(
    omni.syntheticdata.SyntheticData.NodeTemplate(
        omni.syntheticdata.SyntheticDataStage.ON_DEMAND,  # node template stage
        "omni.isaac.core_nodes.IsaacConvertRGBAToRGB",  # node template type
        [
            # Feed the raw array export of the render variable into the converter
            omni.syntheticdata.SyntheticData.NodeConnectionTemplate(
                rv + "ExportRawArray",
                attributes_mapping={
                    "outputs:data": "inputs:data",
                    "outputs:width": "inputs:width",
                    "outputs:height": "inputs:height",
                },
            ),
            # Gate execution so the node runs in step with the simulation
            omni.syntheticdata.SyntheticData.NodeConnectionTemplate(
                rv + "IsaacSimulationGate",
                attributes_mapping={"outputs:execOut": "inputs:execIn"},
            ),
        ],
        attributes={"inputs:encoding": "rgba8"},
    ),
    template_name=template_name,
)
Thanks for the pointers!

That system looks like it has been designed for producing on-disk datasets for AI training. I have a different problem: real-time acquisition of camera imagery for use in a physics simulation. I’m still not seeing where the real-time access happens.

Taking a round trip to disk would introduce unacceptable latency. Ideally, I’d like to be able to use camera frame N-1 in the execution loop of the current PushGraph execution.

Perhaps I’m misunderstanding what the SyntheticData.NodeConnectionTemplate does. Do you have an example of a node that reads the output of the synthetic data into a memory buffer?

Edit: To provide some context, I have a control loop that is Camera → Framebuffer → Object Position and Velocity Estimation from Image → Servo Control Output Signal; the control algorithm is highly sensitive to the latency between the camera capture and the object q+q_dot estimation.

@ahaidu to provide a comment

Here is an example of accessing RGB data using annotators; you can even keep the data in the GPU’s memory by requesting it with the "cuda" device argument: rgb_annot.get_data(device="cuda")
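
For reference, a minimal sketch of that annotator approach, assuming omni.replicator.core is available; the camera prim path /World/Camera and the resolution are placeholders for your own setup:

import omni.replicator.core as rep

# Create a render product for the camera prim (path and resolution are placeholders)
render_product = rep.create.render_product("/World/Camera", resolution=(640, 480))

# Attach the built-in "rgb" annotator to that render product
rgb_annot = rep.AnnotatorRegistry.get_annotator("rgb")
rgb_annot.attach([render_product])

# After at least one frame has rendered:
rgb_host = rgb_annot.get_data()              # host (numpy) buffer, typically (H, W, 4) uint8
rgb_gpu = rgb_annot.get_data(device="cuda")  # or keep the buffer in GPU memory

# For OpenCV, drop the alpha channel and reorder to BGR
# (assuming the host buffer is RGBA uint8):
# import cv2
# bgr = cv2.cvtColor(rgb_host[..., :3], cv2.COLOR_RGB2BGR)

Requesting the data with device="cuda" avoids the host copy, which should help with the latency constraint in your control loop.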