I’m trying to implement a node which emulates the image detection code on my robot. I’d like to be able to input a CameraPrim and somehow get an RGB image that I can pass to another custom node which contains my image detector. The output from that detector node will be the [x,y] coordinates of the thing it detected, which will be passed to nodes further down the line.
I’ve figured out that I probably need to use a Render Product Path node and feed that to something else, but I’m unclear on how to convert that path into an in-memory image I can pass to OpenCV (import cv2) or numpy.
I’ve looked at the source for OgnROS2CameraHelper.py, but I’m struggling to figure out where the frame buffer is.
That system looks like it was designed for producing on-disk datasets for AI training. I have a different problem: real-time acquisition of camera imagery for use in a physics simulation. I’m still not seeing where the real-time access happens.
Taking a round trip to disk would introduce unacceptable latency. Ideally, I’d like to be able to use camera frame N-1 in the execution loop of the current PushGraph execution.
Perhaps I’m misunderstanding what the SyntheticData.NodeConnectionTemplate does. Do you have an example of a node that reads the output of the synthetic data into a memory buffer?
Edit: To provide some context, I have a control loop that is Camera → Framebuffer → Object Position and Velocity Estimation from Image → Servo Control Output Signal. The control algorithm is highly sensitive to the latency between the camera capture and the object q+q_dot estimation.
Here is an example of accessing RGB data using annotators. You can even keep the data in the GPU’s memory by passing the “cuda” device argument: rgb_annot.get_data(device="cuda")
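A minimal sketch of the annotator approach, assuming the omni.replicator.core API available in recent Isaac Sim builds (the camera path and resolution below are placeholders). The Isaac Sim calls are shown as comments since they only run inside the simulator; the `rgba_to_bgr` helper is plain numpy and is my own addition, converting the annotator’s RGBA buffer to the BGR layout OpenCV expects:

```python
import numpy as np


def rgba_to_bgr(rgba: np.ndarray) -> np.ndarray:
    """Convert an HxWx4 RGBA uint8 buffer (as returned by the rgb
    annotator) to a contiguous HxWx3 BGR array for cv2."""
    return np.ascontiguousarray(rgba[..., (2, 1, 0)])


# --- Inside Isaac Sim (requires omni.replicator.core) ---
# import omni.replicator.core as rep
#
# # "/World/Camera" is a placeholder for your CameraPrim's path.
# rp = rep.create.render_product("/World/Camera", resolution=(640, 480))
# rgb_annot = rep.AnnotatorRegistry.get_annotator("rgb")
# rgb_annot.attach(rp)
#
# # ... step the app so at least one frame has rendered, then:
# frame = rgb_annot.get_data()            # numpy array on the host
# bgr = rgba_to_bgr(frame)                # hand this to cv2 / your detector
# # or keep it on the GPU to avoid a device->host copy:
# # gpu_frame = rgb_annot.get_data(device="cuda")
```

Staying on the GPU with `device="cuda"` avoids the readback copy entirely, which matters for the latency-sensitive control loop described above; you only need the numpy/BGR conversion if your detector runs on the CPU via cv2.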