How to access a node from my Custom Writer or Custom Annotator?

I am trying to find aspects of the simulation such as the distance of the object from the camera, the rotation of the object relative to the camera, and the illumination settings. I am unsure how to access this information from my custom writer: can a writer access an information bundle by itself, or does it need a custom annotator?

This is my AutoFunc node; it looks up the prims by name, extracts their world-space locations, and computes the distance between them:

# The AutoFunc decorator will create a new OmniGraph node with the specified inputs/outputs
# See https://docs.omniverse.nvidia.com/py/kit/source/extensions/omni.graph/docs/autonode.html
# For typing documentation https://docs.omniverse.nvidia.com/kit/docs/omni.graph.docs/latest/concepts/AutoNode.html
import math

import omni.graph.core as og
import omni.replicator.core as rep
import omni.usd
from pxr import Gf

FRAME = 0  # persistent frame counter, incremented on each node evaluation


@og.AutoFunc(module_name="omni.replicator")
def ComputeDistance(category: str, numSamples: int = 1) -> og.Bundle:
    """Based on https://forums.developer.nvidia.com/t/how-to-get-current-frame-number-in-replicator/216711/6

    Args:
        category (str): _description_
        numSamples (int, optional): _description_. Defaults to 1.

    Returns:
        og.Bundle: _description_
    """
    # camera_pos:og.Int3, bracket_pos:og.Int3, 
    # Note 1: numSamples input currently required
    # Note 2: Only bundle output currently supported, this will be expanded in the future.

    # Use global to have access to a persistent `frame` variable
    global FRAME
    print("FRAME", FRAME)
    FRAME += 1
    
    stage = omni.usd.get_context().get_stage()
    prim = [x for x in stage.Traverse() if category.capitalize() in x.GetName()][0]
    matrix: Gf.Matrix4d = omni.usd.get_world_transform_matrix(prim) # from /opt/nvidia/omniverse/kit-sdk-launcher/extscore/omni.usd/omni/usd/_impl/utils.py
    bracket_pos: Gf.Vec3d = matrix.ExtractTranslation()
    print("BRACKET POSITION*****", matrix, bracket_pos)

    prim = [x for x in stage.Traverse() if "Camera" in x.GetName()][0]
    matrix: Gf.Matrix4d = omni.usd.get_world_transform_matrix(prim) # from /opt/nvidia/omniverse/kit-sdk-launcher/extscore/omni.usd/omni/usd/_impl/utils.py
    camera_pos: Gf.Vec3d = matrix.ExtractTranslation()
    print("CAMERA POSITION*****", matrix, camera_pos)    
 
    x1, y1, z1 = camera_pos
    x2, y2, z2 = bracket_pos

    distance = int(math.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2))
    
    print("DISTANCE", distance)
    
    # TODO try writing info here
    # or make distance global and access from the writer
    
    bundle = og.Bundle("return", False)
    bundle.create_attribute("values", og.Type(og.BaseDataType.INT, 1)).value = distance
    return bundle


# This will allow the AutoFunc return attribute `out_0` to be automatically connected to a downstream node's `values` input
rep.utils.ATTRIBUTE_MAPPINGS.add(rep.utils.AttrMap("outputs_out_0", "inputs:values"))

# Register functionality into replicator
def compute_distance(category: str):
    return rep.utils.create_node("omni.replicator.ComputeDistance", category=category)
rep.randomizer.register(compute_distance)
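As a side note, the `int()` cast in the node truncates the distance to whole stage units. In isolation the computation reduces to a standard Euclidean distance (a minimal standalone sketch using `math.dist`; keep the value as a float if you need sub-unit precision):

```python
import math


def euclidean_distance(camera_pos, bracket_pos) -> float:
    """Same math as in ComputeDistance, without the int() truncation."""
    return math.dist(camera_pos, bracket_pos)


print(euclidean_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 5.0
print(int(math.dist((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))))      # 1, truncated from ~1.732
```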

Looking through the annotator registry files, I see that an annotator is registered from a node, e.g.:

AnnotatorRegistry.register_annotator_from_node(
    name="semantic_segmentation",
    input_rendervars=[
        NodeConnectionTemplate("InstanceSegmentationSync", attributes_mapping={"outputs:execOut": "inputs:exec"}),
        "InstanceSegmentationSDExportRawArray",
        "InstanceMappingPtr",
        "InstanceMapping",
    ],
    node_type_id="omni.replicator.core.SemanticSegmentation",
    output_data_type=np.uint32,
    output_channels=1,
    output_is_2d=True,
)

How should I access the output of my node omni.replicator.ComputeDistance in the writer?

Unfortunately, because the annotators are executed asynchronously from the simulation, accessing the USD directly from an annotator can lead to issues where the annotator accesses data out of sync with the rest of the ground truth. There’s currently no easy way to get around this, but we are working on a better solution that will guarantee correct results.

For your purposes though, I can suggest an alternative: in your custom writer, add the bounding_box_3d and camera_params annotators. Those will provide you with the world-space transforms of any semantically labelled objects as well as of the current camera. From there, a simple matrix multiplication will give you the transforms in camera space. Please reply if you run into any issues with this path and I'll be happy to give you a more concrete example to try.
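To illustrate the matrix step, here is a minimal NumPy sketch under two assumptions worth verifying against your Replicator version: that the camera_params annotator gives you a 4x4 world-to-camera (view) matrix, and that bounding_box_3d gives you each labelled prim's 4x4 world transform, both row-major in USD's row-vector convention (p' = p @ M):

```python
import numpy as np


def to_camera_space(prim_world_tf: np.ndarray, camera_view_tf: np.ndarray) -> np.ndarray:
    """Object-to-camera transform. With row vectors (p' = p @ M), the
    world-to-camera matrix is applied on the right."""
    return prim_world_tf @ camera_view_tf


def distance_to_camera(prim_world_tf: np.ndarray, camera_world_tf: np.ndarray) -> float:
    """Euclidean distance between the two world-space translations
    (the translation sits in the last row of a row-major 4x4 transform)."""
    return float(np.linalg.norm(prim_world_tf[3, :3] - camera_world_tf[3, :3]))


# Example: camera at the origin with identity orientation,
# object translated 3 units along X and 4 along Y.
camera_world = np.eye(4)
camera_view = np.linalg.inv(camera_world)  # world-to-camera
obj_world = np.eye(4)
obj_world[3, :3] = [3.0, 4.0, 0.0]

obj_in_cam = to_camera_space(obj_world, camera_view)
print(obj_in_cam[3, :3])                            # object position in camera space
print(distance_to_camera(obj_world, camera_world))  # 5.0
```

The same two matrices can be fed straight from the annotator outputs in the writer's `write()` call; only the field names need checking against the annotator documentation for your version.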