Getting Camera Position

I am trying to get the position of prims on my stage using this script: Get the Local Space Transforms for a Prim — Omniverse Kit documentation


It works, but the problem is that I always get the position (0, 0, 0), although I created the camera at (0, 0, 1200).


Am I missing something here or is there another way to get the position of the camera?

Thanks, best regards,
Julian Grimm

I think my issue is related to this behavior: Can't obtaing transform matrix applied by rep.modify.pose - #2 by dennis.lynch
However, if I add og.Controller.evaluate_sync(), I get the error “Controller.evaluated was never awaited”.
Maybe someone has a solution for this?

Hi @julian.grimm and thanks for reaching out! When you run the with rep.trigger.on_frame block, what you’re actually doing is building a graph, but that graph hasn’t run yet. That’s why, when you retrieve the transform, you get a translation of (0, 0, 0).
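This deferred-execution pattern can be illustrated with a small stand-alone sketch in plain Python. Nothing here uses Replicator's actual internals; all the names are hypothetical, and the sketch only shows the general idea of recording operations in a with block and running them later with an explicit step:

```python
# Minimal sketch of deferred execution; Graph, add, and step are
# hypothetical names, not Replicator API.
class Graph:
    def __init__(self):
        self.ops = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Leaving the with-block only finishes *building* the graph.
        return False

    def add(self, fn):
        # Record an operation without running it.
        self.ops.append(fn)

    def step(self):
        # Only now do the recorded operations actually run.
        for fn in self.ops:
            fn()

state = {"position": (0, 0, 0)}

with Graph() as g:
    g.add(lambda: state.update(position=(0, 0, 1200)))

print(state["position"])  # still (0, 0, 0): the graph was built, not run
g.step()
print(state["position"])  # (0, 0, 1200) after stepping the graph
```

Reading the transform before stepping is exactly the situation in the question: the camera prim exists, but the pose-setting operation hasn't executed yet.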

You can run the graph for one step as follows:

import omni.replicator.core as rep
import omni.usd
import asyncio

async def main():
    with rep.trigger.on_frame():
        rep.create.camera(position=(0, 0, 1200))
    
    await rep.orchestrator.step_async()
    
    stage = omni.usd.get_context().get_stage()
    camera_prim = stage.GetPrimAtPath("/Replicator/Camera_Xform")
    camera_pose = omni.usd.get_local_transform_SRT(camera_prim)
    
    translation = camera_pose[3]
    print(translation)

asyncio.ensure_future(main())
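The same awaiting rule also explains the “never awaited” error mentioned above: an async API must be awaited from inside a coroutine, not called like a regular function. A minimal stand-alone illustration (the evaluate function is a hypothetical stand-in, not the real og.Controller API):

```python
import asyncio

async def evaluate():
    # Hypothetical stand-in for an async API such as og.Controller.evaluate
    return "evaluated"

async def main():
    # Calling evaluate() without await would only create a coroutine object
    # and produce "RuntimeWarning: coroutine ... was never awaited".
    return await evaluate()

result = asyncio.run(main())
print(result)  # evaluated
```

Inside Kit's Script Editor an event loop is already running, so asyncio.run would fail there; that is why the snippet above schedules the coroutine with asyncio.ensure_future instead.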

Thank you for your answer. It worked for me; however, I am encountering another issue when I run it using step_async(). When I have multiple camera views (and therefore multiple render_products), the rendering gets very slow and I get the warning “Timed out awaiting frame”.

What I am trying to do is to render a part in multiple positions for training a classification network. Some positions belong to one class and other positions belong to another class, so the rendered images should be saved into separate folders. Here is the script that I have been using. Maybe someone knows a better way to achieve this.

import omni.replicator.core as rep
import omni.usd as usd
import omni.kit.commands as commands
import asyncio
import time
import os

# Absolute path to the working directory
script_path = os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir))
# path_on_pc = script_path
path_on_pc = 'D:\\'

# Camera parameters
# The f-stop is off by a factor of 10! ==> real f-stop = 4 ==> 0.4 in Omniverse
# Camera1
focal_length1 = 25
focus_distance1 = 1200
f_stop1 = 0.4
pixel_resolution1 = (2448, 2048)
horizontal_aperture1 = 8.5

# Camera2
focal_length2 = 25
focus_distance2 = 1500
f_stop2 = 0.9
pixel_resolution2 = (2448, 2048)
horizontal_aperture2 = 8.5

# Number of Images to Render per Dataset
num_images = 1500

# Get the current stage
stage = usd.get_context().get_stage()

# Set the render engine to path tracing (128 samples)
rep.settings.set_render_pathtraced(128)
# rep.settings.set_render_rtx_realtime()
rep.set_global_seed(68)

# Set the Z-Axis up
rep.settings.set_stage_up_axis('Z')

# Set the unit to meter
rep.settings.set_stage_meters_per_unit(1.0)

# Get the BasicWriter from the writer registry
writer1 = rep.WriterRegistry.get('BasicWriter')


async def run_replicator(output_path):
    with rep.new_layer():
        # region Create Objects and apply materials
        # Create the Camera1
        draufsicht = rep.create.camera(
            position=(0, 0, 1200),
            rotation=(0, -90, 0),
            focal_length=focal_length1,
            focus_distance=focus_distance1,
            f_stop=f_stop1,
            horizontal_aperture=horizontal_aperture1,
            name='Draufsicht'
        )

        # Create a new render_product (1 for each camera)
        render_product = rep.create.render_product(draufsicht, pixel_resolution1)

        # Create the Camera2
        seitenansicht = rep.create.camera(
            position=(1200, 800, 400),
            rotation=(0, -15, 35),
            focal_length=focal_length2,
            focus_distance=focus_distance2,
            f_stop=f_stop2,
            horizontal_aperture=horizontal_aperture2,
            name='Seitenansicht'
        )

        # Create a new render_product (1 for each camera)
        render_product2 = rep.create.render_product(seitenansicht, pixel_resolution2)

        # Create the floor plane
        floor = rep.create.plane(
            position=(0, 0, 0),
            rotation=(0, 0, 0),
            scale=(1000, 1000, 1000),
            semantics=[('class', 'floor')],
            name='floor',
        )

        # region Randomizer methods
# Randomize part position and rotation
        def move_Part(minPosition, maxPosition, minRotation, maxRotation):
            with part:
                rep.modify.pose(
                    position=rep.distribution.uniform(minPosition, maxPosition),
                    rotation=rep.distribution.uniform(minRotation, maxRotation)
                )
            return part.node

        # Randomize the floor material
        def random_Floor_Material():
            floor_material = rep.randomizer.materials(
                materials=rep.get.material(path_pattern="/Looks/Floor/*"),
                input_prims=floor
            )
            return floor_material.node

        # Randomize the part material
        def random_Part_Material():
            part_material = rep.randomizer.materials(
                materials=rep.get.material(path_pattern="/Looks/Parts/*"),
                input_prims=rep.get.prims(semantics=('class', 'part'))
            )
            return part_material.node

        # Randomize Dome Light
        def dome_Light():
            lights = rep.create.light(
                light_type="Dome",
                position=(0, 0, 0),
                rotation=rep.distribution.uniform((0, 0, -180), (0, 0, 180)),
                scale=(1, 1, 1),
                name='HDRI',
                texture=rep.distribution.choice([
                    'omniverse://localhost/Users/grimmjul/HDRIs/artist_workshop_2k.exr',
                    'omniverse://localhost/Users/grimmjul/HDRIs/ZetoCG_com_WarehouseInterior2b.hdr'
                    # 'omniverse://localhost/Users/grimmjul/HDRIs/industrial_pipe_and_valve_01_8k.hdr',
                    # 'omniverse://localhost/Users/grimmjul/HDRIs/workshop_8k.hdr'
                    ])
                )
            return lights.node

        # Randomize the Focus Distance of the camera
        def random_Focus_Distance(camera, MinFocusDistance, MaxFocusDistance):
            with camera:
                rep.modify.attribute(
                    name='focusDistance',
                    value=rep.distribution.uniform(MinFocusDistance, MaxFocusDistance)
                )
            return camera.node

        # Register Randomizers
        rep.randomizer.register(move_Part)
        rep.randomizer.register(random_Floor_Material)
        rep.randomizer.register(random_Part_Material)
        rep.randomizer.register(dome_Light)
        rep.randomizer.register(random_Focus_Distance)
        # endregion

        # Initialize and attach writer
        writer1.initialize(
            output_dir=path_on_pc + output_path,
            rgb=True,
            semantic_segmentation=True,
            instance_segmentation=False,
            instance_id_segmentation=False,
            distance_to_image_plane=True,
            normals=True,
            motion_vectors=False
        )

        writer1.attach([render_product, render_product2])

        # Trigger the randomizer at each frame
        with rep.trigger.on_frame():
            if output_path == '\\output\\IO':
                rep.randomizer.move_Part((-95, -46, 0), (95, 46, 0), (0, 0, -10), (0, 0, 10))
            else:
                rep.randomizer.move_Part((-110, -61, 0), (110, 61, 0), (-5, -5, 20), (5, 5, 340))
            rep.randomizer.random_Floor_Material()
            rep.randomizer.random_Part_Material()
            rep.randomizer.dome_Light()
            rep.randomizer.random_Focus_Distance(draufsicht, focus_distance1 - 200, focus_distance1 + 200)
            rep.randomizer.random_Focus_Distance(seitenansicht, focus_distance1 - 50, focus_distance1 + 50)

        # Start Orchestrator ==> Start the rendering procedure ==> Render specified number of frames
        for i in range(num_images):
            # Render frame by frame
            print("Rendering Image" + str(i).zfill(4) + " for Dataset: " + output_path.split('\\')[-1])
            await rep.orchestrator.step_async()

            # Printing Camera Pose
            # camera_prim = stage.GetPrimAtPath('/Replicator/Draufsicht_Xform')
            # camera_pose = usd.get_local_transform_SRT(camera_prim)

            # translation: Gf.Vec3d = camera_pose[3]
            # print('camerapose:')
            # print(translation)

        # After rendering all images ==> stop orchestrator
        rep.orchestrator.stop()


async def main():
    paths = ['\\output\\IO', '\\output\\NIO']
    total_time = 0

    load_materials()

    # Start the rendering job for each output path
    for path in paths:
        dataset_name = path.split('\\')[-1]
        print("Starting Replicator for Dataset: " + dataset_name)
        start_time = time.time()

        # Run Replicator Script
        await run_replicator(path)

        render_time = time.time()-start_time
        total_time = total_time + render_time
        print(f'Done in {render_time: .2f} secs for Dataset: {dataset_name}')

    print(f'Finished Rendering Job in {total_time: .2f} secs')


asyncio.ensure_future(main())
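As an aside, the script builds Windows paths by concatenating backslash strings and recovers the dataset name with split('\\')[-1], which only works on Windows. A pathlib-based sketch is more portable; the drive letter and folder names below simply mirror the script and are assumptions:

```python
from pathlib import Path

# Base output location, mirroring path_on_pc = 'D:\\' from the script
base = Path("D:/")
output_paths = [base / "output" / "IO", base / "output" / "NIO"]

for path in output_paths:
    dataset_name = path.name   # replaces path.split('\\')[-1]
    output_dir = str(path)     # pass this to the writer's output_dir
    print(dataset_name, "->", output_dir)
```

pathlib handles the separator for the current platform, so the same code works on Windows and Linux without escaping backslashes.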

Hello @julian.grimm, while I haven’t seen this error before, I think I can help on a few things that perhaps will work to avoid the issue.

To capture two different views and save them to different directories, I recommend simply creating two writers. What you’re doing by calling run_replicator in a loop is essentially re-creating and re-running the whole generation from scratch for each output path. That may be necessary if you find yourself running out of available VRAM, but otherwise it is more efficient to capture everything in a single run.

        # Create and initialize one writer per output directory
        writer1 = rep.WriterRegistry.get('BasicWriter')
        writer2 = rep.WriterRegistry.get('BasicWriter')

        writer1.initialize(
            output_dir=os.path.join(path_on_pc, "output/IO"),
            rgb=True,
            semantic_segmentation=True,
            distance_to_image_plane=True,
            normals=True,
        )
        writer2.initialize(
            output_dir=os.path.join(path_on_pc, "output/NIO"),
            rgb=True,
            semantic_segmentation=True,
            distance_to_image_plane=True,
            normals=True,
        )

        writer1.attach(render_product)
        writer2.attach(render_product2)

The next point is about performance. Rather than calling step_async in a for loop, you can set your trigger with the number of activations you want and then tell Replicator to run until completion. Replicator will capture every frame until all triggers have reached their maximum number of activations. We’re planning a tutorial in the near future to detail what happens behind the scenes and why this is more efficient, so that these details are less obscure.

with rep.trigger.on_frame(num=num_images):
    ....
await rep.orchestrator.run_until_complete_async()

Please let me know if these tips are helpful. Thanks for reaching out!

Hi @jlafleche, thank you for answering. I already worked around this by removing one camera view, because it was not absolutely necessary. I used the for loop because I need to save the images into two different folders, based on the position and rotation of the objects.

Regarding your other recommendation: in the meantime, I started working on a new project where I implemented your suggestion. I was able to reduce the render time by 0.5 s per frame when rendering 2000 images. I have yet to test it with the project mentioned above.