Is there a way to render standard annotators (e.g. semantic_segmentation) with the render product using MovieCapture? I tried duplicating one of the PathTracing annotators in /Render/Vars and adjusting the parameters, but nothing helped. I’m using this as a reference for the names: Annotators Information — Omniverse Extensions latest documentation
Also, is there a way to render AOVs using the RTX Real-Time renderer?
Thanks!
AOVs are only available with the Path Tracing renderer.
Thank you for the answer!
What would be the best practice for rendering out a sequence of annotators (semantic/instance segmentation, bounding boxes)? I’ve been using the Replicator writer but wasn’t successful in attaching it to an existing camera. It also adds a post-processing step to convert the .npy files to images.
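The post-processing I mean is roughly this (a minimal sketch; the file name is illustrative and assumes the writer’s non-colorized .npy output):
import numpy as np
from PIL import Image
# Load one semantic segmentation frame dumped by the writer (illustrative path)
seg = np.load('_output/semantic_segmentation_0000.npy')
# Map each semantic ID to a deterministic color and save a viewable PNG
rng = np.random.default_rng(0)
palette = rng.integers(0, 256, size=(int(seg.max()) + 1, 3), dtype=np.uint8)
Image.fromarray(palette[seg]).save('semantic_segmentation_0000.png')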
@didiersurka I am just another user, but I am curious why the writer isn’t getting you the proper output. In your code, before initializing the writer, did you create a render product (assuming you are using Replicator for the bulk of your work)?
Also, were there any errors in the terminal when attaching the annotators to the writer? Or is the intent solely to render out passes?
I’m not using Replicator for any of the randomization, just the Python API. I don’t have the code here, so I can’t copy/paste. I have a render product, and I can get the annotators when using the Replicator camera, in both RT and PT. But my issue is that I need to get the output from an animated scene camera, and the writer is just not accepting the existing camera prim.
I just thought of something I haven’t tried: updating the pose of the Replicator camera by copying the animated camera’s pose at each Replicator step.
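Roughly something like this (an untested sketch; '/World/AnimatedCamera', the frame count, and the step-based loop are assumptions about my setup, not a documented workflow):
import omni.replicator.core as rep
import omni.usd
from pxr import Usd, UsdGeom
stage = omni.usd.get_context().get_stage()
anim_cam = stage.GetPrimAtPath('/World/AnimatedCamera')  # hypothetical path
rep_camera = rep.create.camera()
render_product = rep.create.render_product(rep_camera, (1920, 1080))
writer = rep.WriterRegistry.get('BasicWriter')
writer.initialize(output_dir='_animcam', semantic_segmentation=True)
writer.attach([render_product])
for frame in range(100):
    # Sample the animated camera's world transform at this time code
    xf = UsdGeom.Xformable(anim_cam).ComputeLocalToWorldTransform(Usd.TimeCode(frame))
    pos = xf.ExtractTranslation()
    # Copy the translation onto the Replicator camera; rotation would need
    # the same treatment (e.g. via Gf.Rotation decomposition)
    with rep_camera:
        rep.modify.pose(position=(pos[0], pos[1], pos[2]))
    # Capture one frame with the writer attached
    rep.orchestrator.step()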
Hi @didiersurka, I’m unsure about use with MovieCapture, but all you need to do to hook up an existing camera in your Replicator script is something like this:
camera = rep.get.prim_at_path('/World/Camera')
Full script below:
I am using Isaac Sim 2023.1.1 since it has the most recent Replicator version, but I think this ought to work in Code as well. Let me know if you have difficulties.
Don’t forget to make sure there’s a camera at '/World/Camera' for this script.
import omni.replicator.core as rep
# Scene settings; these differ between Isaac Sim and Code
rep.settings.set_stage_up_axis("Z")
rep.settings.set_stage_meters_per_unit(1)
# Assign the preexisting camera
camera = rep.get.prim_at_path('/World/Camera')
# Set the renderer to Path Traced
rep.settings.set_render_pathtraced(samples_per_pixel=64)
# Create the render product
render_product = rep.create.render_product(camera, (1920, 1080))
# Create a cone, because why not
cone = rep.create.cone(semantics=[('class', 'cone')], position=(0,0,0), scale=1)
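# Add a distant light to illuminate the scene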
distance_light = rep.create.light(rotation=(400,-23,-94), intensity=10000, temperature=6500, light_type="distant")
# Initialize and attach writer
writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(output_dir="_attachcam", rgb=True, normals=True, distance_to_image_plane=True, semantic_segmentation=True)
writer.attach([render_product])
# Render 3 frames, with 50 subframes
rep.trigger.on_frame(num_frames=3, rt_subframes=50)
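If the capture doesn’t kick off on its own (e.g. when running from the Script Editor), you can start it explicitly, or use the Replicator menu if your build has one:
# Start the queued capture
rep.orchestrator.run()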
I am interested in your use case. Do you want to use the segmentations as masks for compositing for film/video?