Hello,
I am trying to apply augmentations to my rendered images while collecting object detection and segmentation data. My workflow renders images of assets from a .usd file, and the rendered images come out as expected. However, I am struggling to apply the augmentations shown in the augmentation examples from the documentation. Before the snippet below, I create the warehouse, pallet, and forklift prims; the rest of the code then follows:
```python
with rep.new_layer():
    # Add a driver-view camera
    driver_cam = rep.create.camera(
        position=(-2.0, -2.0, 2.0),  # Initial camera position
        look_at="/World/Pallet",     # Camera target
        name="DriverCam",
    )

    # Add a top-view camera
    top_view_cam = rep.create.camera(
        position=(0.0, 0.0, 5.0),  # Initial camera position
        look_at="/World/Pallet",   # Camera target
        name="TopCam",
    )

    # Attach render products to the cameras
    driver_rp = rep.create.render_product(driver_cam, resolution=(512, 512), name="DriverView")
    top_rp = rep.create.render_product(top_view_cam, resolution=(512, 512), name="TopView")
    rps = [driver_rp, top_rp]

    # Register the AugPixellateExp augmentation
    rep.AnnotatorRegistry.register_augmentation(
        name="AugPixellateExp",
        augmentation=rep.annotators.Augmentation.from_node(
            "omni.replicator.core.AugPixellateExp",
            kernelSize=2,  # Adjust the kernel size as needed
        ),
    )

    # Retrieve the 'rgb' annotator and apply the registered augmentation
    rgb_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
    rgb_annotator.augment("AugPixellateExp")

    # Writer for RGB, segmentation, and 2D bounding-box data
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(
        output_dir="/home/adityanisal/_output",
        rgb=True,
        semantic_segmentation=True,
        bounding_box_2d_tight=True,
        instance_segmentation=True,
    )
    writer.attach([driver_rp, top_rp])

    # Trigger frame captures with randomized camera positions
    with rep.trigger.on_frame(num_frames=10):  # Generate 10 frames
        with driver_cam:
            rep.modify.pose(
                position=rep.distribution.uniform((-3.0, -3.0, 2.0), (0.0, 0.0, 4.0)),  # Randomize position
                look_at="/World/Pallet",
            )
        with top_view_cam:
            rep.modify.pose(
                position=rep.distribution.uniform((-6.0, -6.0, 4.0), (0.0, 0.0, 6.0)),  # Randomize height slightly
                look_at="/World/Pallet",
            )

rep.orchestrator.run()

# Wait for the data to be written to disk
rep.orchestrator.wait_until_complete()
simulation_app.close()
```
Questions:
- Can you provide the latest documentation or updated resources on applying augmentations in the Replicator SDK? The documentation linked above seems outdated or broken: my code runs without errors, but no augmentation is applied.
- How can I correctly apply augmentations (e.g., pixellation or similar effects) to my rendered images using the updated Replicator SDK? If possible, please provide examples or edits to my code that align with the latest practices.
- The augmentation AugPixellateExp, referenced in the documentation, does not seem to exist in the current version of the SDK. How can I achieve pixellation and similar augmentations using the latest features? Are there equivalent methods or nodes available?
- Could you help me understand the flow for integrating augmentations into the pipeline?
- How should the augmentation be registered and linked to annotators or render products?
- Does the writer automatically process augmentations applied to annotators, or are there additional steps needed?
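For reference, this is the kind of pixellation effect I am after, written as a plain NumPy function. This is my own stand-in, not a Replicator API; my assumption is that a function like this could be wrapped with `rep.annotators.Augmentation.from_function` if that is the recommended route in the current SDK:

```python
import numpy as np

def pixellate(rgb: np.ndarray, kernel_size: int = 2) -> np.ndarray:
    """Replace each kernel_size x kernel_size block with its mean color,
    producing a pixellated version of the input image."""
    h, w = rgb.shape[:2]
    # Crop so height and width are divisible by kernel_size
    h2, w2 = h - h % kernel_size, w - w % kernel_size
    img = rgb[:h2, :w2].astype(np.float32)
    # Average over each block
    blocks = img.reshape(h2 // kernel_size, kernel_size,
                         w2 // kernel_size, kernel_size, -1)
    means = blocks.mean(axis=(1, 3))
    # Expand the block means back to the cropped resolution
    out = np.repeat(np.repeat(means, kernel_size, axis=0), kernel_size, axis=1)
    return out.astype(rgb.dtype)
```

If wrapping a function like this is the intended workflow, I would also like to know whether the wrapped augmentation must then be registered before the writer can pick it up.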
Any help would be greatly appreciated.