Synthetic data generation: 2D/3D labels on my own custom object

Hello everyone!
I don’t know if this topic has already been discussed, but I’m a newbie to Omniverse Isaac Sim and I’m looking for some help.
Let me describe my goal.
I have been working with PyTorch to train a neural network on RGB images that contain an object, together with the manually labeled 2D keypoints of that object in each image. Most of the time these labels correspond to the vertices of the object (defined in 3D, in the object’s coordinate system).

I converted my STL file to an OBJ file and then to a USD file, using the script from the NVIDIA documentation, to import my object into the Isaac Sim environment.
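For anyone curious, the conversion step is roughly the omni.kit.asset_converter flow from the docs. Here is a minimal sketch with placeholder paths, meant to be run from Isaac Sim’s Script Editor:

```python
import asyncio
import omni.kit.asset_converter

def progress_callback(current_step, total_steps):
    pass  # optional progress reporting

async def convert(input_path, output_path):
    # Create an OBJ -> USD conversion task and wait for it to finish.
    instance = omni.kit.asset_converter.get_instance()
    task = instance.create_converter_task(input_path, output_path, progress_callback)
    success = await task.wait_until_finished()
    if not success:
        print(task.get_status(), task.get_error_message())
    return success

# Placeholder paths; adjust to your files.
asyncio.ensure_future(convert("my_object.obj", "my_object.usd"))
```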

Then I used a Python example script from the NVIDIA documentation/tutorials (modified a bit to work with my environment) that uses Replicator to generate data such as:

  • RGB images
  • NPY files containing the 2D and 3D bounding box information

I used the BasicWriter (from the writer registry) with the following parameters:

  • rgb = True
  • bounding_box_2d_tight = True
  • bounding_box_3d = True
  • camera_params = True
  • pointcloud = True

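For reference, my writer setup follows the Replicator tutorials and looks roughly like the sketch below. The camera pose, resolution, output directory, and frame count are placeholder values for my scene, and depending on the Isaac Sim version the trigger argument may be num_frames or max_execs:

```python
import omni.replicator.core as rep

# Placeholder camera pose and resolution.
camera = rep.create.camera(position=(0, 0, 50), look_at=(0, 0, 0))
render_product = rep.create.render_product(camera, (1024, 1024))

writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(
    output_dir="_out_my_object",  # placeholder output directory
    rgb=True,
    bounding_box_2d_tight=True,
    bounding_box_3d=True,
    camera_params=True,
    pointcloud=True,
)
writer.attach([render_product])

# Capture a handful of frames; randomizers would go inside this trigger.
with rep.trigger.on_frame(num_frames=10):
    pass

rep.orchestrator.run()
```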
So I now have two questions:

  1. To acquire the images I created a camera object, but I would like to use a stage light instead of the camera light, because the camera light is very strong (I want to capture only a single small object in the view). During data generation, is it possible to “turn off” the camera light and use only a light that I create inside the Python script? (See the first sketch after this list.)
  2. Is it possible to create NPY files with the 2D and 3D coordinates of the vertices of my custom object instead of the bounding boxes? In the end I would like to perform keypoint detection, not just object detection. (See the second sketch below.)
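For question 1, what I have in mind is roughly the sketch below: create a stage light from the Python script with rep.create.light (the light type, intensity, and rotation here are placeholder guesses for my scene):

```python
import omni.replicator.core as rep

# Placeholder values: a distant "sun" light to stand in for the camera headlight.
distant_light = rep.create.light(
    light_type="distant",
    intensity=3000,
    rotation=(315, 0, 0),
)
```

As far as I can tell the headlight can also be switched off manually via the lightbulb menu in the viewport (“Camera Light” vs. “Stage Lights”), but I don’t know the scripted equivalent, hence the question.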
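For question 2, my rough idea is to read the mesh vertices from the USD prim as the 3D labels and project them into the image using the matrices from the camera_params output. A sketch of what I mean (the prim path and file name are placeholders, and I’m assuming the USD row-vector convention, p_clip = p · View · Proj, so the matrices may need transposing):

```python
import numpy as np
import omni.usd
from pxr import Usd, UsdGeom

# Placeholder prim path for the imported object.
stage = omni.usd.get_context().get_stage()
mesh = UsdGeom.Mesh(stage.GetPrimAtPath("/World/my_object"))

# Mesh points are in local/object space; move them to world space.
local_to_world = mesh.ComputeLocalToWorldTransform(Usd.TimeCode.Default())
verts_world = np.array([local_to_world.Transform(p) for p in mesh.GetPointsAttr().Get()])
np.save("keypoints_3d.npy", verts_world)  # 3D vertex labels

def project(points_world, view_tf, proj_tf, width, height):
    """Project Nx3 world-space points to pixel coordinates.

    view_tf / proj_tf are the 4x4 matrices from the camera_params
    annotator, reshaped from their flat 16-element form.
    """
    ones = np.ones((points_world.shape[0], 1))
    clip = np.hstack([points_world, ones]) @ view_tf @ proj_tf
    ndc = clip[:, :3] / clip[:, 3:4]                # perspective divide
    px = (ndc[:, 0] * 0.5 + 0.5) * width
    py = (1.0 - (ndc[:, 1] * 0.5 + 0.5)) * height   # flip y for image coords
    return np.stack([px, py], axis=1)               # Nx2 pixel keypoints
```

Saving the Nx2 result with np.save would then give the 2D keypoint labels.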

Thank you very much!!

Solved!

Awesome! Could you post your solution and mark it solved so others may benefit from it as well? Thanks!
