Instance Segmentation not stable

Hello all,

I am using a standalone application to record some synthetic data. To receive the ground truth, I use the "get_groundtruth" function from the SyntheticDataHelper class.
For example:

gt = self.sd_helper.get_groundtruth(["rgb", "semanticSegmentation", "instanceSegmentation"], viewport_api)
instance_segmentation_array = gt["instanceSegmentation"][0]

Every time the simulation is restarted, I notice that the instance_segmentation_array contains a different number of pixels labeled 0. Label zero is not included in the mapping returned by the get_groundtruth function. So after each restart I plotted the zero-valued points using this array and matplotlib.pyplot:

from copy import deepcopy
import matplotlib.pyplot as plt

instance_copy = deepcopy(instance_segmentation_array)
instance_copy[instance_copy != 0] = 1  # label non-zero points as 1 for visualization
plt.imshow(instance_copy)
plt.show()
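To make the comparison quantitative instead of visual, the zero masks of two runs can be compared directly. Here is a minimal NumPy sketch; run1 and run2 are toy stand-ins for instance_segmentation_array captured in two different runs, not real simulator output:

```python
import numpy as np

def zero_mask_stats(a, b):
    """Compare the unlabeled (zero) regions of two instance segmentation arrays.

    Returns the zero-pixel count of each array and the IoU of the two zero
    masks; identical runs give IoU == 1.0.
    """
    za, zb = (a == 0), (b == 0)
    inter = np.logical_and(za, zb).sum()
    union = np.logical_or(za, zb).sum()
    iou = float(inter) / union if union else 1.0
    return int(za.sum()), int(zb.sum()), iou

# Toy example: two 4x4 "runs" whose unlabeled regions differ by one pixel.
run1 = np.array([[0, 1, 1, 1],
                 [0, 2, 2, 1],
                 [0, 2, 2, 1],
                 [0, 1, 1, 1]])
run2 = run1.copy()
run2[0, 0] = 3  # one formerly unlabeled pixel received an instance id
n1, n2, iou = zero_mask_stats(run1, run2)
```

An IoU well below 1.0 between restarts of an unchanged scene would confirm the instability numerically.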

The result is as follows:


(Figures 1–3: the zero-labeled pixel masks plotted after three different restarts)

As you can see, these zero points change after a restart, even though neither the scene nor the code has changed.

What could be the reason for this behavior, and how would you suggest we solve it? We need the data to be stable between "takes", so to speak.

Thanks in advance!

Sincerely, upyzm

P.S.: I am using version 2022.2.0.

Hi @upyzm - Is there a reason you are not on the latest Isaac Sim 2022.2.1 release?

Can you try that and let us know if the issue still persists?

Can you provide a short snippet showing how you set up and restart your scenario? I would also recommend using the Replicator API for getting synthetic data.

Thank you for answering. I attached a code snippet with the issue.
The environment is a reference to: omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Environments/Simple_Warehouse/warehouse_multiple_shelves.usd . A Carter robot is added to the scene with an additional custom sensor setup: three cameras and a lidar at the same position, with varying orientations.
I use a standalone application so that I can fuse the ground truth provided by the cameras with the point cloud provided by the lidar. I wasn't able to get the point cloud via Replicator. Besides, I'd like to do the fusion online, if possible.
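For the online fusion itself, a common approach is to project the lidar points into the camera image with a pinhole model and sample the segmentation labels at the projected pixels. Below is a minimal NumPy sketch of that idea; the intrinsics K and the lidar-to-camera extrinsics used in the usage example are made-up placeholders, not values from the actual sensor setup:

```python
import numpy as np

def label_lidar_points(points_lidar, T_cam_lidar, K, seg, default=0):
    """Assign a segmentation label to each lidar point.

    points_lidar: (N, 3) points in the lidar frame.
    T_cam_lidar:  (4, 4) homogeneous transform from lidar to camera frame.
    K:            (3, 3) pinhole camera intrinsics.
    seg:          (H, W) segmentation image to sample labels from.
    Points behind the camera or outside the image get `default`.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    labels = np.full(n, default, dtype=seg.dtype)
    in_front = pts_cam[:, 2] > 0
    uv = (K @ pts_cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)
    h, w = seg.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg[uv[valid, 1], uv[valid, 0]]
    return labels

# Usage with placeholder calibration: a 4x4 segmentation image, identity
# extrinsics, and a toy intrinsics matrix.
K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
seg = np.arange(16).reshape(4, 4)
pts = np.array([[0.0, 0.0, 1.0],    # projects to pixel (2, 2)
                [0.0, 0.0, -1.0],   # behind the camera -> default label
                [10.0, 0.0, 1.0]])  # projects outside the image -> default
labels = label_lidar_points(pts, T, K, seg)
```

Of course, the instance labels this samples are only as stable as the segmentation output itself, which is why the restart instability above matters for the fusion.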

@rthaker - The issue persists with the newest release as well. I had not migrated my workspace to the newest version; that was the only reason.

simulation_lidar_snippet.py (6.5 KB)

To get the point cloud you can use the pointcloud annotator. In the examples, you would change:
rep.AnnotatorRegistry.get_annotator("rgb") to rep.AnnotatorRegistry.get_annotator("pointcloud")
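Once the per-camera annotator outputs are available, the downstream fusion can stay plain NumPy. A hedged sketch of merging several point clouds into one array; the dicts here are stand-ins assuming each annotator's get_data() result carries an (N, 3) point array under a "data" key (check the annotator documentation for your Isaac Sim version, as the exact output schema may differ):

```python
import numpy as np

def merge_pointclouds(annot_outputs):
    """Concatenate the XYZ points from several annotator outputs.

    annot_outputs: list of dicts, each assumed to hold an (N_i, 3) float
    array under the "data" key (assumed layout, not a verified schema).
    """
    clouds = [np.asarray(o["data"]).reshape(-1, 3) for o in annot_outputs]
    if not clouds:
        return np.empty((0, 3))
    return np.vstack(clouds)

# Stand-in outputs from two cameras.
cam_a = {"data": np.zeros((2, 3))}
cam_b = {"data": np.ones((3, 3))}
merged = merge_pointclouds([cam_a, cam_b])
```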

I'll try that to see if it works for our use case, thanks. Were you able to replicate the problem with the snippet I sent you?