The title pretty much sums it up. The docs say that “The instance id is assigned in a way that each of the leaf prim in the scene will be assigned to an instance id, no matter if it has semantic labels or not.”
I think I’m missing something very obvious here, but how do you know which instanceId belongs to which prim?
For context, I’m using camera prims and various annotators together with get_current_frame() to get occlusions, but I have a hard time connecting which occlusion ratio belongs to which prim.
I came across idsToLabels in the info part of the instance_id_segmentation annotator output, but I get a dubious mapping in which the IDs are matched to “INVALID” strings: {'6': 'INVALID', '7': 'INVALID', '20': 'INVALID', '14': 'INVALID'}
From inspecting the values, they seem to follow the exact order in which the prims were instantiated.
However, I would like to get a clear connection between path and id.
The reason is that I need to sort through the occlusion ratios output in the frame and match them to their corresponding prims.
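For reference, my setup looks roughly like this (a minimal sketch, assuming the omni.isaac.sensor Camera wrapper in a standalone app; prim paths and variable names are illustrative, and the frame keys are assumed to follow the annotator names):

import omni
from omni.isaac.sensor import Camera

# Illustrative camera; in my app the prim path and resolution differ
camera = Camera(prim_path="/World/Camera", resolution=(640, 480))
camera.initialize()

# Attach the annotators whose outputs end up in get_current_frame()
camera.add_occlusion_to_frame()
camera.add_instance_id_segmentation_to_frame()

# ... step the simulation / render a few frames here ...

frame = camera.get_current_frame()

# Occlusion entries carry (instanceId, semanticId, occlusionRatio)
occlusions = frame["occlusion"]

# idsToLabels from the instance_id_segmentation info,
# which in my case only contains "INVALID" strings
ids_to_labels = frame["instance_id_segmentation"]["info"]["idsToLabels"]
print(ids_to_labels)  # e.g. {'6': 'INVALID', '7': 'INVALID', ...}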
Update to this post: I can definitively confirm that the instanceId does not necessarily encode the order of instantiation; I’ve preserved that order elsewhere, and for a larger number of instances (say >5) the ordering changes and is lost. Nor does it match the unsorted list of instances returned by get_current_frame() as is. I tested both possibilities in code as a workaround; alas, neither worked.
Hello Mr. Haidu,
Thank you for taking the time to answer and craft a working demonstrator just for this post! Studying the examples given for the annotators, I started to suspect that the lack of labeling was the cause.
However, the documentation’s example using Replicator requires me to redesign my code quite a bit, since I’ve built a so-called standalone app using the templates provided by NVIDIA Isaac.
I realize now (if I understood you correctly) that both your example and the docs’ examples were intended to be run in Omniverse Kit’s Script Editor. As someone developing a standalone app, this was entirely unclear to me; your casual mention of what the example was for made it click.
In the meantime I have been looking for a workaround and realized that it is sufficient to match each occlusion ratio with the pose of the object, which is exactly what the 2D/3D bounding box annotators deliver (as per your suggestion).
However, adding this annotator to the frame of the camera prim also leaves me with an entirely empty list.
It is exactly the same question I’m posing now, only with the 3D bounding box annotator instead of the 3D point cloud annotator from that post; my code looks quite similar, as I am also building a standalone app, and structure-wise they could be considered identical.
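For completeness, the attachment itself is just the analogous call on the same camera (continuing the sketch above; the frame key is assumed to match the annotator name, and the list stays empty as long as no prim carries a semantic label):

# Continuing the sketch from above: attach the 3D bounding box annotator
camera.add_bounding_box_3d_to_frame()

frame = camera.get_current_frame()
bboxes_3d = frame["bounding_box_3d"]  # empty while the prims have no semantic labels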
Once all is said and done, the simple solution to get some output from the bbox annotator is to use Replicator and give each prim we are interested in a semantic label (copied from the above post’s winning answer):
import omni.replicator.core as rep
semantic_type = "class"
semantic_label = "cone" # The label you would like to assign to your object
prim_path = "/World/Cone" # the path to your prim object
rep.modify.semantics([(semantic_type, semantic_label)], prim_path)
Almost forgot to check back in: as suspected, using Replicator didn’t work for me, as it would have required rewriting too much, but adding a single line of code to the part of my script where I add prims did the trick.
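The line in question simply adds a semantic label to the newly created prim. A minimal sketch of what such a one-liner can look like, assuming Isaac Sim’s add_update_semantics helper from omni.isaac.core.utils.semantics (cone_prim is an illustrative Usd.Prim handle; the exact call in my script may differ):

from omni.isaac.core.utils.semantics import add_update_semantics

# Wherever the prim is created in the standalone script;
# cone_prim is an illustrative Usd.Prim handle
add_update_semantics(prim=cone_prim, semantic_label="cone", type_label="class")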