I am developing a pipeline for generating synthetic smoke data. With the particle emitter and the graph editor I can achieve nice results, but I want to use Omniverse Flow to create more realistic data. The problem is that there seems to be no way to get annotation data for the smoke and fire generated by Omniverse Flow. I have been through all the tutorials on NVIDIA's YouTube channel and website and couldn't find anything covering this. Please help!
Hi @guydada. Responding here mostly for others, since we talked about this offline. Currently this isn't possible, but we are looking into workarounds and scoping this out with the other Omniverse teams. When I have more info I'll follow up with you directly and in this forum post so it's searchable.
Hi @guydada, you can do it the same way I did here, using zero-shot detectors (Fire Detector Using Omniverse Replicator And Arduino Nicla - Hackster.io). Hope it helps.
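In case it helps to see the general idea, here is a minimal sketch of zero-shot detection on a rendered frame. To be clear about assumptions: it uses the Hugging Face transformers zero-shot object detection pipeline with an OWL-ViT checkpoint purely as an illustration, not necessarily the exact model or code from the project above, and the frame path is a placeholder.

from transformers import pipeline
from PIL import Image

# Zero-shot object detector: no training on smoke/fire needed, classes are given as text prompts.
detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

image = Image.open("rendered_frame.png")  # placeholder: a frame rendered from the Omniverse scene
detections = detector(image, candidate_labels=["smoke", "fire", "flame"])

for det in detections:
    # Each detection carries a text label, a confidence score, and a pixel-space bounding box.
    print(det["label"], round(det["score"], 3), det["box"])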
That is very helpful, thank you! The thing is, I am looking for a scalable solution for thousands of different scenes, generated headlessly. I am in contact with NVIDIA and hopefully we will find a solution. I will give your code a try anyhow, thanks again.
@guydada You and I talked about the composite setting recently, but adding the info to this thread for others.
I have some positive updates here. Nothing as good as a full solution, but others have asked about Flow with SDG and there are some tricks that could unblock people:
Flow has a compositor toggle that can turn off the rendering of Flow objects in a view. This can be set programmatically with a carb setting:
import omni.replicator.core as rep
import carb.settings

# Control whether Flow volumes (smoke/fire) are composited into the rendered view.
rep.settings.carb_settings("/rtx/flow/compositeEnabled", True)
# Equivalent via the carb settings API:
# carb.settings.get_settings().set("/rtx/flow/compositeEnabled", True)
What could be done is to toggle this on and off for the RGB render of the same frame, and then in a post-process step, using a Python script, compute a pixel difference between the two images to build a mask. At some point I'll have a fuller example for the above, but I'm posting this now as it may help others looking for any way to do this, at least until we have a proper built-in segmentation feature.
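In the meantime, here is a very rough sketch of what that could look like as a single script. Assumptions to flag: it uses a standard Replicator setup (camera, render product, the rgb annotator) and the exact calls may differ between Kit/Replicator versions; the output file name and the difference threshold are placeholders you would tune per scene.

import omni.replicator.core as rep
import numpy as np
from PIL import Image

# Minimal scene hookup: a camera, a render product, and the standard rgb annotator.
camera = rep.create.camera(position=(0, 0, 5), look_at=(0, 0, 0))
render_product = rep.create.render_product(camera, (1024, 1024))
rgb_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
rgb_annotator.attach([render_product])

# Render the same frame twice: once with Flow composited into the image, once without.
rep.settings.carb_settings("/rtx/flow/compositeEnabled", True)
rep.orchestrator.step()
with_flow = np.asarray(rgb_annotator.get_data())[..., :3].astype(np.int16)

rep.settings.carb_settings("/rtx/flow/compositeEnabled", False)
rep.orchestrator.step()
without_flow = np.asarray(rgb_annotator.get_data())[..., :3].astype(np.int16)

# Pixels that changed between the two renders are assumed to belong to the smoke/fire volume.
diff = np.abs(with_flow - without_flow).max(axis=-1)
threshold = 10  # placeholder tolerance for denoiser/sampling noise; tune per scene
mask = (diff > threshold).astype(np.uint8) * 255
Image.fromarray(mask).save("smoke_mask.png")  # placeholder output path

The same idea works if you instead write both renders to disk with a writer and do the differencing in a separate post-process script. The important part is that the only thing changing between the two renders is the compositeEnabled setting, so if you use randomizers, make sure nothing else in the scene changes between the two steps.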