Hello Omniverse community,
Can you provide guidance on how to generate synthetic data for a Flow fire effect? Is it possible to use the .usda asset directly, as shown below?
FIRE = "omniverse://localhost/NVIDIA/Assets/Extensions/Samples/Flow/105/presets/Fire/Fire.usda"
fire = rep.create.from_usd(FIRE)
When I run Replicator, the fire flow does not appear. The omni.flowusd
extension is enabled.
Thanks!
Best regards,
Shakhizat
Hi @shahizat Currently there's no way to segment Flow effects. However, this isn't the first time it's been requested, and we do have this on the roadmap to implement. Unfortunately, I can't give a date for this yet. I am looking into whether there's a workaround, but nothing to report so far.
Hi @pcallender, thank you for your reply. If anyone is interested, here I implemented automated image labeling for Flow effects using the Grounding DINO zero-shot approach (Fire Detector Using Omniverse Replicator And Arduino Nicla - Hackster.io).
Wow, extremely impressive work! Definitely sharing this with my Replicator colleagues.
I'm circling back because I have some positive updates. Nothing as good as a full solution, but others have asked about Flow with SDG, and there are some tricks that could unblock people:
Flow has a compositor toggle that can turn off the rendering of Flow objects in a view. This can be set programmatically with a carb setting.

import omni.replicator.core as rep

# Toggle Flow compositing in the render:
# True renders Flow effects, False hides them.
rep.settings.carb_settings("/rtx/flow/compositeEnabled", True)
What could be done is to toggle this on and off while capturing RGB, then in a post-process step use a Python script to compute a per-pixel difference between the two captures and build a mask. At some point I'll have an example of the above, but I'm posting this now as it may help others looking for any way to do this, at least until we have a proper built-in segmentation feature.
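As a rough sketch of that post-process step (this is not an official Replicator API, just a plain NumPy pixel-difference over two renders of the same frame; the function name and threshold value are my own assumptions):

```python
import numpy as np


def flow_mask(rgb_on, rgb_off, threshold=10):
    """Build a binary mask of Flow effects by differencing two renders of the
    same frame: one captured with flow compositing enabled, one without.

    rgb_on, rgb_off: HxWx3 uint8 arrays. Returns an HxW uint8 mask (0 or 255).
    """
    # Cast to a signed type so the subtraction can't wrap around
    diff = np.abs(rgb_on.astype(np.int16) - rgb_off.astype(np.int16))
    # Sum the per-channel differences; pixels that changed more than the
    # threshold are assumed to belong to the Flow effect
    changed = diff.sum(axis=-1) > threshold
    return changed.astype(np.uint8) * 255
```

The two input images could be loaded with Pillow (e.g. `np.asarray(Image.open(path).convert("RGB"))`), and the threshold tuned to suppress noise from denoising or anti-aliasing differences between the two captures.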