Hello, we are using Omniverse to generate synthetic images for training. At the moment, despite training with tens of thousands of synthetic images, we are not seeing improvements over the baseline (even though the baseline is not that great). I suspect this is because of the domain gap, which at this point could only be the photorealism.
The images do look pretty “toyish” even though we only buy the best assets. I am looking for help/documentation on how to modify the render settings to look more photorealistic.
Any help would be greatly appreciated.
Thank you
personally, I think achieving photorealism is a combination of things. while using the best assets is a great start, you may find that relying on great assets and render settings alone is not enough to get the result you are after. other factors such as scene lighting, asset shader/texture work, post process/volumetrics, composition/camera settings, placement of assets, etc. can all contribute to the overall look. and as you know, there is no set-in-stone formula to follow.
that said, you can probably poke around the example datasets shipped with OV, such as the Da Vinci’s Workshop, to see what settings they used. but you should also keep in mind it may take time and require more than one artist to really finesse and push things to a photorealistic level. if inclined, perhaps you can share a bit more about your scene so the mods/devs can offer further guidance.
Hi,
Thank you for the link. I’m not sure why, but the da Vinci’s environment does not look so great on my computer.
The render settings are local,

so it would make sense for the USD to render differently in my Omniverse. What render settings should I be looking at to close the visual gap between my local da Vinci’s and the creative team’s da Vinci’s?
here are some examples of the images I am generating at the moment:





For the lighting I am using Sun Study.
We feel these images are far from photorealistic, and that is creating a large domain gap.
Thank you
Please, can someone point us in a direction on how to achieve photorealism in Omniverse?
Maybe it would be easier to help me if I focus specifically on our domain gap rather than on photorealism in general.
These are real images we captured with our camera:




and I need to minimize the gap between the images generated with the Omniverse camera and these real images.
Thank you
Hi,
Comparing the sim vs real:
Generally speaking, the real photos have more detail and far more contrast.
You can note saturated colors and over-exposed areas in the clouds, and even on the ground where the sun hits.
In the sim, the horizon line ends the world “abruptly”, while in reality we can see hills, trees and other details. Also, real terrain is irregular, while in the sim it is flat, resulting in an evenly-colored green plane of grass.
In the real world, the area close to the camera is very rich in detail, and we can see the grass is composed of more elements like leaves, small pebbles, etc.
Also, the real photos have poorer image quality than the renders.
These are just a few things that come to mind.
While what I pointed out is doable, it is easier said than done, and may require a VFX/environment artist to join the project for consulting and maybe some assistance.
@lbenhorin has made several good points above.
seeing the actual photos does help paint the picture of what you are trying to achieve. real life has a lot more imperfections and variation in general compared to our 3d counterparts: not only in the objects and textures in the scene, but also imperfections from the camera itself. for example, introducing new grass patches (or reusing the same clump of grass) with slightly different color and height could help make the field more random and introduce those variations:

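the kind of per-patch variation described above can be sketched in plain Python. the jitter ranges, patch count, and base values below are illustrative assumptions, not values from this thread; in practice you would feed the sampled colors/heights into whatever scatter or randomizer tool you are using:

```python
import random

def sample_grass_patch(base_hue=0.30, base_height=0.12):
    """Sample one grass patch with slight random variation.

    base_hue    -- HSV hue for a generic grass green (illustrative)
    base_height -- nominal patch height in meters (illustrative)
    """
    return {
        # jitter hue/saturation/value a little so patches are not uniform
        "hue": base_hue + random.uniform(-0.03, 0.03),
        "saturation": random.uniform(0.5, 0.8),
        "value": random.uniform(0.35, 0.6),
        # vary height and rotation so clumps do not look cloned
        "height": base_height * random.uniform(0.8, 1.25),
        "yaw_degrees": random.uniform(0.0, 360.0),
    }

# generate parameters for 500 patches to scatter over the terrain
patches = [sample_grass_patch() for _ in range(500)]
```

even this crude jitter breaks up the even-colored green plane mentioned earlier; a dedicated randomizer (e.g. Replicator's) would do the same thing with more control.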
another observation is the imperfections of the camera, such as depth of field and exposure (it would seem the real captures get overexposed during the day due to the aperture/f-stop), but also the obvious vignetting around the frame, which can easily be achieved with the post process settings. chromatic aberration (cyan/magenta fringing) could also help the images match the reference a bit more. also, have you tried using an HDR/skydome to produce less constant/even lighting than the sunsky produces in OV? there are resources like HDRIs • Poly Haven where you can try out a few different lighting scenarios instead of a pure clear sky, which may create a more believable result.
just my 2c
Hi guys, thank you for your input.
@lbenhorin we will at some point have to bring in an artist.
@Simplychenable Thank you, will the HDR/skydome work with Sun Study?
More generally, I am trying to understand the render settings in Omniverse.
This is how the Da Vinci’s Workshop project looks in my Isaac Sim.
this is how it should look:
Please, can you help me understand what I should do in the render settings in Omniverse to achieve the latter?
Thank you
You are running it in REALTIME mode and not PATH TRACING mode. Try switching to that. Secondly, as mentioned above, Isaac Sim, like the USD Composer template, is capable of incredible photorealism, but you have to have experience with lighting, exposure, color composition, camera work, post effects, etc.
- Set it to path tracing mode
- Expand this arrow to get all of your rendering settings for the camera, exposure for one
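For scripted or headless dataset-generation workflows, the same switch can be made in a Kit app's settings file. This is a sketch, assuming the standard RTX renderer settings; the sample-per-pixel values are assumed starting points to tune, not recommendations from this thread:

```toml
[settings]
# switch the RTX renderer from real-time to path tracing
"rtx.rendermode" = "PathTracing"
# samples per pixel per frame, and total accumulation target (assumed starting values)
"rtx.pathtracing.spp" = 1
"rtx.pathtracing.totalSpp" = 64
```

Higher totalSpp gives cleaner converged frames at the cost of render time per image.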
Isaac Sim is really about simulation rather than photoreal rendering. If you want to push photorealism, you should download the latest USD Composer Kit app template from here: GitHub - NVIDIA-Omniverse/kit-app-template: Omniverse Kit App Template