Hello everyone!
I am new to the Nvidia Developer ecosystem, and I am currently working on translating a ray-tracing based simulation built in Unity with compute shaders into Omniverse.
The basic gist of it is very simple ray-triangle intersection tests. Now, I did some research, but I was not able to figure out which of the APIs actually use the RT cores, whether I need to build a spatial data structure myself, and so on. So I wonder: is there any way for me to submit ray queries to the RT cores in Omniverse using Python? And if yes, how, and which API do I need to use?
I played around with omni.kit.raycast.query and omni.physx (which is CPU-based, if I am not mistaken?).
PS: I am building on top of the Editor app template, with a custom extension as well.
Hi and thanks for posting. I am not sure I fully understand your question. If you want to use Omniverse RTX, you can use it directly in a Kit-based app, either via a template like USD Composer or via a custom-built app. If you are looking to script directly against an RTX API outside of Kit, that is something in development and not available now.
As for RT cores and ray queries, that is all built directly into the RTX renderer in Kit. I would not try to “translate a ray-tracing based simulation” from Unity to Kit; I would simply load in your simulation file and use RTX directly.
Hi,
Sorry for the confusion. I am far from an expert in the field and I am still exploring my options, but my intent is to benefit from hardware acceleration for the ray-triangle tests on my geometry.
Currently the rays are generated and traced via custom compute-shader code: the BVH is built CPU-side and uploaded to the GPU, and the rays are generated and traced in the shader with a basic Möller–Trumbore intersection test.
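For reference, the intersection test in question is just the textbook Möller–Trumbore algorithm. A pure-Python sketch of the same logic (illustrative only, not my actual shader code):

```python
# Illustrative sketch of the Moller-Trumbore ray-triangle test
# (pure Python; the real version runs in a compute shader).

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def moller_trumbore(origin, direction, v0, v1, v2, eps=1e-9):
    """Return hit distance t along the ray, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:      # outside barycentric range
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None
```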
The geometry is “procedural”, i.e., a polymesh loaded from a text file.
My main goal is to improve performance, and one of the methods I had imagined is going from compute to RT cores. Now I may have misunderstood this documentation piece:
But seeing as it mentions RTX Raycast Query, I thought it would allow me to submit work to the RT cores, i.e., perform fixed-function ray-triangle tests.
Thank you for your patience and I hope this helps explain it further!
This is not a typical forum question, so I asked our internal AI, and this is what it is telling me.
The most robust path is to treat your Unity app as a prototype of the logic (ray generation, hit filtering, solar metrics) and then rebuild that logic as a Kit extension that uses OpenUSD geometry plus omni.kit.raycast.query for RTX-accelerated ray tests. You do not “convert” the Unity binary directly; instead you export your scene (panels, sun paths, etc.) to USD and then implement the same intersection pipeline in Python inside Omniverse Kit.
High-level workflow
Export your solar panel geometry and scene layout from Unity to USD via the Omniverse Unity Connector, keeping pivots, transforms, and materials consistent.
Create a new Kit-based app using the Kit App Template, which gives you a Python entry point and extension structure to load the USD stage and run your analysis.
Implement your ray generation and analysis as a Python extension that drives omni.kit.raycast.query over the USD stage while the RTX renderer is active.
Using omni.kit.raycast.query
omni.kit.raycast.query exposes an IRaycastQuery interface that runs ray–scene intersection on the current USD stage using the RTX renderer’s acceleration structures. The typical pattern is:
Acquire the interface: raycast = omni.kit.raycast.query.acquire_raycast_query_interface().
For one-off tests, call submit_raycast_query(ray, callback), where ray holds origin, direction, and max distance, and callback(ray, result) receives hit data (hit position, normal, distance, USD prim path).
For large batches (e.g., many rays over time or per panel), use add_raycast_sequence, repeatedly submit_ray_to_raycast_sequence, and get_latest_result_from_raycast_sequence to poll results efficiently.
This offloads all triangle traversal and BVH work to RTX hardware; result.hit_position, result.normal, and result.get_target_usd_path() replace your Unity ray-triangle tests.
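Putting the one-off pattern together, a minimal sketch of what this looks like inside a running Kit app. This is illustrative, not verified code: it assumes a stage is loaded with the RTX renderer active, and details such as the Ray constructor arguments and the `valid` field on the result may differ between Kit versions, so check the extension's API reference for your build.

```python
# Sketch: one-off RTX raycast from inside a Kit extension.
# Assumes a USD stage is loaded and the RTX renderer is active.
import omni.kit.raycast.query

raycast = omni.kit.raycast.query.acquire_raycast_query_interface()

# A ray from 100 units up, pointing straight down.
# (Constructor arguments, incl. any max-distance parameter,
# may vary by Kit version.)
ray = omni.kit.raycast.query.Ray((0, 0, 100), (0, 0, -1))

def on_result(ray, result):
    # `result.valid` as a hit flag is an assumption here;
    # consult the RayQueryResult docs for your Kit version.
    if result.valid:
        print("hit prim:", result.get_target_usd_path())
        print("position:", result.hit_position)
        print("normal:", result.normal)
    else:
        print("miss")

raycast.submit_raycast_query(ray, on_result)
```

Note that the callback is asynchronous: it fires when the renderer has serviced the query, so for many rays the sequence API above is the better fit.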
Porting your Unity logic
To mirror your Unity app’s behavior in Kit:
Reproduce your ray setup (sun direction, panel sampling pattern) as pure math in Python using pxr.Gf vectors, independent of Unity’s API.
Load the solar panel meshes as USD prims, either as a single mesh per panel or as instanced panels, and ensure they are on the active stage before casting rays.
For each panel or sample point, build a Ray (origin, normalized direction, tMax) and submit it via IRaycastQuery, then post-process the hit results to compute irradiance, shading, or whatever metrics your Unity app currently calculates.
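As an illustration of the first step (the "pure math" ray setup), here is a self-contained sketch in plain Python. The function names, sampling scheme, and Y-up convention are assumptions made up for this example, not part of any Omniverse API; in a Kit extension you would typically use pxr.Gf vectors instead of tuples.

```python
import math

def sun_direction(azimuth_deg, elevation_deg):
    """Unit vector pointing from the scene toward the sun
    (Y-up convention assumed; adjust to your stage's up axis)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az),
            math.sin(el),
            math.cos(el) * math.cos(az))

def panel_sample_origins(corner, u_edge, v_edge, nu, nv):
    """Grid of nu x nv sample points on a rectangular panel,
    given one corner and its two edge vectors."""
    points = []
    for i in range(nu):
        for j in range(nv):
            fu = (i + 0.5) / nu
            fv = (j + 0.5) / nv
            points.append(tuple(corner[k] + fu * u_edge[k] + fv * v_edge[k]
                                for k in range(3)))
    return points

# Shading test rays: from each sample point toward the sun.
# Any hit before the ray escapes the scene means the point is shaded.
sun = sun_direction(azimuth_deg=180.0, elevation_deg=45.0)
origins = panel_sample_origins((0.0, 1.0, 0.0),   # panel corner
                               (2.0, 0.0, 0.0),   # 2 m along u
                               (0.0, 0.0, 1.0),   # 1 m along v
                               4, 4)
rays = [(o, sun) for o in origins]
```

Each `(origin, direction)` pair then becomes one `Ray` submitted through IRaycastQuery, and the hit/miss results feed the irradiance or shading metric.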
Practical tips and constraints
Omniverse raycast queries operate on the rendered scene, so your USD geometry, transforms, and visibility states must match what you want to test—no hidden collider-only meshes like in some game setups.
If you need physics-style queries (e.g., against collision shapes rather than render meshes), you can also look at PhysX scene queries (PxScene::raycast) integrated into Omniverse, but for solar-panel visibility and shading, omni.kit.raycast.query on RTX geometry is typically the right fit.
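For comparison, the PhysX route is exposed to Python through omni.physx scene queries. A hedged sketch (it assumes the omni.physx extension is loaded and colliders exist on the prims; the result keys shown are assumptions based on common usage and should be checked against the omni.physx docs for your build):

```python
# Sketch: CPU-side PhysX scene query against collision geometry.
# Requires omni.physx loaded and colliders authored on the prims.
from omni.physx import get_physx_scene_query_interface
import carb

query = get_physx_scene_query_interface()
origin = carb.Float3(0.0, 10.0, 0.0)
direction = carb.Float3(0.0, -1.0, 0.0)   # straight down
max_distance = 1000.0

# raycast_closest returns a dict-like result; the "hit",
# "collision", and "position" keys are assumptions here.
hit = query.raycast_closest(origin, direction, max_distance)
if hit["hit"]:
    print("collider:", hit["collision"], "at", hit["position"])
```

Note this tests collision shapes, not render meshes, so results can diverge from the RTX raycast if the two representations differ.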
If you share how you currently generate rays in Unity (per-vertex, per-panel patch, sun path sampling), a concrete Kit-side code sketch for the ray generation and IRaycastQuery usage can be outlined in more detail.
First off, I’d like to apologize for the confusion introduced by my unfortunate word choice - “porting” is indeed what I meant, not “translation” :)
So it sounds like omni.kit.raycast.query is indeed what I am looking for.