Ray payload in OptiX 7

Why do we have to pack the pointer into a couple of ints to pass it into the trace program? Also I am confused by the coordinate system used.

I am using OptiX for research purposes and want to shoot rays from a single point source instead of from every pixel on the screen. Is there a built-in method to pass a random direction for each ray?

The parameters passed into optixTrace, and the attributes coming back out of it, are put into 32-bit registers. Those registers are a precious resource, so the API makes it easy to see & understand how many you are using. Since pointers are 64 bits, they need to be passed using two registers.
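For reference, here is a minimal sketch of the usual pack/unpack helpers, along the lines of what the OptiX 7 SDK samples do (the helper names here are illustrative, not required by the API):

```cuda
#include <optix.h>

// Split a 64-bit pointer across two 32-bit payload registers, and back.
static __forceinline__ __device__ void packPointer( void* ptr, unsigned int& i0, unsigned int& i1 )
{
    const unsigned long long uptr = reinterpret_cast<unsigned long long>( ptr );
    i0 = static_cast<unsigned int>( uptr >> 32 );
    i1 = static_cast<unsigned int>( uptr & 0x00000000ffffffffull );
}

static __forceinline__ __device__ void* unpackPointer( unsigned int i0, unsigned int i1 )
{
    const unsigned long long uptr = ( static_cast<unsigned long long>( i0 ) << 32 ) | i1;
    return reinterpret_cast<void*>( uptr );
}

// In a closest-hit or miss program, the two registers come back via
// optixGetPayload_0() / optixGetPayload_1():
static __forceinline__ __device__ void* getPayloadPointer()
{
    return unpackPointer( optixGetPayload_0(), optixGetPayload_1() );
}
```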

I’m not sure which coordinate system you’re referring to; are you looking at one of the SDK samples?

There isn’t a function in OptiX to generate random ray directions; you will want to carefully control both your random number generation and your mapping of random numbers to directions. The OptiX samples, however, do show some ways you can do this that you can copy. Take a look at the samples_exp/optixPathTracer sample. Inside __closesthit__radiance() in optixPathTracer.cu is code that generates two random floats, maps them to the unit hemisphere with a cosine weighting, and then transforms that sample to an arbitrary normal using the orientation given by the Onb() helper. That’s probably a bit more complicated than what you’re asking for, but depending on what you need, your process for generating random ray directions might look somewhat similar.
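To make that concrete, here is a paraphrased sketch of that logic; cosine_sample_hemisphere, Onb, rnd and seed follow the naming in the SDK sample, but treat this as illustrative rather than a verbatim copy:

```cuda
#ifndef M_PIf
#define M_PIf 3.14159265358979323846f
#endif

// Map two uniform random numbers to a cosine-weighted direction on the
// unit hemisphere around +z: sample a disk, then project up onto the sphere.
static __forceinline__ __device__ void cosine_sample_hemisphere( const float u1, const float u2, float3& p )
{
    const float r   = sqrtf( u1 );
    const float phi = 2.0f * M_PIf * u2;
    p.x = r * cosf( phi );
    p.y = r * sinf( phi );
    p.z = sqrtf( fmaxf( 0.0f, 1.0f - p.x * p.x - p.y * p.y ) );
}

// Usage inside __closesthit__radiance(), roughly:
//   const float u1 = rnd( seed );
//   const float u2 = rnd( seed );
//   float3 w_in;
//   cosine_sample_hemisphere( u1, u2, w_in ); // sample around +z
//   Onb onb( N );                             // orthonormal basis about the shading normal N
//   onb.inverse_transform( w_in );            // rotate the sample to lie about N
```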


David.

Does the code from optixPathTracer generate a random direction in a spherical or just a hemispherical area?

How are x, y, z defined in OptiX?

optixPathTracer generates cosine-weighted hemisphere samples for diffuse reflections. If you want uniform spherical samples, for example, the process would be a bit different & simpler.
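For example, uniform sphere sampling can be as simple as this sketch (the function name is mine, not something from the SDK):

```cuda
#ifndef M_PIf
#define M_PIf 3.14159265358979323846f
#endif

// Map two uniform random numbers in [0,1) to a uniformly distributed
// direction on the unit sphere: z uniform in [-1,1], phi uniform in [0,2*pi).
static __forceinline__ __device__ float3 uniform_sample_sphere( const float u1, const float u2 )
{
    const float z   = 1.0f - 2.0f * u1;
    const float r   = sqrtf( fmaxf( 0.0f, 1.0f - z * z ) );
    const float phi = 2.0f * M_PIf * u2;
    return make_float3( r * cosf( phi ), r * sinf( phi ), z );
}
```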

You define your coordinate system; OptiX doesn’t.


David.

Instead of launching rays from every pixel on the screen, if we want to launch all rays from a single point inside the geometry, how do we go about it?

You can organize your rays any way you like using your raygen program. To originate them all from a single point in space, set the ray origin for all rays to be the point you want to use, and for each ray pick a ray direction however you’d like. You can use the launch index to compute your ray direction, or you can ignore the launch index. Note that originating all rays from a single point is exactly what everyone does already when simulating a linear pinhole camera; it is the most common approach. The only difference would be direction, and it sounds like you want either a spherical “screen” to calculate ray directions, or random spherical directions. You can do either very easily in OptiX. If what you want is a uniform random spherical direction, try searching for “uniform point on sphere” and you’ll find a whole bunch of math and code examples you could use in your raygen program.
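Here is a minimal raygen sketch along those lines, assuming the uniform_sample_sphere() helper from above; the Params fields and program name are illustrative, and tea<>() / rnd() are the small RNG helpers the SDK samples ship in random.h:

```cuda
#include <optix.h>
#include "random.h"  // tea<>() and rnd() from the SDK samples (assumed available)

struct Params
{
    OptixTraversableHandle handle;          // scene to trace against
    float3                 point_source;    // the single shared ray origin
    unsigned int           subframe_index;  // varies the RNG seed per launch
};
extern "C" __constant__ Params params;

extern "C" __global__ void __raygen__point_source()
{
    const uint3 idx = optixGetLaunchIndex();
    const uint3 dim = optixGetLaunchDimensions();

    // Seed a per-ray RNG from the launch index.
    unsigned int seed = tea<4>( idx.y * dim.x + idx.x, params.subframe_index );
    const float u1 = rnd( seed );
    const float u2 = rnd( seed );

    // All rays share one origin; only the direction varies.
    const float3 origin    = params.point_source;
    const float3 direction = uniform_sample_sphere( u1, u2 );

    unsigned int p0 = 0u, p1 = 0u;  // payload registers, e.g. a packed pointer
    optixTrace( params.handle, origin, direction,
                0.0f, 1e16f, 0.0f,                      // tmin, tmax, ray time
                OptixVisibilityMask( 255 ), OPTIX_RAY_FLAG_NONE,
                0u, 1u, 0u,                             // SBT offset, SBT stride, miss index
                p0, p1 );
}
```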


David.

If you ask how the OptiX examples do it, the answer is: world coordinates are right-handed, the y-axis is up, and triangle winding is counter-clockwise for front faces, just like OpenGL’s defaults.

The pinhole camera’s UVW coordinates are left-handed, and launch index (0, 0) is at the bottom-left. (There is a diagram in the presentations linked below.)
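To illustrate that convention, the example pinhole cameras compute the ray direction from the launch index roughly like this (a paraphrased sketch; eye, U, V, W are assumed launch-parameter fields, and normalize() plus the float3 operators come from the SDK’s vec_math.h helpers):

```cuda
// Inside the example’s raygen program, roughly:
extern "C" __global__ void __raygen__pinhole()
{
    const uint3 idx = optixGetLaunchIndex();
    const uint3 dim = optixGetLaunchDimensions();

    // Map the launch index to d in [-1, 1]^2; (0, 0) lands at the bottom-left.
    const float2 d = make_float2( 2.0f * ( idx.x + 0.5f ) / dim.x - 1.0f,
                                  2.0f * ( idx.y + 0.5f ) / dim.y - 1.0f );

    // Left-handed UVW camera frame: U = right, V = up, W = view direction.
    const float3 origin    = params.eye;
    const float3 direction = normalize( d.x * params.U + d.y * params.V + params.W );
    // ... trace as usual ...
}
```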

Helpful links to documentation, presentations and example code:
https://devtalk.nvidia.com/default/topic/1062536/optix/ray-contact-points-for-objects/post/5381422/#5381422

Here are some related forum threads about shooting rays from arbitrary points, in these cases for texture baking.
https://devtalk.nvidia.com/default/topic/1029449/optix/baking-to-texture/post/5236574
https://devtalk.nvidia.com/default/topic/923772/optix/query-on-optix-baking-texture-data-storage-basic-query-/post/4861259