Electromagnetic wave simulation using OptiX

Hi everyone,

I am quite new to OptiX and I have some general questions regarding my goal.
I want to simulate a communication scenario using OptiX, i.e. find the paths between a fixed transmitter and a receiver in a city environment, as in https://on-demand.gputechconf.com/gtc/2014/presentations/S4359-rt-em-wave-propagation-optix-sims-car-to-car-communication.pdf. To that end, I went through the OptiX 7.4 Programming Guide and the examples in the optix7course repository, GitHub - ingowald/optix7course.

As a first attempt, I plan to use a fixed-position point source as my transmitter (starting with an omnidirectional pattern), with parameters describing the transmitter, and the camera as my receiver. I can define closest-hit and any-hit programs (a shadow ray is terminated directly if the path between transmitter and receiver is occluded) to find all paths between the transmitter and the receiver. With the textured city environment, I can simulate the wave propagation, calculate the final received power for each launch index, and visualize the received-power intensity as RGB values.

  • Firstly, I am not sure whether this is the right way to achieve my goal.
  • Secondly, I want to modify the code from example09_shadowrays in GitHub - ingowald/optix7course, since I found it quite hard to write all the code myself from scratch. But I am not sure whether this is the appropriate example to start from. I chose it because it imports .obj files with textures and has one point light source, which I can use as my transmitter.

Any suggestions would be great and many many thanks in advance!

Best regards,
Long

Please read this thread first, which covers exactly the same discussion, and follow the links to the other threads and examples in there:
https://forums.developer.nvidia.com/t/visual-representation-of-ray-propagation-using-ray-tracing/191564

Also, instead of an any-hit program for the visibility checks I mentioned inside the linked pseudo-algorithm there, use this method in OptiX 7 for the fastest possible visibility ray:
https://forums.developer.nvidia.com/t/anyhit-program-as-shadow-ray-with-optix-7-2/181312/2
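For readers who cannot follow the link: the method described there boils down to a ray that skips all any-hit and closest-hit work and lets only the miss program signal visibility. A hedged sketch of what that looks like in OptiX 7 device code; the payload convention and ray-type indices are my assumptions, not the exact code from the post:

```cpp
// Inside an OptiX 7 device program: visibility test between two points.
// The miss program for this ray type sets the payload to 1 (visible);
// any hit terminates the ray and leaves the payload at 0 (occluded).
unsigned int isVisible = 0u;
optixTrace(params.handle,
           origin, direction,
           0.0f,                 // tmin
           distance - epsilon,   // tmax: stop just before the target point
           0.0f,                 // ray time
           OptixVisibilityMask(255),
           OPTIX_RAY_FLAG_DISABLE_ANYHIT |
           OPTIX_RAY_FLAG_DISABLE_CLOSESTHIT |
           OPTIX_RAY_FLAG_TERMINATE_ON_FIRST_HIT,
           RAY_TYPE_SHADOW,      // SBT offset (assumed ray type index)
           RAY_TYPE_COUNT,       // SBT stride (assumed)
           RAY_TYPE_SHADOW,      // miss SBT index (assumed)
           isVisible);

// The corresponding miss program:
// extern "C" __global__ void __miss__shadow() { optixSetPayload_0(1u); }
```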

Thank you very much! Have a nice weekend!

Hi, I have another question, about the output buffer of a CUDA program.
I have looked into the optixPathTracer example and modified it a little. I want to use the accum_buffer to store the resulting received power and write the results to a .txt file.
The accum_buffer only contains float4 results, I don't need it for any visualization, and it still resides in device memory, right? So I think I can copy it to the host and check the numbers. My code is:

float4* out_data = new float4[state.params.width * state.params.height];
cudaMemcpy(out_data, state.params.accum_buffer, state.params.width * state.params.height * sizeof(float4), cudaMemcpyDeviceToHost);
for (unsigned int i = 0; i < state.params.width * state.params.height; ++i)
{
    printf("Output buffer content: %f, %f, %f\n", out_data[i].x, out_data[i].y, out_data[i].z);
}
delete[] out_data;

But afterwards, I got different results than when I print the contents of accum_buffer directly in the .cu program. I store three powers (direct path, 1 bounce, 2 bounces) in the x, y and z components of accum_buffer for each launch index, but out_data[i].z is not the same as what I print out directly.
Is there anything special about the float4 type? I think something must be wrong with reading out the data. Can you help me with this?
Thanks in advance!

Best regards and nice weekend!
Long

Mind that the forum uses some characters for formatting text. Please use the “preformatted text” icon (Ctrl+E) in the editor’s toolbar to disable that formatting and get a non-proportional font code block area you can scroll in.

If you printed values inside the OptiX device code, you need to consider that you are running thousands of threads simultaneously. Think parallel! That means each of your CUDA printf instructions should have printed only one set of values per launch index.
Even then, the order in which that happens is determined by the thread scheduling, i.e. by how the individual launch indices get assigned to hardware threads.
There are also print buffer size limits, so you might not be able to print all of them.

I would absolutely not recommend printing the contents of a whole buffer from within OptiX device programs with CUDA printf. That is not going to be faster than copying the buffer to the host and printing its contents there.

If at all, use printf to dump some interesting values during debugging for a single launch index.
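In OptiX device code, such a guard typically looks like the following sketch; the chosen index and variable names are arbitrary:

```cpp
// Print only for a single launch index to avoid thousands of
// interleaved printf outputs from parallel threads.
const uint3 idx = optixGetLaunchIndex();
if (idx.x == 0 && idx.y == 0)
{
    printf("accum = (%f, %f, %f)\n", accum.x, accum.y, accum.z);
}
```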

Other than that, to be able to see whether there is an error inside your device code, you would need to provide that code, not the code which works.

On your float4 question, which I don't think is the issue here: there are differences between the memory alignment requirements of the built-in CUDA vector types.
For example, CUDA supports float2 and float4 vectors natively, while float3 is handled as three individual floats. That means float2 data must be aligned to 8-byte memory addresses and float4 to 16 bytes, while float and float3 are 4-byte aligned. The same pattern applies to the other built-in vector types.

Sorry for the bad formatting; I know how to post code on the forum now.

Regarding this topic, I have another question now.
When I try to find the reflection point in the scene, my receiver (camera) shoots many rays and may never hit the exact reflection point, i.e. the point where the angle between the incident ray and the normal equals the angle between the normal and the direction to the transmitter.
Before using OptiX, we did this with the image method: we mirror the receiver across the surface and connect the image receiver with the transmitter to find the reflection point. But in OptiX we always traverse the whole scene, so if this surface is part of a cube, it still cannot reliably find the reflection point.

Can you help me with this? If there is another solution, that would also be great. I just want to find the reflection point between one receiver and one transmitter.

Best regards,
Long

Depends on your reflection distribution function and your transmitter and receiver sizes.

If your reflections are specular (i.e. the bi-directional reflection distribution function (BRDF) is a Dirac function), then the probability of hitting an infinitely small point (i.e. one specific reflection vector) when shooting rays randomly is zero!
That is like a singular light, which needs to be sampled explicitly for direct lighting; but that in turn is not done for purely specular surfaces because, again, there exists only one path connecting along a specular reflection.

If the BRDF is not specular, then it’s possible to connect paths from receiver to surface to transmitter the way I described in the algorithm inside the links above. That is like a direct lighting (next event estimation) algorithm.

If your reflections are really specular, a totally different approach than a unidirectional Monte Carlo path tracer would need to be taken.

Before using OptiX, we did this with the image method: we mirror the receiver across the surface and connect the image receiver with the transmitter to find the reflection point.

Yes, it sounds like you are analyzing surface points to find the ones which form a pair of input and output directions fulfilling the specular reflection condition.
That is basically like mutating a surface hit position until it fulfills the specular reflection condition. If that position is not inside the bounds of the surface area, there is no path connecting in one bounce from the current input direction to the receiver (or transmitter, depending on where you started your search).
That means: if you convert a flat surface into a plane equation, calculate the point where the input and output directions form an exact specular reflection, and then check whether that point lies inside the limited area of the original surface (triangle), you can connect a transmitter with a receiver via one specular bounce.
With multiple specular bounces, the whole path would need to be mutated.
None of that requires ray tracing, unless the resulting path needs to be checked for visibility against other objects inside the scene.

But in OptiX we always traverse the whole scene, so if this surface is part of a cube, it still cannot reliably find the reflection point.

I don't know what your simulation scenario consists of or what you mean by "if this surface is part of a cube".
For additional ideas, you would need to provide a lot more information about your exact simulation scenario (scene contents, BRDF, and receiver and transmitter properties).

If your simulation runs inside a specular reflecting cube, there is a totally different and very simple method to connect a transmitter with a receiver if they are infinitely small points, which doesn't need ray tracing at all. I described it here: Reflection in Optix Prime - #2 by droettger