Using float3 instead of VertexAttributes in optixIntro

I want to use the float3 type instead of VertexAttributes for the vertices in the optixIntro example projects, but I cannot get it to work.

Here is the piece of code that creates the attributesBuffer in the createGeometry method, where I only changed VertexAttributes to float3:

    geometry = m_context->createGeometry();

    optix::Buffer attributesBuffer = m_context->createBuffer(RT_BUFFER_INPUT, RT_FORMAT_FLOAT3);
    attributesBuffer->setSize(attributes.size());

    void *dst = attributesBuffer->map(0, RT_BUFFER_MAP_WRITE_DISCARD);
    memcpy(dst, attributes.data(), sizeof(optix::float3) * attributes.size());
    attributesBuffer->unmap();

I am not sure about the acceleration properties. I changed the size from 48 to 12:

    acceleration->setProperty("vertex_buffer_name", "attributesBuffer");
    MY_ASSERT(sizeof(optix::float3) == 12); // was 48
    acceleration->setProperty("vertex_buffer_stride", "12"); // was "48"

    acceleration->setProperty("index_buffer_name", "indicesBuffer");
    MY_ASSERT(sizeof(optix::uint3) == 12);
    acceleration->setProperty("index_buffer_stride", "12");

I also changed the attributesBuffer declaration in the CUDA file to be of type float3:

    rtBuffer<optix::float3> attributesBuffer;

However, I get the error below:

“Type mismatch (Details: Function “_rtContextValidate” caught exception: Variable “attributesBuffer” assigned type Buffer(1d, 12 byte element). Should be Buffer(1d, 48 byte element).)”

Any idea?

It seems you have not changed all occurrences of that buffer inside the device code.

That is definitely to be expected if you only change the host code. The device code also needs to be adjusted in all places, and this is more involved, as described below.

First, I would really recommend not using OptiX versions older than 7.0.0 for new projects anymore.

I have ported the later versions of the OptiX 5.1.0 based OptiX Introduction Samples to OptiX 7.0.0 (resp. 7.1.0) here for exactly that reason.

Note that the OptiX 5.1.0 based OptiX Advanced Samples do not use the built-in triangle geometry first added in OptiX 6.0.0 and therefore do not make use of the RTX hardware triangle intersection! The old examples still use custom triangle primitives via explicit bounding box and intersection programs. That shouldn’t be done anymore.
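In OptiX 7 the built-in triangles are declared directly to the acceleration structure build. As a hedged illustration (not a complete, compilable program), a fragment sketching the relevant OptixBuildInput fields for plain float3 vertex data; d_vertices, d_indices, geometryFlags, and the counts are placeholder names:

```cpp
// Sketch only: error handling, the optixAccelBuild() call, and the
// surrounding setup are omitted.
OptixBuildInput buildInput = {};
buildInput.type = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;

buildInput.triangleArray.vertexFormat        = OPTIX_VERTEX_FORMAT_FLOAT3;
buildInput.triangleArray.vertexStrideInBytes = sizeof(float3); // 12; larger for interleaved attributes
buildInput.triangleArray.numVertices         = numVertices;
buildInput.triangleArray.vertexBuffers       = &d_vertices;    // CUdeviceptr

buildInput.triangleArray.indexFormat         = OPTIX_INDICES_FORMAT_UNSIGNED_INT3;
buildInput.triangleArray.indexStrideInBytes  = sizeof(uint3);  // 12
buildInput.triangleArray.numIndexTriplets    = numTriangles;
buildInput.triangleArray.indexBuffer         = d_indices;      // CUdeviceptr

buildInput.triangleArray.flags               = &geometryFlags; // e.g. OPTIX_GEOMETRY_FLAG_NONE
buildInput.triangleArray.numSbtRecords       = 1;
```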

The vertex attributes are handled the same way in both repositories. If you want to change that I would do the following:

  1. Copy and rename the whole sub-directory of the one example you want to change.
  2. Add the new sub-directory to the CMakeLists.txt.
  3. Use “Replace in Files” to rename all occurrences of the original project name in all *.h;*.cpp;*.txt files.
    (Do not match with “whole word”. The project name is used as a prefix for the PTX destination folder.)
    That will change the project name and the relative destination folder name where *.ptx files are built and searched.
  4. Build the new project and check if everything is working under the new name.

Now to change the VertexAttributes structure with the interleaved vertex attributes to just float3 vertex position data I would do the following:

  1. Comment out the tangent, normal, and texcoord fields inside the VertexAttributes structure, leaving only the float3 vertex. Keep using the VertexAttributes struct for now.
    That will completely break all code which accesses these fields! That means the lighting will break because you have no normals, the texturing will break because there are no texture coordinates, and the anisotropic materials (GGX) will break because there are no tangents.
  2. Build the new project and step through the compile errors and comment out all code which fails because it accesses the removed VertexAttributes fields.
    Also, since there won’t be per-vertex normals anymore, you would either need to generate some at runtime, like the triangle face normal (the geometric normal), or use a material implementation which only displays the current material’s albedo without lighting.
  3. Repeat 2. until the project builds and runs.
  4. If the project builds and runs again with only the float3 vertex inside the VertexAttributes, now would be the time to replace all VertexAttributes occurrences inside *.h;*.cpp;*.cu files with float3. This will then require final adjustments for the remaining structure accesses when calculating the vertex attributes per hit.
    (That includes all runtime-generated geometry (plane, box, sphere, torus), and the corresponding code in the OptiX 7 examples.)

That should do it.

Thanks for the quick reply. I will try your solution on the OptiX 7 samples and will get back.

I installed OptiX 7 and the samples from the link you provided. Thanks.
Many things have changed in OptiX 7, and I would say the SBT adds complexity to understanding the API and how it works.

In the intro_driver project, I really struggle to understand how m_d_systemParameter and m_systemParameter are supposed to work.

The comments say

m_systemParameter; // Host side of the system parameters, changed by the GUI directly.
m_d_systemParameter; // Device side CUdeviceptr of the system parameters.

However, cuMemAlloc and cuMemcpyHtoD, which allocate and copy memory on the device, are used across the code for m_systemParameter.

Also, there are three different fields involved: m_systemParameter, m_d_systemParameter, and outputBuffer.

Can you clarify what are the differences?

The different variable names indicate the two different memory spaces you’re working with in CUDA applications.
The variables with d_ prefix in my code indicate device memory, the ones without are in host memory.

OptiX kernels only work with device memory, that is, addresses which can be accessed by the GPU (this could also be pinned host memory, but I digress).

This means all input data is normally built up inside host memory first, and when the time comes, the necessary data is copied to the respective device memory location with a CUDA cuMemcpy*() call, where the “HtoD” in the name implies host to device, and vice versa.
So inside the OptiX device code you normally work with CUdeviceptr values allocated with cuMemAlloc(), which are 64-bit device addresses of the underlying device memory; think buffers or arrays of data.

The m_d_systemParameter holds the single global constant memory block which is given to OptiX as the launch parameters.
Think of it as a structure with all variables declared at context scope in previous OptiX versions.
Look at the optixLaunch() call inside the examples. This is the same in all examples, just the name changes.

The name of that constant struct is given to the Pipeline in the PipelineCompileOptions via the pipelineLaunchParamsVariableName. In my code that is normally “sysParameter” or “sysData” because these are renderer system wide, means they can be accessed anywhere in the OptiX device code, aka. what was done with context scope variables in the past.
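A fragment illustrating that wiring (the struct and member names are assumed to match the thread; this is not a complete, compilable program):

```cpp
// Device code: the single constant-memory launch parameter block.
// The name "sysParameter" must match pipelineLaunchParamsVariableName below.
extern "C" __constant__ SystemParameter sysParameter;

// Host code: hand that name to the pipeline, then launch with the device copy.
OptixPipelineCompileOptions pco = {};
pco.pipelineLaunchParamsVariableName = "sysParameter";
// ...
// cuMemcpyHtoD(m_d_systemParameter, &m_systemParameter, sizeof(SystemParameter));
// optixLaunch(m_pipeline, m_cudaStream, m_d_systemParameter,
//             sizeof(SystemParameter), &m_sbt, width, height, 1);
```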

Some care needs to be taken with the CUDA vector type alignment restrictions.

Please read through the OptiX 7 Programming Guide and API Reference:

Understanding the Shader Binding Table layout and the indexing into that is crucial.
Some more information about that:

For additional OptiX 7 examples from beginner to advanced level please have a look into the resources linked in the following sticky posts at the top of the OptiX sub-forum:

Sorry, just to understand it for myself:

So m_systemParameter is in host memory, is that right?

And cuMemAlloc is used for allocating device memory?

Then why, for example in the initPipeline method, is m_systemParameter used with cuMemAlloc?

    CU_CHECK( cuMemAlloc(reinterpret_cast<CUdeviceptr*>(&m_systemParameter.lightDefinitions),
                         sizeof(LightDefinition) * m_lightDefinitions.size()) );

    CU_CHECK( cuMemAlloc(reinterpret_cast<CUdeviceptr*>(&m_systemParameter.materialParameters),
                         sizeof(MaterialParameter) * m_guiMaterialParameters.size()) );

m_systemParameter.lightDefinitions is effectively a CUdeviceptr itself. That’s the buffer inside the system parameters which holds an array of LightDefinition structures.

Same for m_systemParameter.materialParameters, that’s also a CUdeviceptr which holds all material information (BxDF closure index and parameters.)

So yes, m_systemParameter is in host memory and also contains CUdeviceptr fields which point to device memory.
The memory behind these pointers is only accessed in device code, or with cuMemcpy() on the host.

The benefit of that is that you can have variable-sized buffers without changing the launch parameters struct itself (which also wouldn’t be feasible, because constant memory has a rather small maximum size limit).
Looking at the rtigo3 example: the size of these arrays is tracked in the launch parameter variables further down (numLights, numMaterials). Similarly, the dimensions of the 2D outputBuffer are tracked in the int2 resolution. Think of rtBuffers at context scope in previous OptiX versions.

The m_lightDefinitions array stages that LightDefinition data on the host.

For the materials it’s staged indirectly via m_guiMaterialParameters, because the values changed in the GUI only drive the final device MaterialParameter contents. Absorption or texture enables, for example, are derived from the GUI parameters.

Ok, I haven’t used the d_ prefixes consistently for the members of those structures because they are mostly used in device code.
I’m currently fixing some things in the rtigo3 and nvlink_shared examples anyway and can adjust that.

Maybe it’s clearer when looking at the cases where the members are actually CUdeviceptr like the outputBuffer:

Thanks for your complete answer. That cleared things up.