Triangle_mesh.cu

I’m trying to understand the triangle_mesh.cu code and I don’t understand the primIdx variable.
What does it represent?
Who sends it to the functions “mesh_intersect” and “mesh_intersect_refine”?

That is explained inside the OptiX Programming Guide in the chapters about intersection programs, bounding box programs, and geometry objects (for OptiX 6 versions, also geometry triangles).

Follow the individual function links in there to the OptiX API Reference documentation for additional information.

Advanced: Also note the functionality of the primitive index offset for Geometry (and GeometryTriangles) which can be used to hold data for multiple geometries inside one buffer.

The bounding box program is called internally by OptiX during the acceleration structure build, which happens at the beginning of an rtContextLaunch*D() for all acceleration structures that are marked dirty. (In OptiX 7 the AS build is done explicitly with optixAccelBuild().)

The intersection program is called by the internal OptiX acceleration structure (BVH) traversal every time a ray hits a bounding box. It’s the most frequently called program domain and needs to be implemented as efficiently as possible for performance reasons.

Can we take an image and transfer it onto a mesh?
For example, creating a floor surface with an image on it.

1.) If you need that inside the old OptiX API, please search all *.h, *.cpp, and *.cu files inside the OptiX SDK 6.5.0 for the texture intrinsic rtTex2D.
That samples a bindless texture sampler in the old OptiX API.

Follow the first argument of that rtTex2D() call backwards inside the examples using it to see how that bindless texture object is created.

Read this chapter of the OptiX 6.5.0 Programming Guide:
https://raytracing-docs.nvidia.com/optix6/guide_6_5/index.html#host#textures

2.) If you need that for OptiX 7 based applications, do that same code search for the native CUDA texture fetch call tex2D inside any OptiX SDK 7.x installation to find the applications using textures.

In either case, if you want that to be assigned to a floor surface, you simply need to generate the necessary texture coordinates on your floor geometry and interpolate them to be used as the (x, y) lookup coordinates inside that rtTex2D resp. tex2D call.

3.) Once you understand how that works in principle, my OptiX example programs go a step further and contain pretty much everything you need to load images from various file formats using DevIL.

Again, when doing that with OptiX versions before OptiX 7.0.0, please download the OptiX Advanced Samples repository and compare the OptiX Introduction examples optixIntro_06 and optixIntro_07.

They are identical except that optixIntro_07 adds all the code required to load picture file formats with the DevIL library and to build OptiX texture samplers from them, which are then used inside the closest hit program (resp. the cutout opacity texture inside the anyhit program). The comments are pretty exhaustive.
One issue is that CUDA doesn’t support all formats and layouts, in particular no 3-component textures whatsoever, so a lot of work goes into converting images to the proper 4-component CUDA format.

See the resulting images inside the README.md there.
You might notice that the later examples contain exactly such a floor geometry with an NVIDIA logo as texture.

4.) If you need the OptiX 7.0.0 equivalent of that, please compare the old optixIntro_07 example against the intro_runtime resp. intro_driver examples in this OptiX 7 examples repository. As always read the README.md carefully.

Note that OptiX 7 doesn’t know anything about textures! There everything is handled via native CUDA API calls which create a CUDA texture object you can then fetch from inside the device programs.
This means that for all texture functionality you’d need to read the CUDA Programming Guide.
Things like these:
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#texture-and-surface-memory
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#texture-functions
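As a rough orientation, here is a hedged sketch of the CUDA Runtime API call sequence those chapters describe for building a cudaTextureObject_t from RGBA8 data; it needs a CUDA-capable GPU to actually run, and all error checking is omitted for brevity:

```cuda
#include <cuda_runtime.h>

int main() {
    const int width = 4, height = 4;
    uchar4 pixels[width * height] = {};  // image data would come from DevIL etc.

    // Upload the pixels into a CUDA array with a 4-component 8-bit format.
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<uchar4>();
    cudaArray_t array = nullptr;
    cudaMallocArray(&array, &desc, width, height);
    cudaMemcpy2DToArray(array, 0, 0, pixels, width * sizeof(uchar4),
                        width * sizeof(uchar4), height, cudaMemcpyHostToDevice);

    // Describe the resource and the sampling behavior.
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = array;

    cudaTextureDesc texDesc = {};
    texDesc.addressMode[0] = cudaAddressModeWrap;
    texDesc.addressMode[1] = cudaAddressModeWrap;
    texDesc.filterMode = cudaFilterModeLinear;
    texDesc.readMode = cudaReadModeNormalizedFloat;  // fetch as float4 in [0,1]
    texDesc.normalizedCoords = 1;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);
    // ... launch kernels that call tex2D<float4>(tex, u, v) on the device ...
    cudaDestroyTextureObject(tex);
    cudaFreeArray(array);
    return 0;
}
```

The resulting texture object is just a handle you put into your launch parameters or SBT data, which is what the intro examples do.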

In all these examples the floor is generated with a tessellated “plane” geometry with texture coordinates and the material assigned to that contains a (bindless) texture object with the NVIDIA Logo texture which is sampled inside the closest hit program.

There are only two hardcoded textures for the objects in these simple examples, and on the device side the texture object is either null or set, which is used inside the closest hit program to decide if there is a texture to be sampled or not. That allows simple toggling of the texture enable inside the material GUI.

Simply follow the parameters.albedoID variable backwards inside the example code for the old examples:
https://github.com/nvpro-samples/optix_advanced_samples/blob/master/src/optixIntroduction/optixIntro_07/shaders/closesthit.cu#L102

Similarly, follow the variable parameters.textureAlbedo for the OptiX 7 based port of that example.
Here in the CUDA Runtime API example:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_runtime/shaders/closesthit.cu#L231
Here in the CUDA Driver API example:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_driver/shaders/closesthit.cu#L231

Mind that the Texture.cpp which handles the CUDA texture object creation in these two examples looks different in each because of the CUDA Runtime and CUDA Driver API differences. They can do the same things, but the CUDA Driver API is a little more explicit.
(I generally prefer the CUDA Driver API because it allows finer control over multiple GPUs.)

Please don’t double post.