Optix 7 Dynamic textures

Hi,
I would like to sample a dynamic OpenGL texture inside a closesthit program (the texture content is updated every frame). I'm able to build a texture object with cudaCreateTextureObject and sample it, but updating its cudaArray doesn't seem to work. I'm not sure whether a texture object's pixels can be updated dynamically.
Looking at this CUDA sample https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st, dynamic textures (surface objects) are bound to the device program using CUtexrefs. Is there a way to do something similar with OptiX 7?
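For reference, here is roughly what I'm doing (a minimal sketch; `width`, `height`, and `hostPixels` are illustrative names, not my actual code):

```cpp
// Create a cudaArray and a texture object over it.
cudaArray_t array = nullptr;
cudaChannelFormatDesc chanDesc = cudaCreateChannelDesc<float4>();
cudaMallocArray(&array, &chanDesc, width, height);

cudaResourceDesc resDesc = {};
resDesc.resType = cudaResourceTypeArray;
resDesc.res.array.array = array;

cudaTextureDesc texDesc = {};
texDesc.addressMode[0]   = cudaAddressModeClamp;
texDesc.addressMode[1]   = cudaAddressModeClamp;
texDesc.filterMode       = cudaFilterModeLinear;
texDesc.readMode         = cudaReadModeElementType;
texDesc.normalizedCoords = 1;

cudaTextureObject_t tex = 0;
cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);

// Per-frame update attempt: copy new pixels into the backing array.
cudaMemcpy2DToArray(array, 0, 0, hostPixels,
                    width * sizeof(float4),   // source pitch in bytes
                    width * sizeof(float4),   // copy width in bytes
                    height,
                    cudaMemcpyHostToDevice);
```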

I was wondering if anyone knew how this can be done.

Thanks

Hi guillaume.lussier,

OptiX 7 interacts with native CUDA textures directly; anything you can do with textures in CUDA you can do exactly the same way in OptiX 7. So I think the question here is really just how to write to textures in CUDA. (Correct me if I'm wrong.)
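To illustrate the "same as CUDA" point, sampling a cudaTextureObject_t in a closest-hit program looks like plain CUDA texture sampling. A minimal sketch (the `HitData` struct and the use of barycentrics as UVs are illustrative assumptions, not from your code):

```cpp
// The texture object is typically passed in through the SBT hit-group record.
struct HitData
{
    cudaTextureObject_t tex;  // illustrative field name
};

extern "C" __global__ void __closesthit__sample()
{
    const HitData* data =
        reinterpret_cast<const HitData*>(optixGetSbtDataPointer());

    // Placeholder UVs; a real program would interpolate vertex UVs.
    const float2 uv = optixGetTriangleBarycentrics();

    // Ordinary CUDA texture fetch, exactly as in a CUDA kernel.
    const float4 c = tex2D<float4>(data->tex, uv.x, uv.y);

    optixSetPayload_0(__float_as_uint(c.x));
}
```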

Writing to CUDA textures within a kernel is possible, but I haven’t done it, so I Googled a bit, and this recent post has some notes about how to write to CUDA textures, and why updating the cudaArray doesn’t work: http://www.orangeowlsolutions.com/archives/1440
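As I understand it from that post, the usual approach is to write through a surface object bound to the same cudaArray (the array must be allocated with the cudaArraySurfaceLoadStore flag), while the texture object is used for reads. A sketch, with illustrative names:

```cpp
// Device side: write into the array through a surface object.
__global__ void updateTexture(cudaSurfaceObject_t surf, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float4 value = make_float4(x / (float)width, y / (float)height, 0.f, 1.f);
    surf2Dwrite(value, surf, x * (int)sizeof(float4), y);  // x is byte-addressed
}

// Host side: the array must have been created with cudaArraySurfaceLoadStore,
// e.g. cudaMallocArray(&array, &chanDesc, w, h, cudaArraySurfaceLoadStore);
cudaResourceDesc resDesc = {};
resDesc.resType           = cudaResourceTypeArray;
resDesc.res.array.array   = array;
cudaSurfaceObject_t surf  = 0;
cudaCreateSurfaceObject(&surf, &resDesc);
```

Subsequent tex2D reads through a texture object over the same array then see the updated contents.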

It might be worth mentioning that you could also use OptiX to render into a buffer object and use that buffer to update an OpenGL texture every frame. That might be easier to code and, I think, not really any slower than trying to write into an existing texture.
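That buffer-object path can be sketched with standard CUDA/GL interop calls (the `pbo`, `glTex`, and `stream` handles are illustrative; error checking omitted):

```cpp
// One-time setup: register the GL pixel buffer object with CUDA.
cudaGraphicsResource_t pboResource = nullptr;
cudaGraphicsGLRegisterBuffer(&pboResource, pbo, cudaGraphicsMapFlagsWriteDiscard);

// Each frame: map the PBO, let OptiX write pixels into it, then unmap.
void*  devPtr = nullptr;
size_t size   = 0;
cudaGraphicsMapResources(1, &pboResource, stream);
cudaGraphicsResourceGetMappedPointer(&devPtr, &size, pboResource);
// ... optixLaunch(...) with devPtr as the RGBA8 output buffer ...
cudaGraphicsUnmapResources(1, &pboResource, stream);

// Update the GL texture from the PBO (source pointer is an offset into it).
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, glTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```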


David.

It's severely broken:
https://stackoverflow.com/questions/58303950/writing-to-cuda-surface-from-optix-kernel

Actually, if I remember correctly how your (NVIDIA) hardware works, the DMA engines are not used for texture copies unless you run the copies from a separate OpenGL context (ergo a separate CPU thread) from any rendering (I guess if doing the copy in CUDA, I'd have to put it on a different stream). When a copy engine is not used, the copy is carried out by a built-in compute shader, which is not great: it will most likely undersaturate the GPU and introduce an extra resource barrier.
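On the CUDA side, putting the update on its own stream would look something like this (a sketch under the assumption that a separate non-blocking stream gives the driver a chance to schedule the copy on a copy engine; `array`, `srcDevPtr`, and `renderStream` are illustrative names):

```cpp
// Dedicated stream for texture updates.
cudaStream_t copyStream;
cudaStreamCreateWithFlags(&copyStream, cudaStreamNonBlocking);

// Async copy into the cudaArray backing the texture.
cudaMemcpy2DToArrayAsync(array, 0, 0, srcDevPtr,
                         pitchBytes, widthBytes, height,
                         cudaMemcpyDeviceToDevice, copyStream);

// Make the render stream wait on the copy without stalling the CPU.
cudaEvent_t copyDone;
cudaEventCreateWithFlags(&copyDone, cudaEventDisableTiming);
cudaEventRecord(copyDone, copyStream);
cudaStreamWaitEvent(renderStream, copyDone, 0);
```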

In fact, whenever I do a glTexSubImage from a device-local buffer with a bound unpack buffer, I get a warning from KHR_debug about the pixel transfer op being synchronized with the rendering path.

We would definitely lose as much perf as we usually spend on a few image post-processing effects in shared memory.