Interop populated cudaArray to ID3D11Resource?

Hey folks!

Apologies if this is a simple question, but I’ve really been struggling to find anything useful about cudaArrays, and every CUDA guide I’ve found hardcodes its textures beforehand!

I’m writing middleware between two processes that render scenes. The endpoint expects an ID3D11Resource or an ID3D12Resource. As input from the first process I have a populated cudaArray, registered with the cudaGraphicsRegisterFlagsSurfaceLoadStore flag, that is filled in each frame.

Here is my code (my ID3D11Device is already initialized elsewhere). This portion runs every frame.

// At the top of the file:
#include <cuda_runtime.h>
#include <iostream>

const OP_TOPInput* inTOP = inputs->getInputTOP(0);
if (!inTOP) { return; }

cudaArray* cudaData = inTOP->cudaInput;

// Query the channel format and dimensions of the incoming array.
cudaChannelFormatDesc desc;
cudaExtent extent;
if (cudaArrayGetInfo(&desc, &extent, nullptr, cudaData) != cudaSuccess) { return; }

// Note: cudaArrayGetSparseProperties fails with cudaErrorInvalidValue unless
// the array was allocated as a sparse array, so check the result.
cudaArraySparseProperties props;
if (cudaArrayGetSparseProperties(&props, cudaData) != cudaSuccess) { /* not sparse */ }

m_frameInputHeight = extent.height;
m_frameInputWidth = extent.width;

if (extent.height && desc.f) {
	std::cout << "ChannelFormat: " << desc.f << "     Extent: " << extent.height << std::endl;
}

I’m just wondering what the simplest path is (or whether one even exists!) between that populated cudaArray and an ID3D11Texture2D. Simply casting cudaData to a void* and passing it to ID3D11Device::CreateTexture2D() causes a segfault.
My assumption is that I need to declare a textureReference, call cudaBindTexture() to populate that reference, and then pass the textureReference through the CUDA-D3D11 interop functions until I’ve got something I can call ID3D11Device::CreateTexture2D() on?

Thanks in advance for your help.

CUDA/graphics interop doesn’t start with something (i.e. a data container) that is instantiated on the CUDA side. It starts with a resource provided by the graphics side, from which you extract a view to the underlying data (which you can then modify if you wish). The allocation does not come from CUDA, it comes from the graphics side.

If you have a graphics resource and extract the appropriate reference to the allocation, you then copy your data into that reference/allocation. If the data you want to copy starts out in a cudaArray, you’ll need to use some form of cudaMemcpyArrayXXXX API call (or write a kernel to write to it). But the target of that memcpy operation/kernel will be a resource provided by the graphics side.
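
You don’t need a texture reference or cudaBindTexture() for any of this, by the way; texture references are for reading an array inside a kernel (and that API is deprecated). Roughly, the one-time setup looks like the following. This is just an untested sketch assuming an RGBA8 texture and a valid ID3D11Device* named device; the DXGI format must match whatever your cudaChannelFormatDesc actually reports.

// At the top of the file:
#include <d3d11.h>
#include <cuda_runtime.h>
#include <cuda_d3d11_interop.h>

// One-time setup: create the D3D11 texture, then register it with CUDA
// so it can be mapped and written to from the CUDA side.
D3D11_TEXTURE2D_DESC td = {};
td.Width            = m_frameInputWidth;           // from your cudaArrayGetInfo() extent
td.Height           = m_frameInputHeight;
td.MipLevels        = 1;
td.ArraySize        = 1;
td.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;  // assumption: 4 x 8-bit channels
td.SampleDesc.Count = 1;
td.Usage            = D3D11_USAGE_DEFAULT;
td.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* tex = nullptr;
device->CreateTexture2D(&td, nullptr, &tex);

cudaGraphicsResource* cudaTex = nullptr;
cudaGraphicsD3D11RegisterResource(&cudaTex, tex, cudaGraphicsRegisterFlagsNone);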

Beyond that, have you studied any of the sample codes?

This sample code shows how to populate a D3D11 texture using data from CUDA.

Thanks so much for your help!

I’ve been studying the sample you linked for the past two days, haha. Perhaps I’m just dumb? Like I said in the OP, I’ve really struggled with the samples hardcoding textures with known vertices and whatnot, when all I’ve got access to is that cudaArray*, plus numColorBuffers, depthBits, and stencilBits.

It seems like what you’re saying is that I’ll need to instantiate an ID3D11Texture2D and then feed that resource to cudaMemcpyArrayXXX(), and then my ID3D11Texture2D will hold the cudaArray’s data? Yeah, I see that now in the sample’s RunKernels() method.
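
Yes, that’s the idea. Per frame it would look something like this (again just a sketch; cudaTex is the resource registered in the earlier snippet, and the * 4 assumes the 4-bytes-per-pixel RGBA8 format from there):

// Each frame: map the registered D3D11 texture, get the cudaArray that
// backs it, and copy the incoming array into it on the device.
cudaGraphicsMapResources(1, &cudaTex);

cudaArray_t dstArray = nullptr;
cudaGraphicsSubResourceGetMappedArray(&dstArray, cudaTex, 0, 0);

// cudaMemcpy2DArrayToArray takes a width in bytes, not pixels.
cudaMemcpy2DArrayToArray(dstArray, 0, 0,
                         cudaData, 0, 0,
                         m_frameInputWidth * 4, m_frameInputHeight,
                         cudaMemcpyDeviceToDevice);

cudaGraphicsUnmapResources(1, &cudaTex);

After the unmap, tex holds the frame, and since ID3D11Texture2D derives from ID3D11Resource you can hand it straight to your endpoint.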