D3D interop RELOADED: isn't it supposed to be better than OpenGL...?

I’ve read posts from a lot of people who have asked similar things before, but I haven’t found a really good explanation of this.

(and yes… I used the search button a lot of times…)

I’d really appreciate any useful explanation or link.

[codebox]void HiddenRender()
{
    // RENDER MODEL GEOMETRY
    g_renderToSurface1->BeginScene( targetTexture, &g_viewport1 );

    // … lights, matrices, geometry …

    g_renderToSurface1->EndScene( D3DTEXF_LINEAR );

    if ( g_applyCuda )
    {
        cudaError_t res;

        // Map the render-target texture so CUDA can access it
        res = cudaD3D9MapResources( 1, (IDirect3DResource9**)&targetTexture );

        // MY OWN BIZARRE MACHINE-VISION STUFF
        // RunKernels();

        res = cudaD3D9UnmapResources( 1, (IDirect3DResource9**)&targetTexture );
    }
}[/codebox]

Ahh, I forgot something:

g_pd3dDevice->CreateTexture( Width, Height, 1, D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &targetTexture, NULL );
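(And of course the texture is registered with CUDA once at startup before any mapping. Roughly like this; a simplified sketch with no error handling, the function names besides the CUDA/D3D calls are just mine:)

[codebox]#include <d3d9.h>
#include <cuda_d3d9_interop.h>

// One-time setup (sketch): associate CUDA with the D3D9 device and register
// the render-target texture, so that only map/unmap remains as per-frame work.
void InitCudaInterop( IDirect3DDevice9* pDevice, IDirect3DTexture9* targetTexture )
{
    // Must be called before any other interop call
    cudaD3D9SetDirect3DDevice( pDevice );

    // Register once; the texture is then mapped/unmapped every frame
    cudaD3D9RegisterResource( targetTexture, cudaD3D9RegisterFlagsNone );
}

void CleanupCudaInterop( IDirect3DTexture9* targetTexture )
{
    cudaD3D9UnregisterResource( targetTexture );
}[/codebox]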

Now, what annoys me:

(on the same machine: CUDA 2.1, VS2008 Express Edition, Core Duo + 8500 GT…)

ONLY RENDERING -->> 8000 fps

ONLY MAPPING/UNMAPPING -->> 700 fps

The 'SobelFilter' example (OpenGL) -->> 14000 fps

The 'simpleD3D9Texture' example, JUST MAPPING/UNMAPPING, with texture sizes similar to mine -->> about 700 fps

Isn't D3D interop supposed to be faster than OpenGL at the moment…? (I read a post about that…)

Why does it take all that time to map a texture that is supposedly already on the device?

Furthermore, why do I need to map that texture every cycle? (I always work on the same one; it should be enough to 'lock' it once…)

Is there another way to access the rendered surface/texture without that penalty…?

(so I can succeed, receive congratulations from my boss, and get fame and women…)

Thank you guys for your time.

Direct3D interoperability requires the driver to convert from the special texture format to linear format by doing a copy behind the scenes, so it has many of the same issues as OpenGL interop.
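To make that concrete: once mapped, the texture shows up on the CUDA side as plain pitched linear memory, which is the result of that behind-the-scenes copy. A rough sketch of the per-frame access with the CUDA 2.x D3D9 interop calls (error checking omitted, and MyVisionKernel is just a placeholder, not anything from the original post):

[codebox]#include <d3d9.h>
#include <cuda_d3d9_interop.h>

// Placeholder kernel: touches one byte of each X8R8G8B8 pixel in the mapped texture
__global__ void MyVisionKernel( unsigned char* img, size_t pitch, int width, int height )
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if ( x < width && y < height )
    {
        unsigned char* pixel = img + y * pitch + x * 4;   // 4 bytes per pixel
        pixel[0] = 255 - pixel[0];                        // invert the first channel
    }
}

// Per-frame access (sketch): the mapped resource is exposed as pitch-linear
// device memory, i.e. the linear copy the driver made behind the scenes.
void RunCudaOnMappedTexture( IDirect3DTexture9* targetTexture, int width, int height )
{
    cudaD3D9MapResources( 1, (IDirect3DResource9**)&targetTexture );

    void*  devPtr = 0;
    size_t pitch  = 0;
    cudaD3D9ResourceGetMappedPointer( &devPtr, targetTexture, 0, 0 );     // face 0, level 0
    cudaD3D9ResourceGetMappedPitch( &pitch, NULL, targetTexture, 0, 0 );

    dim3 block( 16, 16 );
    dim3 grid( (width + 15) / 16, (height + 15) / 16 );
    MyVisionKernel<<< grid, block >>>( (unsigned char*)devPtr, pitch, width, height );

    cudaD3D9UnmapResources( 1, (IDirect3DResource9**)&targetTexture );
}[/codebox]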

Thanks for the explanation, Master Green.

I imagined something like that, but the existence of the 'tex1D(…)' and 'tex2D(…)' functions made me think that CUDA was able to access those textures directly.
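(For what it's worth, my understanding is that tex2D only reaches the mapped D3D9 texture through the cudaArray that the map call exposes, i.e. still via that copy. A rough sketch, assuming the texture was registered with the array flag at init time:)

[codebox]#include <d3d9.h>
#include <cuda_d3d9_interop.h>

// Texture reference used to sample the mapped D3D9 texture (X8R8G8B8 -> uchar4)
texture<uchar4, 2, cudaReadModeNormalizedFloat> g_texRef;

__global__ void ReadThroughTex2D( float* out, int width, int height )
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if ( x < width && y < height )
        out[y * width + x] = tex2D( g_texRef, x + 0.5f, y + 0.5f ).x;  // one channel, normalized float
}

void LaunchWithTexture( IDirect3DTexture9* targetTexture, float* d_out, int width, int height )
{
    // Requires the texture to have been registered with cudaD3D9RegisterFlagsArray
    cudaD3D9MapResources( 1, (IDirect3DResource9**)&targetTexture );

    cudaArray* texArray = 0;
    cudaD3D9ResourceGetMappedArray( &texArray, targetTexture, 0, 0 );   // face 0, level 0
    cudaBindTextureToArray( g_texRef, texArray );

    dim3 block( 16, 16 );
    dim3 grid( (width + 15) / 16, (height + 15) / 16 );
    ReadThroughTex2D<<< grid, block >>>( d_out, width, height );

    cudaUnbindTexture( g_texRef );
    cudaD3D9UnmapResources( 1, (IDirect3DResource9**)&targetTexture );
}[/codebox]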

Anyway, I'm sure the gurus at NVIDIA will make some improvements to this in future versions.

As a developer of machine vision systems, I have great expectations and great plans for CUDA :ph34r: