CUDA & DirectX interop issue: D3DFORMAT question

While experimenting with Direct3D 9 interop, I noticed that certain D3DFORMATs are not supported, for example D3DFMT_A2R10G10B10. The call to cudaD3D9RegisterResource(pResource, cudaD3D9RegisterFlagsNone) simply fails for those formats. This is quite annoying, as I had assumed CUDA would treat the incoming pixels as 32-bit or 64-bit memory regardless of the underlying pixel format, be it D3DFMT_A8R8G8B8 or D3DFMT_A2R10G10B10. I was obviously wrong, and I see that NVIDIA's API documentation says that “Textures which are not of a format which is 1, 2, or 4 channels of 8, 16, or 32-bit integer or floating-point data cannot be shared.”
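
For reference, this is roughly how the failure shows up (a minimal sketch, not my actual code; pDevice, width, and height are placeholders, and I'm assuming cudaD3D9SetDirect3DDevice(pDevice) was already called):

```cpp
#include <d3d9.h>
#include <cuda_runtime.h>
#include <cuda_d3d9_interop.h>
#include <cstdio>

// Sketch only: pDevice is an initialized IDirect3DDevice9* that has already
// been bound to CUDA with cudaD3D9SetDirect3DDevice(pDevice).
void tryRegister10BitTexture(IDirect3DDevice9* pDevice, UINT width, UINT height)
{
    // Create a 2-10-10-10 texture in the default pool.
    IDirect3DTexture9* pTex10 = NULL;
    pDevice->CreateTexture(width, height, 1, 0, D3DFMT_A2R10G10B10,
                           D3DPOOL_DEFAULT, &pTex10, NULL);

    // The same call succeeds for a D3DFMT_A8R8G8B8 texture but fails here.
    cudaError_t err = cudaD3D9RegisterResource(pTex10, cudaD3D9RegisterFlagsNone);
    if (err != cudaSuccess)
        printf("cudaD3D9RegisterResource failed: %s\n", cudaGetErrorString(err));
}
```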

My current workaround is to use D3DFMT_A16B16G16R16, a 64-bit format, in place of D3DFMT_A2R10G10B10. The code works, but the performance is not satisfactory (a rough sketch of the setup is included below, after my questions). Here are my two questions:

  1. Is there any better workaround so that I can use D3DFMT_A2R10G10B10 in CUDA?
  2. Why does CUDA impose this restriction on D3DFORMATs?
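
For reference, here is a minimal sketch of what the workaround setup looks like. The variable names are placeholders, error checking is omitted, and I'm using the non-Array register path so the texture maps to linear device memory:

```cpp
#include <d3d9.h>
#include <cuda_runtime.h>
#include <cuda_d3d9_interop.h>

// Each pixel is a 64-bit ushort4 instead of a packed 32-bit 2-10-10-10 value.
__global__ void processPixels(unsigned char* surface, int width, int height,
                              size_t pitch)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    ushort4* pixel = (ushort4*)(surface + y * pitch) + x;
    pixel->w = 0xFFFF;  // example per-pixel work: force alpha to fully opaque
}

void runWorkaround(IDirect3DDevice9* pDevice, int width, int height)
{
    // Create the texture in a 64-bit format that CUDA accepts, standing in
    // for D3DFMT_A2R10G10B10.
    IDirect3DTexture9* pTex16 = NULL;
    pDevice->CreateTexture(width, height, 1, 0, D3DFMT_A16B16G16R16,
                           D3DPOOL_DEFAULT, &pTex16, NULL);

    // This registration succeeds: 4 channels of 16-bit data.
    cudaD3D9RegisterResource(pTex16, cudaD3D9RegisterFlagsNone);

    // Map the resource and fetch a device pointer plus the row pitch.
    cudaD3D9MapResources(1, (IDirect3DResource9**)&pTex16);
    void* devPtr = NULL;
    size_t pitch = 0, pitchSlice = 0;
    cudaD3D9ResourceGetMappedPointer(&devPtr, pTex16, 0, 0);
    cudaD3D9ResourceGetMappedPitch(&pitch, &pitchSlice, pTex16, 0, 0);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    processPixels<<<grid, block>>>((unsigned char*)devPtr, width, height, pitch);

    cudaD3D9UnmapResources(1, (IDirect3DResource9**)&pTex16);
}
```

Every pixel is now a 64-bit ushort4 instead of a packed 32-bit 2-10-10-10 value, which is presumably where the extra bandwidth cost (and the unsatisfactory performance) comes from.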

Thanks!