I’m trying to use CUDA and OpenGL to write a viewer for large (say 12k x 12k) images. I’m storing the images in video memory as an array of 16-bit indices and a palette of 32-bit RGB values. I’m then displaying (part of) this image in a display window with pan and zoom controls. I do this by processing the data into an OpenGL PBO which I then copy into an OpenGL texture and use to draw a quad into the backbuffer of my window.
I have a couple of problems. Firstly, it doesn’t seem to be very fast. For a 1600x1200 window the whole process takes around 30ms on a GeForce 8600GT. Secondly, it seems to be using 100% of the host CPU. Can anyone help?
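Roughly, the per-frame flow looks like this (a simplified sketch with illustrative names; lookupKernel, winW/winH and the pan/zoom parameters stand in for my real code, and error checking is omitted):

// The PBO was registered once at startup with cudaGLRegisterBufferObject(pbo).
uchar4* d_pbo = 0;
cudaGLMapBufferObject((void**)&d_pbo, pbo);          // map the PBO for CUDA

dim3 block(16, 16);
dim3 grid((winW + block.x - 1) / block.x, (winH + block.y - 1) / block.y);
lookupKernel<<<grid, block>>>(d_pbo, d_palette, winW, winH, srcX, srcY, zoom);  // fill the PBO

cudaGLUnmapBufferObject(pbo);                        // hand the PBO back to OpenGL

glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);       // copy PBO -> texture
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, winW, winH,
                GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
// ...then draw a textured quad into the backbuffer and swap.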
Why do you use each thread only once, rather than a loop like "while(ix < nWindowSizeX && iy < nWindowSizeY)"? What are your blockNum and threadNum?
You could use fewer blocks (bn) and threads (tn) if each thread processes multiple elements in a "while" loop. I always scan a 1D array like the kernel below, for coalesced reads. As a first guess, before tuning performance, bn = tn = 256 is a safe start.
// Grid-stride loop: each thread processes every gridLen-th element, so a
// fixed launch configuration covers any array length nR with coalesced
// reads and writes.
template <typename T>
__global__ void scan(T* d_Dst, T* d_Src, const int nR)
{
    const int gridLen = gridDim.x * blockDim.x;          // total thread count
    int offset = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's first element
    T tmpR;
    while (offset < nR)
    {
        tmpR = d_Src[offset];
        // do something with tmpR
        d_Dst[offset] = tmpR;
        offset += gridLen;                               // stride to the next element
    }
}
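For example, a launch like this (d_Dst, d_Src and nR are just illustrative names for your buffers) covers the whole range with a fixed configuration:

// Illustrative launch: 256 blocks of 256 threads; each thread strides
// through the array until all nR elements have been processed.
const int threadNum = 256;
const int blockNum  = 256;
scan<float><<<blockNum, threadNum>>>(d_Dst, d_Src, nR);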
As for the CPU: since the PBO is already on the GPU there is no host transfer overhead, so 100% usage is weird. Perhaps your UI state machine is the problem. You can download a trial of AMD's CodeAnalyst to find the bottleneck.
I've now written the same thing using plain DirectX 9.0 and a custom pixel shader, and it's much faster (about 10 ms instead of 30 ms) with almost 0% host CPU usage. Unfortunately there are a couple of major drawbacks. Firstly, I'm limited to 8k x 8k images. Secondly, I have to use an A8L8 texture for my image data. This is a big problem because A8L8 isn't supported as a render target in DirectX, so my image has to come directly from the host and can't be pre-processed on the GPU.
Does anyone have any ideas why my CUDA version is so slow? Is there any chance of 16k or 32k textures in a future version of the DirectX driver?
I started from the image denoising sample program. I’m still a little bit vague on exactly how threads, grids and blocks fit together. Are there any complete samples that are closer to what I should be doing?
As far as the 100% host CPU usage goes, I think at least half the problem is the OpenGL interop. I think some of those calls block if the GPU is busy. I've tried putting a few Sleep() calls in and I can reclaim some time, but with no way to find out whether a function is going to block it's pretty difficult to do much.
Is your image stored as GL_RGB? If so, it will need to be unpacked by the CPU to a four-component format before CUDA gets hold of it. The fast path is GL_RGBA.
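For example, something like this keeps the upload on the fast path (a sketch; tex, winW and winH are just illustrative names):

// Allocate the window-sized destination texture as RGBA so the
// glTexSubImage2D copy from the PBO needs no CPU-side format conversion.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, winW, winH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);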
Have you installed the newest SDK, which supports asynchronous kernel calls?
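With asynchronous launches you can also avoid the driver busy-wait, something like this sketch (whether it helps depends on which call is actually blocking; the kernel and its arguments are just placeholders):

// The launch returns immediately; poll for completion with a short Sleep()
// between checks so the wait is spent sleeping instead of in a driver
// busy-wait, which is what shows up as 100% CPU.
scan<float><<<blockNum, threadNum>>>(d_Dst, d_Src, nR);  // asynchronous launch
while (cudaStreamQuery(0) == cudaErrorNotReady)          // kernel still running?
    Sleep(1);                                            // yield the CPU (<windows.h>)
// once this falls through, the blocking interop calls should return quickly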
I don't think DirectX support for textures larger than 8k x 8k is coming any time soon.
As for the A8L8 issue: I think DirectX does have support for this kind of two-channel 8-bit format; you can check which D3DFMTs your card supports to find a workaround. Or you can pad it out to a four-channel format, in which case you waste half of the memory.
Does anyone know exactly why DirectX is limited to 8k x 8k when CUDA isn’t? Is there some part of the hardware that CUDA isn’t using that imposes this limit or is it just a software/driver thing?
The only 16-bit-per-pixel texture format supported as a render target under DirectX is R16F, which I can't use because of the loss of precision and the hassle of converting between float and half-float. I can't really afford the memory for a 32-bit-per-pixel texture.
I don't really understand this question. My source image is stored as a CUDA array of ushort1 for the indices, plus a uchar4 palette in CUDA linear memory. The destination is a PBO which gets copied into a GL_RGBA texture using glTexSubImage2D.
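In outline, the kernel does something like this (a simplified sketch; the real code has more pan/zoom arithmetic and different names):

// Simplified sketch of the lookup kernel (illustrative names).
// texIndices is a 2D texture reference bound to the ushort1 CUDA array of
// indices; d_palette is the uchar4 palette in linear device memory; d_pbo
// is the mapped PBO the result is written into.
texture<ushort1, 2, cudaReadModeElementType> texIndices;

__global__ void lookupKernel(uchar4* d_pbo, const uchar4* d_palette,
                             int winW, int winH, float srcX, float srcY, float zoom)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= winW || y >= winH)
        return;

    // Fetch the 16-bit index for this output pixel (pan/zoom applied here).
    ushort1 idx = tex2D(texIndices, srcX + x * zoom, srcY + y * zoom);

    // Look it up in the palette and write the RGBA result into the PBO.
    d_pbo[y * winW + x] = d_palette[idx.x];
}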
As I understand it, my use of tex2D means that I am accessing texture memory space and that these accesses will be cached. Would I get better performance using global memory space?
Okay, I've tried using global memory instead of texture memory for my image data. I've tried using a loop within my kernel. I've tried lots of different grid/block sizes and nothing is helping. What am I doing wrong? How do I write it so that the hardware is used in as similar a way as possible to how it is under Direct3D?
I see you're using two __syncthreads() calls in your code. Can you remove one or both of them and report what happens? Since there is no write dependency in your code, everything should still be functionally correct, but it would be interesting to see the performance difference.
I’ve just retested it and actually they don’t seem to make any difference at all. I can take out one or both and my times are the same. Typical lines from the profiler log look like this: