Hi, I’m a complete beginner with CUDA. I wrote two basic programs that do pixel-by-pixel subtraction between two images (i.e. consecutive frames from a movie), one with OpenCV and the other with CUDA, to compare their speed. However, I get some weird results from the CUDA one:
[attachment=6527:attachment]
For comparison, with OpenCV I get this:
[attachment=6526:attachment]
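(For reference, the OpenCV side of the comparison is essentially just a subtraction call; the snippet below is a simplified sketch with placeholder file names, not my exact program, assuming 8-bit grayscale frames and a recent OpenCV. Note that cv::subtract saturates negative results to 0 on 8-bit images, which is one reason that output looks clean.)

#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder file names -- two consecutive movie frames.
    cv::Mat frame1 = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat frame2 = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);

    cv::Mat diff;
    cv::subtract(frame1, frame2, diff);   // saturates negative values to 0 on CV_8U

    cv::imwrite("diff.png", diff);
    return 0;
}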
Here is my kernel:
#ifndef _DIFFIMAGES_KERNEL_H_
#define _DIFFIMAGES_KERNEL_H_

texture<float, 2, cudaReadModeElementType> tex1; // First frame, bound as a 2D texture
texture<float, 2, cudaReadModeElementType> tex2; // Second frame

__global__ void
diffKernel(float* g_odata, int width, int height)
{
  // One thread per pixel: compute this thread's (x, y) coordinate.
  unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
  unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;

  // Guard against threads outside the image when the grid overshoots.
  if (x < width && y < height)
    g_odata[y*width + x] = tex2D(tex1, x, y) - tex2D(tex2, x, y);
}

#endif
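For reference, the host side around this kernel looks roughly like the following. It is a simplified sketch, not my exact program: the function name, buffer names, and the header file name are placeholders, and it uses the legacy texture-reference API that matches the kernel above (cudaArray storage plus cudaBindTextureToArray).

#include <cuda_runtime.h>
#include "diffImages_kernel.h"   // placeholder name for the header holding tex1, tex2 and diffKernel

void diffImages(const float* h_frame1, const float* h_frame2,
                float* h_result, int width, int height)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();

    // Copy both frames into CUDA arrays so they can be read through the textures.
    cudaArray* d_frame1;
    cudaArray* d_frame2;
    cudaMallocArray(&d_frame1, &desc, width, height);
    cudaMallocArray(&d_frame2, &desc, width, height);
    size_t rowBytes = width * sizeof(float);
    cudaMemcpy2DToArray(d_frame1, 0, 0, h_frame1, rowBytes, rowBytes, height,
                        cudaMemcpyHostToDevice);
    cudaMemcpy2DToArray(d_frame2, 0, 0, h_frame2, rowBytes, rowBytes, height,
                        cudaMemcpyHostToDevice);

    cudaBindTextureToArray(tex1, d_frame1, desc);
    cudaBindTextureToArray(tex2, d_frame2, desc);

    // Output buffer in plain global memory.
    float* d_result;
    cudaMalloc((void**)&d_result, rowBytes * height);

    // One thread per pixel, 16x16 blocks.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    diffKernel<<<grid, block>>>(d_result, width, height);

    cudaMemcpy(h_result, d_result, rowBytes * height, cudaMemcpyDeviceToHost);

    cudaUnbindTexture(tex1);
    cudaUnbindTexture(tex2);
    cudaFreeArray(d_frame1);
    cudaFreeArray(d_frame2);
    cudaFree(d_result);
}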
I got that error so many times. :P
It’s very common; you just have to learn to recognize it.
Actually, your image does show the correct result, but wherever the subtraction comes out negative the value overflows (wraps around) when it is converted for display, so high-intensity pixels appear in place of the correct ones.
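A minimal sketch of one way to avoid that (my own variant of your kernel, not your code): take the absolute difference in the kernel, so the value can never go negative before it is packed into 8-bit pixels for display.

__global__ void
absDiffKernel(float* g_odata, int width, int height)
{
  unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
  unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;

  // fabsf keeps the difference non-negative, so nothing wraps around
  // when the float result is later converted to 8-bit for display.
  if (x < width && y < height)
    g_odata[y*width + x] = fabsf(tex2D(tex1, x, y) - tex2D(tex2, x, y));
}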
Hi, I am trying to bring images into a CUDA program, and it seems you have done it well. Could you help me with how you did your image subtraction using CUDA?