kernel launch failure in Matlab

Hi Forum,

I’m very new to GPU programming, and I’m trying to write a simple CUDA program to speed up calculations of the digamma function for large 2D matrices. I’ve adapted a C implementation of the digamma function (from the “lightspeed” Matlab toolbox) to run on the GPU. I then compile the .cu code to a .ptx file and instantiate the kernel in Matlab by calling

k = parallel.gpu.CUDAKernel('digamma_test.ptx','digamma_test.cu');

and then use

o = feval(k,matrix);

The function works great, providing huge speed-ups on large matrices. The problem is that after a bunch of feval calls I get an error when I try to instantiate the kernel or call feval: “CUDA_ERROR_LAUNCH_FAILED”, with no more information than that. If I restart Matlab, it works again for about the same amount of time before the error comes back.
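
For reference, here is roughly the full sequence I run (the matrix size and launch configuration below are only illustrative; I set GridSize from the matrix dimensions each time, not with these exact numbers):

% compile the kernel to PTX outside Matlab, e.g.: nvcc -ptx digamma_test.cu -o digamma_test.ptx
k = parallel.gpu.CUDAKernel('digamma_test.ptx','digamma_test.cu');

A = rand(1000,2000);                     % example input matrix
k.ThreadBlockSize = [1 1 1];             % one thread per block; the kernel indexes by blockIdx only
k.GridSize = [size(A,1) size(A,2) 1];    % one block per matrix element

o = feval(k,A);                          % o comes back as a gpuArray
result = gather(o);                      % copy the result back into host memory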

My suspicion is that I'm not managing memory very well. I've never worked in a language like CUDA or C where memory has to be managed explicitly, so I'm sure I'm leaving something out; I just don't know what.

Here’s my kernel:

#include <math.h>
#include <stdlib.h>
#include <float.h>

/* this is directly from Minka's lightspeed toolbox */
__global__ void digamma(double *y)
{
    /* one thread per block: each block handles one matrix element */
    int idx_x = blockIdx.x;
    int idx_y = blockIdx.y;
    int offset = idx_x + idx_y * gridDim.x;

    double x = y[offset];
    double neginf = -INFINITY;
    const double c = 12,
                 digamma1 = -0.57721566490153286,
                 trigamma1 = 1.6449340668482264365, /* pi^2/6 */
                 s = 1e-6,
                 s3 = 1./12,
                 s4 = 1./120,
                 s5 = 1./252,
                 s6 = 1./240,
                 s7 = 1./132,
                 s8 = 691./32760,
                 s9 = 1./12,
                 s10 = 3617./8160;
    double result;

    /* Illegal arguments */
    if ((x == neginf) || isnan(x)) {
        y[offset] = NAN;
        return;
    }

    /* Singularities */
    if ((x <= 0) && (floor(x) == x)) {
        y[offset] = neginf;
        return;
    }

    /* Negative values */
    /* Use the reflection formula (Jeffrey 11.1.6):
     * digamma(-x) = digamma(x+1) + pi*cot(pi*x)
     *
     * This is related to the identity
     * digamma(-x) = digamma(x+1) - digamma(z) + digamma(1-z)
     * where z is the fractional part of x
     * For example:
     * digamma(-3.1) = 1/3.1 + 1/2.1 + 1/1.1 + 1/0.1 + digamma(1-0.1)
     *               = digamma(4.1) - digamma(0.1) + digamma(1-0.1)
     * Then we use
     * digamma(1-z) - digamma(z) = pi*cot(pi*z)
     *
     * if (x < 0) {
     *     *p = digamma(p, 1-x) + M_PI/tan(-M_PI*x);
     *     return;
     * }
     */

    /* Use Taylor series if argument <= S */
    if (x <= s) {
        y[offset] = digamma1 - 1/x + trigamma1*x;
        return;
    }

    /* Reduce to digamma(X + N) where (X + N) >= C */
    result = 0;
    while (x < c) {
        result -= 1/x;
        x++;
    }

    /* Use de Moivre's expansion if argument >= C */
    /* This expansion can be computed in Maple via asympt(Psi(x),x) */
    if (x >= c) {
        double r = 1/x, t;
        result += log(x) - 0.5*r;
        r *= r;
#if 0
        result -= r * (s3 - r * (s4 - r * (s5 - r * (s6 - r * s7))));
#else
        /* this version for lame compilers */
        t = (s5 - r * (s6 - r * s7));
        result -= r * (s3 - r * (s4 - r * t));
#endif
    }

    /* assign the result to the pointer */
    y[offset] = result;
    return;
}

The error you are seeing comes from the Parallel Computing Toolbox™ from MathWorks® that you are using. Other people have reported similarly broken behaviour in PCT™, so you are not alone. You might try asking the MathWorks people what is wrong…

Or you might try using something better :)

Good luck.