Troubleshooting with GTX 285

Hi All!

I am working with a GTX 285 on a Mac. The GPU is used only for CUDA computation (a second GPU handles the display).
My CUDA application is currently facing three problems.

T1) My application performs some operations on a given volume. It works well with small volumes.
But when I double the volume dimensions, it returns an error: "unspecified launch failure".
In brief, my kernel is invoked from host code as follows:

for (int iter = 0; iter < NITER; iter++) {
    cudaThreadSynchronize();
    invoke_cuda_kernel();
}

The CUDA error mentioned above occurs after a random number of iterations. When I checked the results before the error, everything was correct.
Does anyone have ideas or advice on what could cause this error and how to resolve it?
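One common way to narrow down an "unspecified launch failure" is to check the error status after every kernel launch and every synchronization. A minimal sketch (not from the original post; `invoke_cuda_kernel` is the placeholder name the poster used):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Sketch: per-iteration error checking around the poster's loop.
for (int iter = 0; iter < NITER; iter++) {
    invoke_cuda_kernel();

    // Catch launch/configuration errors immediately...
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess) {
        fprintf(stderr, "iteration %d: launch error: %s\n",
                iter, cudaGetErrorString(err));
        break;
    }

    // ...and execution errors (e.g. out-of-bounds access) at the sync point.
    err = cudaThreadSynchronize();  // cudaDeviceSynchronize() in newer CUDA
    if (err != cudaSuccess) {
        fprintf(stderr, "iteration %d: execution error: %s\n",
                iter, cudaGetErrorString(err));
        break;
    }
}
```

An unspecified launch failure is often the GPU equivalent of a segfault (an out-of-bounds memory access inside the kernel), which would fit the observation that it only appears with larger volumes.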

T2) After I get the error, the application terminates (of course). Right after that, when I try to run the application again with a small volume, which worked fine before the error, I never get any response from the GPU. It just stops working; I cannot even terminate the application from the terminal with Ctrl+Z, so to run my application again I need to reboot my computer. Why do I have this kind of problem? Does anyone have a suggestion? Similarly, after terminating my application with Ctrl+Z in the terminal during the iteration, I encounter the same problem. What should I do to resolve it?

T3) I hope I can get some valuable suggestions on the two questions above. But if not, is there a way to reset my GPU from the terminal so that I do not need to restart my computer? If there is, please advise.

Thanks in advance for all suggestions, comments, advice, and replies.
Best,
ss

I cannot provide any useful help, just want to note that running CUDA on any GPU other than the primary display GPU unfortunately isn’t supported on the Mac.

Thanks for the reply, tera.

I didn’t know that the Mac does not support using two GPUs for different purposes (display and CUDA computation).

Currently, I’m using a GT 120 and a GTX 285; the first GPU is used for display and the other for CUDA computation.

I’m pretty sure about this because System Information shows the monitor connected to the GT 120 and no display connections on the GTX 285. Furthermore, I ran simple CUDA code to check which GPU is set for CUDA computation, and it reports the GTX 285. The code is as follows:

int devID = cutGetMaxGflopsDeviceId();
cudaSetDevice(devID);
cudaGetDevice(&devID);

cudaDeviceProp props;
cudaGetDeviceProperties(&props, devID);
printf("%s\n", props.name);

The code above prints “GeForce GTX 285.”
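For reference, here is a minimal sketch (not from the original post) that enumerates every CUDA-capable device rather than only the fastest one, which makes it easy to see both the GT 120 and the GTX 285 and their device indices:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; i++) {
        cudaDeviceProp props;
        cudaGetDeviceProperties(&props, i);
        printf("device %d: %s (compute capability %d.%d)\n",
               i, props.name, props.major, props.minor);
    }
    return 0;
}
```

Passing either index to cudaSetDevice() then selects that GPU explicitly, instead of relying on cutGetMaxGflopsDeviceId() to pick the fastest one.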

Do you mean that, even so, I’m still using the GT 120 for CUDA computation, not the GTX 285?

I’m confused.

Thanks & Best,

ss

Sorry - I didn’t mean to imply it does not work, it’s just not officially supported. Check the release notes; I think they state that CUDA is only supported on the primary GPU. I was very disappointed to read that.

Dear tera,

I haven’t checked the release notes yet, but I tested the issue (CUDA only being supported on the primary GPU on the Mac) on my computer.

It seems the Mac automatically sets as its primary GPU the most powerful of the installed GPUs.

So, in my case, even though the display is connected to the GT 120 (not the primary one), CUDA can run on the GTX 285.

Also, using the code I posted earlier, I can run CUDA on both GPUs.

So, I think the release notes for the Mac should be amended to say that CUDA can run on all GPUs (or at least two GPUs) installed in a Mac.

Thanks & Best,

ss