Launch Timeouts

Can you break your kernel up into smaller chunks and call them separately? That would be my first step (also, it’ll help you keep your kernel code maintainable). Or, can you partition your data set in some way and call the kernel on the partitions to get some intermediate results, then (perhaps) a final kernel call to get the ‘true’ results?

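A minimal sketch of what that chunked approach can look like; the kernel, the per-element work, and the chunk size here are all hypothetical, and the point is simply that each launch finishes well under the watchdog limit:

```cpp
#include <cuda_runtime.h>

__global__ void processChunk(float *data, int offset, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        data[offset + i] *= 2.0f;  // stand-in for the real per-element work
}

void processAll(float *d_data, int n)
{
    const int chunk   = 1 << 20;   // elements per launch; keep each launch short
    const int threads = 256;
    for (int offset = 0; offset < n; offset += chunk) {
        int count  = (n - offset < chunk) ? n - offset : chunk;
        int blocks = (count + threads - 1) / threads;
        processChunk<<<blocks, threads>>>(d_data, offset, count);
        cudaDeviceSynchronize();   // each short launch completes before the watchdog can fire
    }
}
```
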
I would strongly advise taking that route instead of trying to deal with the watchdog timer (unless, as MisterAnderson said, you’re on Linux and you don’t mind disabling the X server).

Indeed. If you are developing any application with a non-remote GUI, you most certainly do not want the UI to freeze completely for 5 seconds each time a kernel is called; breaking the computation up into small kernel calls is the only option in that case.

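A minimal sketch of that pattern, assuming a hypothetical `smallStep` kernel and a `pumpUIEvents()` placeholder for whatever your GUI toolkit uses to process pending events:

```cpp
#include <cuda_runtime.h>

__global__ void smallStep(float *data, int offset, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        data[offset + i] += 1.0f;  // stand-in for the real work
}

void pumpUIEvents() { /* hypothetical: service the GUI event loop here */ }

void runWithoutFreezing(float *d_data, int n)
{
    const int chunk   = 1 << 18;
    const int threads = 256;
    for (int offset = 0; offset < n; offset += chunk) {
        int count  = (n - offset < chunk) ? n - offset : chunk;
        int blocks = (count + threads - 1) / threads;
        smallStep<<<blocks, threads>>>(d_data, offset, count);
        // Poll for completion instead of blocking in cudaDeviceSynchronize(),
        // so the UI thread can keep handling events while the chunk runs.
        while (cudaStreamQuery(0) == cudaErrorNotReady)
            pumpUIEvents();
    }
}
```
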
I also have the kernel timeout problem.

My PC is running a managed Linux distribution, CentOS.

Does anyone know how to prevent the X server from starting without root access?

Many thanks

Bill

There are some older GeForce models by NVIDIA that run in a PCI slot (not PCIe), e.g. the passively cooled GeForce 8200. I have one such card running as the primary display card, and an NVIDIA GT 240 in the PCIe slot for CUDA development.

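With a dual-GPU setup like that, you can verify at run time which device the watchdog applies to: the `kernelExecTimeoutEnabled` field of `cudaDeviceProp` reports it. A minimal sketch:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Pick the first CUDA device with no display watchdog attached.
int pickComputeDevice()
{
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int dev = 0; dev < n; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device %d (%s): watchdog %s\n", dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
        if (!prop.kernelExecTimeoutEnabled)
            return dev;  // safe for long-running kernels
    }
    return -1;  // every device is driving a display
}
```

Calling `cudaSetDevice()` with the returned index before any long-running launches keeps the display card free to drive X.
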
Umm, that’s a tough one. If you have physical access to the machine, reboot it, press the key for “interactive startup”, and tell it to start all services except xdm or gdm. If this is a remote-access machine, send a note to the administrator and ask, “Why in the world is this terminal server running an X server on the compute GPU?!”

If you have physical access, it is also normally possible to shut down X11 from the xdm/gdm/kdm greeter screen without a root password.

I have a similar problem. I wrote my code locally on my Windows machine with a dedicated CUDA graphics card and it works, but for my project I need to run it remotely on a Linux machine where I only have terminal access, and I get the timeout problem there too…