Is it possible to set a limit on the number of CUDA cores used by a given program?

Assume I have an NVIDIA K40 and, for some reason, I want my code to use only a portion of the CUDA cores (e.g. instead of using all 2880, only use 400). Is that possible? Is it even logical to do this? In addition, is there any way to see how many cores the GPU is using when I run my code? In other words, can I check during execution how many cores are being used by the code, with a report like Task Manager in Windows or top in Linux?

“I want my code to use only a portion of the CUDA cores”

Do you have a good reason for this…?

“Is that possible?”

Yes: amend your kernel launch dimensions. Launch fewer blocks and/or fewer threads per block, so that only part of the GPU has work scheduled on it.
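
A minimal sketch of that idea, assuming a simple element-wise kernel (the name scaleKernel and the sizes below are purely illustrative): with a grid-stride loop, the same amount of work can be covered by a deliberately small launch, so only a few blocks, and therefore only a few SMs’ worth of cores, are active at any one time.

```
#include <cuda_runtime.h>

// Illustrative kernel: each thread scales elements via a grid-stride loop,
// so even a very small launch eventually covers all n elements.
__global__ void scaleKernel(float *data, int n, float factor)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // A "full" launch would use enough blocks to cover n in one pass.
    // To keep only a few hundred cores busy, launch just a few small blocks:
    // 4 blocks x 128 threads = 512 threads in flight, i.e. at most a few SMs.
    scaleKernel<<<4, 128>>>(d_data, n, 2.0f);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}
```

Note that the hardware, not the programmer, decides which SMs those blocks land on: you can cap roughly how much of the GPU is kept busy, but you cannot pin work to specific cores.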

“Is it even logical to do this?”

It depends on your problem; specifically, on the level of occupancy at which utilization/throughput would be optimal, I suppose.
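
If it helps to reason about that trade-off, the CUDA runtime (6.5 and later) can report how many blocks of a given kernel can be resident per multiprocessor via cudaOccupancyMaxActiveBlocksPerMultiprocessor. A minimal sketch, with the kernel and block size again purely illustrative:

```
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scaleKernel(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    const int blockSize = 128;
    int blocksPerSM = 0;
    // Theoretical upper bound on resident blocks of this kernel per SM.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, scaleKernel,
                                                  blockSize, 0);

    float occupancy = (float)(blocksPerSM * blockSize) /
                      (float)prop.maxThreadsPerMultiProcessor;
    printf("SMs: %d, resident blocks/SM: %d, theoretical occupancy: %.0f%%\n",
           prop.multiProcessorCount, blocksPerSM, occupancy * 100.0f);
    return 0;
}
```

Whether a low or a high occupancy gives the best throughput is very problem-dependent, which is why there is no general answer to “is it logical”.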

“is there any way to see how many cores the GPU is using when I run my code”

You can use the debugger (suspend execution to see which kernel blocks are currently running) and the profiler; the debugger would be the more real-time/‘immediate’ of the two.
But you still seem to be casting the GPU as a CPU: noting which cores are executing may be of interest on a CPU, but it is much less so on a GPU, where the hardware schedules blocks onto SMs for you.

Thanks :)