Is there any method to get the ID of the SP on which a thread runs?

Hello Forum,

Since the number of SPs is much smaller than the number of threads that can be resident on one SM, I’d like to figure out the relationship between threads and SPs.

One possible approach would be to read an SP ID the same way %smid and %warpid can be read via inline PTX (CUDA asm).

But to the best of my knowledge, there is no PTX special register or asm snippet for reading an SP ID.
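For example, %smid and %warpid can be read like this (a minimal sketch of the inline-PTX approach I mean; these special registers exist, but nothing similar seems to exist for an SP ID):

```
// Minimal sketch: reading %smid and %warpid through inline PTX.
// There is no analogous special register for an "SP ID".
#include <cstdio>

__global__ void whereAmI()
{
    unsigned int smid, warpid;
    asm volatile("mov.u32 %0, %%smid;"   : "=r"(smid));
    asm volatile("mov.u32 %0, %%warpid;" : "=r"(warpid));
    printf("block %u thread %u -> SM %u, warp slot %u\n",
           blockIdx.x, threadIdx.x, smid, warpid);
}

int main()
{
    whereAmI<<<2, 64>>>();
    cudaDeviceSynchronize();
    return 0;
}
```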

Are there any other methods to get the SP ID?

I’m not aware of any method to retrieve this info.

It’s hard to imagine how it could be useful from a CUDA programming perspective. An SP, in CUDA marketing speak, is a floating point unit that can handle a limited set of floating point instructions. All CUDA GPUs I am familiar with (excepting cc 2.1 devices, which are now obsolete) have a multiple of 32 of these in each SM. Therefore we could imagine that warp lane 0 always uses SP 0; or SP 0 or 32 if there are 64 SPs; or SP 0, 32, 64, or 96 if there are 128 SPs. I cannot imagine how knowing this would be useful for CUDA programming purposes.
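To make that concrete, here is a purely illustrative host-side sketch (the hardware does not expose any such mapping; the modulo-32 relationship is only inferred from SP counts being multiples of 32) of which SPs a given lane could be served by under such a fixed mapping:

```
// Purely illustrative, host-side sketch: under a hypothetical fixed mapping,
// lane k of a warp could only ever be served by SPs k, k+32, k+64, ...
// within its SM. The hardware does not expose any such mapping.
#include <cstdio>

int main()
{
    const int spsPerSM = 128;  // assumed example: an SM with 128 SP (FP32) units
    const int lane     = 5;    // arbitrary example lane

    printf("lane %d could map to SPs:", lane);
    for (int sp = lane; sp < spsPerSM; sp += 32)
        printf(" %d", sp);
    printf("\n");
    return 0;
}
// prints: lane 5 could map to SPs: 5 37 69 101
```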

Suppose that there are:

  1. two simple kernels, each with a single CUDA block of 33 threads (two warps)
  2. one SM with 128 SPs

Since there is one block per kernel, we can refer to each kernel as a block and call the two kernels Block0 and Block1, respectively.

If the two blocks (kernels) are running simultaneously, the first warp of Block0 may occupy SP[0:31], but what about the SP occupancy of the second warp of Block0?
Does the second warp of Block0 monopolize SP[32:63]? Or does the second warp of Block0 take up only SP32, with the first warp of Block1 occupying SP[33:64], and so forth?

The latter would be a bad idea.

Exactly. A warp always uses the same number of execution units per instruction, regardless of the number of active threads.

In the case of SP units, any floating point instruction, issued warp-wide, will require 32 SP units, regardless of the number of active threads.
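As a concrete illustration, here is a toy kernel of my own (the kernel name and the out buffer are just for the example): even if only one lane per warp is active, the floating point multiply is still issued warp-wide:

```
// Toy sketch (my own illustration): only lane 0 of each warp passes the
// branch, yet the multiply is still issued warp-wide and occupies 32 SP
// (FP32) units; results for the inactive lanes are simply masked off.
__global__ void oneActiveLane(float *out, float a, float b)
{
    if (threadIdx.x % 32 == 0)              // one active lane per warp
        out[threadIdx.x / 32] = a * b;      // still a warp-wide FP32 issue
}
```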

Does that mean there’s a fixed mapping between warp lanes and SPs?

I’m quite confident there is. I tried to indicate that in my earlier reply above.

I got it!
Thanks for your reply! It helps a lot!