About the number of CUDA cores in an SMSP being less or greater than the warp size (32)

As the CUDA C Programming Guide says, a warp has 32 threads. If the number of CUDA cores in one SMSP is less than or greater than 32, how does the warp scheduler schedule a warp?
(1) If the number of CUDA cores in one SMSP is 16, will a warp be divided into two groups of 16 threads to execute?
(2) If the number of CUDA cores in one SMSP is 64, will the warp scheduler dispatch two warps to execute at the same time?
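
For reference, the warp size and SM count involved here can be queried at run time. Below is a minimal host-side sketch, assuming the CUDA runtime API and device 0; note that cudaGetDeviceProperties does not report the number of CUDA cores per SM or per SMSP, so those counts have to come from the architecture whitepapers.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        cudaDeviceProp prop;
        cudaError_t err = cudaGetDeviceProperties(&prop, 0);   // query device 0
        if (err != cudaSuccess) {
            printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        // warpSize is reported as 32 on all current CUDA GPUs; the number of CUDA
        // cores per SM / per SMSP is not exposed here and comes from the whitepapers.
        printf("%s (compute capability %d.%d)\n", prop.name, prop.major, prop.minor);
        printf("warpSize            : %d\n", prop.warpSize);
        printf("multiProcessorCount : %d\n", prop.multiProcessorCount);
        return 0;
    }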

Yes

Do you have an example of that? The only way it would make sense is if there were:

  • multiple warp schedulers per SMSP, or
  • the warp scheduler can issue more than 32 threads/clk

AFAIK there is no such GPU that has an SM subdivision into two or more SMSPs with each SMSP having 64 CUDA FP32 cores, and also has either a warp scheduler with more than 32 threads/clk issue rate, or multiple warp schedulers per SMSP. So I have no answer. There is no such animal.

and also has either a warp scheduler with more than 32 threads/clk issue rate

Is that a hardware restriction?

The CUDA C Programming Guide says a warp includes 32 threads. I think the word ‘warp’ is a definition from the software point of view, but at the hardware layer there is a warp scheduler that has a maximum number of threads it can schedule in one clk. Am I right?
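
As a concrete illustration of that software view (a minimal sketch; the kernel name and launch configuration are hypothetical and chosen only for demonstration), the built-in warpSize variable and the 32-bit lane masks used by warp-level primitives both expose the 32-thread warp to device code:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void warp_demo()
    {
        // warpSize is a built-in device-side variable; it is 32 on current GPUs.
        int lane = threadIdx.x % warpSize;

        // Warp-level primitives take a 32-bit mask, one bit per lane, which is
        // another place the 32-thread warp shows up at the software level.
        unsigned mask  = __activemask();
        int      votes = __popc(__ballot_sync(mask, lane < 16)); // lanes 0..15 vote true

        if (lane == 0)
            printf("warpSize = %d, lanes voting true = %d\n", warpSize, votes);
    }

    int main()
    {
        warp_demo<<<1, 32>>>();   // launch exactly one warp
        cudaDeviceSynchronize();
        return 0;
    }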

Although it deals primarily with latency, you may find that Greg’s reply in this thread fills in some details.

You mentioned warp scheduler:

Based on my observation, the capability of the SMSP warp scheduler in terms of threads/clk is documented in various whitepapers; I don’t know whether it is a HW or SW restriction. For example, refer to the V100 whitepaper, p. 32, Fig. 5, or the GA102 whitepaper, p. 12, Fig. 3.
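
To make the arithmetic concrete (my own back-of-the-envelope reading of the V100 figure, which as I recall shows 16 FP32 units per SM partition next to a 32-thread/clk warp scheduler, and assuming each FP32 unit handles one thread’s operation per clock):

    32 threads per warp / 16 FP32 units per SMSP = 2 clk to issue one warp-wide FP32 instruction

In other words, the scheduler selects one warp, and that warp’s FP32 instruction is then fed to the 16 FP32 units over two clocks, which is how a warp can be wider than the number of cores that serve it.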

That is my understanding, based on the two examples I have already provided. I likely won’t be able to respond to further questions about your case (2):

(2) If the number of CUDA cores in one SMSP is 64, will the warp scheduler dispatch two warps to execute at the same time?

because as far as I know, that is an imaginary case. There is no current CUDA GPU that fits that description, therefore I have no information about it, and therefore I have no further comments.

I see, many thanks to you.