How many threads can reside in a CUDA core at the same time?

I am getting some conflicting answers to this question.

In the programming guide (V8.0), it says that “…all threads of a block are expected to reside on the same processor core and must share the limited memory resources of that core”, so it seems that a core can have more than one thread residing on it. However, the following post

http://yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-gpus.html

says that “…an NVIDIA GPU contains several largely independent processors called ‘Streaming Multiprocessors’ (SMs), each SM hosts several ‘cores’, and each ‘core’ runs a thread”.

Which of the two is correct? Or are they both correct and there is something wrong with my understanding?

Thank you!

They are both correct; the two statements just use the word “core” differently. The programming guide is using “processor core” to mean the SM, while the blog post uses “core” for the execution lanes inside an SM. All threads of a block reside on the same SM and share that SM’s resources (shared memory, registers, etc.). Many threads can be resident on an SM at the same time; at any given cycle, the SM’s cores execute instructions from a subset of those resident threads, so a single “CUDA core” is not dedicated to a single thread.
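To make the “threads of a block share the SM’s resources” point concrete, here is a minimal sketch (not from the programming guide, just an illustrative kernel): all 256 threads of each block are co-resident on one SM and cooperate through `__shared__` memory, which is physically part of that SM.

```cuda
#include <cstdio>

// Each block sums its 256-element slice of the input. The tile lives in
// the SM's shared memory, a resource shared by all threads of the block.
__global__ void blockSum(const float *in, float *out)
{
    __shared__ float tile[256];               // per-block, on-SM storage

    int tid = threadIdx.x;
    tile[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                          // only works because all block
                                              // threads reside on the same SM

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = tile[0];
}

int main()
{
    const int threads = 256, blocks = 4, n = threads * blocks;
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    blockSum<<<blocks, threads>>>(in, out);
    cudaDeviceSynchronize();

    for (int b = 0; b < blocks; ++b)
        printf("block %d sum = %g\n", b, out[b]);

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Note that each block here launches far more threads (256) than an SM has cores; the hardware keeps all of them resident and time-multiplexes them, warp by warp, onto the SM’s execution units.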

That clears the air. Thanks a lot!