execution ID

Hi everyone,

I’m quite new to CUDA programming and I have a question: is it possible to get an identifier for the current thread running on the device? Ideally this identifier would be unique across blocks (but it doesn’t need to be unique across devices) and would lie between 0 and the maximum number of threads that can run concurrently on the device.

The reason for this is the following:
I’m trying to port a sequential algorithm to CUDA. In this algorithm, I need to update counters, and I would like each thread to have its own counters. At the end, I would just need to compute the sum of the counters and everything would be fine. So if I have k counters, I would create a matrix containing k*nbthreads counters. The only problem is that I need a lot of counters (~60000) and the number of threads can also be quite large (much more than the maximum number of threads allowed on the device). This is why I was thinking that if I could map each thread to an identifier between 0 and the maximum number of threads, my matrix would be much smaller and could fit in the device’s memory.

Do you know how I can get such an identifier? Or do you have any hint that would avoid needing one?

Thanks a lot,

Benoit

Hello,

Within each block, every thread has a unique identifier threadIdx with 3 coordinates .x, .y, .z. Each block also has a unique identifier blockIdx with 3 coordinates .x, .y, .z.

If you launch a kernel with <<<Nblocks,tpbl>>>, then in the kernel you can obtain a unique number:

int idx = threadIdx.x + blockIdx.x*blockDim.x;

where blockDim.x is the number of threads per block (tpbl).
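
For example, a minimal kernel sketch using this index (the kernel name scale, and the names data and n, are just illustrative):

__global__ void scale(float *data, int n)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < n)               // guard against the final partial block
        data[idx] *= 2.0f;
}

// launched as: scale<<<Nblocks, tpbl>>>(d_data, n);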

Thanks for the answer. Unfortunately, this solution doesn’t match my needs. idx can range from 0 to nbblock*blocksize, which is a bigger range than 0 to the maximum number of threads that can run concurrently on a device.

What I would like is more like an identifier of the ALU running the thread. That way, I wouldn’t waste too much memory, keeping only one copy of my counters per ALU. Do you know of anything like that?

Benoît

Hello,

Why not just make the counters reside in registers (or local memory) or shared memory?

60,000 counter values will fit on the device easily.

The typical way to sum one counter per thread is to perform a first pass reduction in each block and write out n_blocks totals. Then run a second kernel to complete the final sum reduction.
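
For reference, here is a sketch of what the first pass could look like, assuming one int counter per thread and a power-of-two block size (block_sum, counters and block_totals are illustrative names):

__global__ void block_sum(const int *counters, int *block_totals, int n)
{
    extern __shared__ int sdata[];                   // one slot per thread
    int tid = threadIdx.x;
    int idx = threadIdx.x + blockIdx.x * blockDim.x;

    sdata[tid] = (idx < n) ? counters[idx] : 0;
    __syncthreads();

    // tree reduction in shared memory
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        block_totals[blockIdx.x] = sdata[0];         // one total per block
}

// first pass:  block_sum<<<n_blocks, tpbl, tpbl * sizeof(int)>>>(d_counters, d_totals, n);
// second pass: run block_sum again on d_totals (or sum the n_blocks values on the host)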

Thanks for the answers!

That’s an approach I could look into, thanks. But will 1.2 MB fit into the memory available to each thread?

If there were only 60k counters, it wouldn’t be a problem, I agree. The problem is that without any optimization, I would need 60k counters for each of my 10 million threads. That means a lot of memory…

Benoit

The PTX manual (http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/ptx_isa_2.3.pdf) defines the following special registers:

%laneid  - thread ID in warp      (0-31)

%warpid  - unique ID per SM       (0 to %nwarpid-1, Fermi => 0-47)

%nwarpid - maximum warps per SM   (Fermi => 48)

%smid    - unique ID per device   (0 to %nsmid-1, Fermi => 0-15)

On GF100 you can have at most 48 warps/SM and 16 SMs.

unique_warpid = (%smid * %nwarpid) + %warpid

These special registers can be queried using inline PTX (see http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/Using_Inline_PTX_Assembly_In_CUDA.pdf)

I wasn’t able to test this function, but for SM 2.x I think it would be:

__device__ __inline__ unsigned int __unique_threadid()
{
    unsigned int laneid;   // thread ID within the warp (0-31)
    unsigned int warpid;   // warp slot within the SM
    unsigned int nwarpid;  // maximum number of warp slots per SM
    unsigned int smid;     // SM ID within the device

    asm volatile("mov.u32 %0, %%laneid;"  : "=r"(laneid));
    asm volatile("mov.u32 %0, %%warpid;"  : "=r"(warpid));
    asm volatile("mov.u32 %0, %%nwarpid;" : "=r"(nwarpid));
    asm volatile("mov.u32 %0, %%smid;"    : "=r"(smid));

    return (smid * nwarpid * 32) + (warpid * 32) + laneid;
}

On GF100 the maximum range would be 0 - 24575.
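
Assuming the function behaves as hoped, indexing the counter matrix from your original question might look like this sketch (count_kernel is an illustrative name; k and counters follow your naming). Note that the PTX manual says %warpid is volatile and reflects the thread’s location only at the moment it is read:

__global__ void count_kernel(unsigned int *counters, int k)
{
    // each hardware slot owns a private row of k counters; two threads
    // mapping to the same slot never run at the same time, so no atomics
    // are needed within a row
    unsigned int slot = __unique_threadid();            // 0 .. max_slots-1
    unsigned int *my_counters = counters + (size_t)slot * k;

    my_counters[0] += 1;   // placeholder update; the real algorithm picks the indices
}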

Just a suggestion: try dividing your work into 10 parts, for example. That way you divide your memory needs as well.

Ah, I see now. Your original post didn’t make it clear that there were to be 60k counters per thread. That is indeed far too much for either registers or shared memory.

Does every thread contribute to all 60k counters? Or does each hit only a few scattered counters? For the scattered writes, you might actually get decent performance by storing only one instance of the counters in device memory and then using atomicAdd from the threads. Atomics are fast on Fermi and even faster on Kepler. If you have too many collisions for that to be a viable solution, you could store one set of counters per block in shared memory; threads in that block would use shared-memory atomics to update the counters. Unfortunately, 60k counters will not fit in 48 KB of shared memory, so you will need to run multiple passes to collect all the results.
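
For illustration, the single-instance variant with global atomics can be as simple as this histogram-style sketch (histogram, data and bins are illustrative names; data[idx] is assumed to be a valid counter index):

__global__ void histogram(const int *data, unsigned int *bins, int n)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < n)
        atomicAdd(&bins[data[idx]], 1u);   // hardware read-modify-write, no mutex needed
}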

Thanks again for all the responses!

This does exactly what I wanted. I just needed to pass -arch compute_20 to nvcc and it worked without any problem. On my card (Quadro 1000M) the range is 0 - 3071. (Note: 3071 = (number of multiprocessors * max threads per multiprocessor) - 1.)

That is a great suggestion. I’m planning to divide the work, but I wanted a “simple” version to start with; it will be easier to explain to the rest of the team. Once the team validates my work, I will start on optimizations like this one, as well as the ones described in the Best Practices Guide.

Sorry if my explanation wasn’t clear enough. It’s not easy to be very clear :-)

Not every thread will contribute to every counter. I had never heard of atomicAdd before. Do you know how it works internally? Is there some kind of CUDA mutex? I will make some measurements, then implement both versions and see which is faster. I will look into this function, thanks for the tip!

Benoit