"Context Thread Management" and chess help for include CUDA in chess

"CUDA Context Thread Management" is an API sample present in the CUDA SDK.
Its description reads:
"-CUDA Context Thread Management- Simple program illustrating how to use the CUDA Context Management API. CUDA contexts can be created separately and attached independently to different threads."
My question is: "… and attached independently to different threads" - does that also apply to the threads of a chess engine? Could I include this API in the source code of an open source chess engine? In your opinion, could it be tried?
There is one thing I have not understood: is the code (CUDA Context Thread Management) in the SDK only a demonstration, there to test the API and show how it works, or is it a real, usable API?
Has anyone tried it?

I'm not sure if I'm correct: "thread" here means a CPU thread, not the massive number of GPU threads. And even if you have an open source multithreaded engine, you should consider how many GPU contexts you can create without losing performance; that may depend on how many SLI GPUs you have. This is only my guess, and there is a context-manager sample code in the SDK.
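To make that concrete, here is a minimal sketch (my own illustration, not the SDK sample itself) of what "attaching a context to a CPU thread" means with the CUDA Driver API: each worker thread creates its own context with cuCtxCreate, which becomes current for that thread, uses it, and then destroys it. The worker function and the number of threads are invented for the example.

/* Sketch only: one CUDA context per CPU thread, using the Driver API. */
#include <cuda.h>      /* CUDA Driver API: cuInit, cuCtxCreate, ... */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void check(CUresult r, const char *what)
{
    if (r != CUDA_SUCCESS) {
        fprintf(stderr, "%s failed (error %d)\n", what, (int)r);
        exit(1);
    }
}

/* One worker thread: creates its own context on device 0, uses it, destroys it. */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    CUdevice dev;
    CUcontext ctx;

    check(cuDeviceGet(&dev, 0), "cuDeviceGet");
    check(cuCtxCreate(&ctx, 0, dev), "cuCtxCreate");   /* context is now current for this thread */

    /* ... this is where the thread would allocate memory and launch kernels ... */
    printf("thread %d: context created and current\n", id);

    check(cuCtxDestroy(ctx), "cuCtxDestroy");
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = {0, 1};

    check(cuInit(0), "cuInit");   /* must be called once before any other Driver API call */

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Build with something like gcc plus -lcuda -lpthread (or nvcc). If I remember correctly, since CUDA 4.0 a single context can also be shared by several CPU threads, so creating one context per thread is a design choice, not a requirement.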

Ok, I have understood. This application serves "to see" the threads used by the GPU and the performance associated with them, and to test the benefits in a program by increasing or decreasing the threads executed by the GPU (or by the GPUs).

Thanks