The usual way to request extensions to CUDA (e.g. new functions or APIs) is to file a bug. If you think it is important, you can include a link to this forum post in the bug. The stronger the justification you give, the more likely the request is to receive priority. Requests for things that can already be accomplished another way may receive lower priority.