Hi,
I have a static memory region that is used for receiving data, and this data is then processed by the GPU. Is there a way to cudaMalloc this exact memory region, or is the only option to copy the data into previously allocated GPU memory?
simpson3
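One direction worth exploring (a minimal sketch, not a confirmed answer): cudaMalloc cannot be pointed at an existing host buffer, but `cudaHostRegister` can pin an already-allocated region after the fact, and with the `cudaHostRegisterMapped` flag the GPU can address it directly (zero-copy), so no separate device allocation or explicit copy is needed. The buffer name and size below are hypothetical stand-ins for the poster's receive area:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical static receive buffer standing in for the poster's memory region.
static char recv_buf[1 << 20];

int main() {
    // Pin the existing static buffer and map it into the device address space.
    cudaError_t err = cudaHostRegister(recv_buf, sizeof(recv_buf),
                                       cudaHostRegisterMapped);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaHostRegister failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Get the device-side pointer that kernels can dereference directly.
    void *d_ptr = nullptr;
    err = cudaHostGetDevicePointer(&d_ptr, recv_buf, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaHostGetDevicePointer failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    // ... launch kernels that read/write through d_ptr (access goes over
    // PCIe on each touch, so this pays off mainly for data read once) ...

    cudaHostUnregister(recv_buf);
    return 0;
}
```

One caveat to verify on your setup: zero-copy accesses are not cached on the device, so if the kernel reads the data repeatedly, registering the buffer and then doing an async `cudaMemcpy` into device memory may still be faster than mapping it.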