I’m trying to figure out whether this is possible. What I want is several different executables producing data, which would then be processed by a single executable on the GPU (or one per GPU). I’m looking into memory structures for this. Ideally I’d have page-locked memory on the host (most likely zero-copy, which I haven’t tried yet) that can be mapped into a shared virtual memory space, written to by the several CPU processes, and read by the GPU process. Can this be done? Or is there another way to make page-locked memory available to other processes? If anyone has experience with this, advice would be appreciated.