I’m trying to figure out whether this is possible. I want several different executables producing data, which would then be processed by a single executable on the GPU (or one per GPU), and I’m looking into memory structures for this. Ideally I would have page-locked memory on the host (most likely zero-copy, which I haven’t tried yet) mapped into a shared virtual memory space, so that the several CPU processes can write to it and the GPU process can use it. Can this be done? Or is there another way to make page-locked memory available to other processes? If anyone has experience with this, advice would be appreciated.
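To make it concrete, here is a rough, untested sketch of the kind of setup I have in mind: the producer processes share a buffer through POSIX shared memory (shm_open/mmap), and the GPU process page-locks that same mapping with cudaHostRegister(cudaHostRegisterMapped) so it can be read as zero-copy memory. The segment name, the size, and the producer/consumer split are just placeholders for illustration.

```c
// Untested sketch: CPU producer processes write into a POSIX shared-memory
// segment; the GPU-side process maps the same segment and pins it with
// cudaHostRegister so a kernel can read it as zero-copy (mapped) memory.
// The segment name and size below are placeholders.
#include <cuda_runtime.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SEG_NAME "/producer_buf"   /* placeholder segment name */
#define SEG_SIZE (1 << 20)         /* 1 MiB, placeholder size  */

/* A CPU producer: create/open the segment and write data into it. */
static int producer(void)
{
    int fd = shm_open(SEG_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SEG_SIZE) != 0) { perror("ftruncate"); return 1; }

    char *buf = (char *)mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "data from a CPU producer");   /* stand-in for real output */

    munmap(buf, SEG_SIZE);
    close(fd);
    return 0;
}

/* GPU-side process: map the same segment, page-lock it, get a device pointer. */
static int consumer(void)
{
    /* Needed on pre-UVA setups so mapped (zero-copy) memory is allowed. */
    cudaSetDeviceFlags(cudaDeviceMapHost);

    int fd = shm_open(SEG_NAME, O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }

    char *buf = (char *)mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Page-lock the existing mapping and expose it to the device. */
    cudaError_t err = cudaHostRegister(buf, SEG_SIZE, cudaHostRegisterMapped);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaHostRegister: %s\n", cudaGetErrorString(err));
        return 1;
    }

    void *dev_ptr = NULL;
    cudaHostGetDevicePointer(&dev_ptr, buf, 0);
    /* ... launch a kernel that reads through dev_ptr ... */

    cudaHostUnregister(buf);
    munmap(buf, SEG_SIZE);
    close(fd);
    return 0;
}

int main(int argc, char **argv)
{
    /* Run with "produce" in the CPU processes, no argument in the GPU
       process (the argument is only for this sketch). */
    if (argc > 1 && strcmp(argv[1], "produce") == 0)
        return producer();
    return consumer();
}
```

I don’t know whether cudaHostRegister is actually happy with a MAP_SHARED mapping like this, which is really the heart of my question.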