I have a little problem. Let me explain briefly, and sorry for my English:
I have an app (APP2) developed in CUDA that uses some data (stored in an array in global memory).
This data comes from external hardware, and I'm able to copy it into GPU memory using another app (APP1). Both programs work fine separately.
But the question is how to tell the other program where the data is located. Is there a way to do this? Is there some function that lets me read from a specific address of GPU memory? The data is refreshed all the time, and I have to manipulate it and show it on screen. So the rule would be something like:
APP1: capture the data and copy it to device memory (in an endless loop)
APP2: read the data from the same location in device memory and process it…
How can I write to and read from the same memory location? Do you know how to achieve this? Maybe by using one thread for each app and sharing a global variable?
I have no idea and it's driving me crazy :-P
With the CUDA runtime this is not possible: a device memory allocation is attached to the context/thread that created it, so another application cannot use that pointer. You can do it with pinned memory (which is CPU memory): APP1 writes to the pinned memory (a plain CPU memcpy), and APP2 uses a device pointer mapped to that pinned memory.
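The reply doesn't spell this out, but for two separate processes one way to realize the pinned-memory idea is to put the buffer in OS shared memory and have the CUDA side pin and map it. Below is a minimal sketch, assuming Linux, POSIX shared memory, and a CUDA version that has cudaHostRegister (4.0 or later); the segment name /extdata and the buffer size are made-up placeholders, and error checking and producer/consumer synchronization are omitted:

```cpp
// Sketch only: APP1 and APP2 are separate programs; each section below
// belongs in its own source file. The shm name and size are hypothetical.
#include <cuda_runtime.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/extdata"   /* hypothetical shared-memory segment name */
#define BUF_BYTES (1 << 20)   /* hypothetical buffer size: 1 MiB */

/* ---- APP1 (producer): an ordinary CPU process, no CUDA required ---- */
void producer_loop(const void *hw_data /* stands for the capture source */) {
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    ftruncate(fd, BUF_BYTES);
    void *buf = mmap(NULL, BUF_BYTES, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    for (;;)
        memcpy(buf, hw_data, BUF_BYTES);  /* refresh with new capture data */
}

/* ---- APP2 (consumer): the CUDA process ---- */
__global__ void process(const float *data) { /* manipulate data[...] */ }

void consumer_loop(void) {
    cudaSetDeviceFlags(cudaDeviceMapHost);   /* before context creation */

    int fd = shm_open(SHM_NAME, O_RDWR, 0666);
    void *buf = mmap(NULL, BUF_BYTES, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);     /* mmap is page-aligned */

    /* Pin the shared pages and map them into the GPU address space. */
    cudaHostRegister(buf, BUF_BYTES, cudaHostRegisterMapped);
    float *dbuf;
    cudaHostGetDevicePointer((void **)&dbuf, buf, 0);

    for (;;) {
        process<<<256, 256>>>(dbuf);  /* kernel reads host RAM over the bus */
        cudaDeviceSynchronize();
    }
}
```

Note that the kernel reads the pinned pages across the PCIe bus on every access, so if APP2 touches the data many times per frame it may be faster to cudaMemcpy from the mapped buffer into a real device array first.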
Thank you for your fast answer. So what you mean is that I have to keep the data in CPU memory and then transfer it to the device each time? And what is the way to make the two APPs run concurrently? A main program that launches one CPU thread for each APP? Sorry if this is a bit of a silly question :-S
Could you clarify this for me with an example, please?
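On the two-threads idea: if both tasks can live in one process, the sharing problem largely disappears, because a single CUDA context sees both the pinned buffer and the kernels. Here is a sketch of that variant, again with a made-up buffer size, a placeholder capture function standing in for APP1's hardware code, and producer/consumer synchronization (e.g. double buffering) left out:

```cpp
// Sketch only: one program, one CPU thread per task, one mapped pinned
// buffer shared between them. capture_from_hardware is a stand-in.
#include <cuda_runtime.h>
#include <atomic>
#include <thread>

const size_t N = 1 << 18;   // hypothetical element count

__global__ void process(const float *data) { /* manipulate data[...] */ }

// Placeholder for whatever APP1 does to fetch a frame from the hardware.
void capture_from_hardware(float *dst, size_t n) {
    for (size_t i = 0; i < n; ++i) dst[i] = (float)i;
}

int main() {
    cudaSetDeviceFlags(cudaDeviceMapHost);      // before context creation

    float *hbuf, *dbuf;
    cudaHostAlloc((void **)&hbuf, N * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&dbuf, hbuf, 0);

    std::atomic<bool> running(true);

    // "APP1" becomes the producer thread, refreshing the pinned buffer.
    std::thread producer([&] {
        while (running) capture_from_hardware(hbuf, N);
    });

    // "APP2" becomes the consumer loop: the kernel reads the same buffer
    // through its mapped device pointer.
    for (int frame = 0; frame < 1000; ++frame) {
        process<<<256, 256>>>(dbuf);
        cudaDeviceSynchronize();
    }

    running = false;
    producer.join();
    cudaFreeHost(hbuf);
    return 0;
}
```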