Data storage in memory of the Tesla V100 GPU

I need to upload a data set to the memory of a Tesla V100 GPU and have it stay there so that other applications can access it, and furthermore so that records can be added and the data maintained without ever being deleted. Can this be achieved? That way my queries against millions of records would be very fast. From what I have read, once the application that uploaded the data ends, all of the information is erased; I need it to always remain there. …regards…

It's in CUDA

“Other applications” implies different processes, and allowing that would create a giant hole in the memory protection that is one of the cornerstones of IT security. GPU memory is handled just like CPU memory: it belongs to a particular process and nobody else gets to see it. Threads inside the same process can certainly share the data.
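To illustrate that last point, here is a minimal sketch of intra-process sharing: a device buffer allocated by one thread is usable by any other thread of the same process, because they share a single CUDA context. The buffer layout and values are just placeholders for illustration.

```cuda
// Sketch: threads of the SAME process share GPU allocations;
// another process could never dereference this device pointer.
#include <cuda_runtime.h>
#include <cstdio>
#include <thread>

int main() {
    int *d_buf = nullptr;
    cudaMalloc(&d_buf, 4 * sizeof(int));          // owned by this process
    int host[4] = {1, 2, 3, 4};
    cudaMemcpy(d_buf, host, sizeof(host), cudaMemcpyHostToDevice);

    // A second host thread in the same process reads the same device pointer.
    std::thread reader([d_buf] {
        int back[4] = {0};
        cudaMemcpy(back, d_buf, sizeof(back), cudaMemcpyDeviceToHost);
        std::printf("%d %d %d %d\n", back[0], back[1], back[2], back[3]);
    });
    reader.join();

    cudaFree(d_buf);  // allocation lifetime ends with this process
    return 0;
}
```

When the process exits (or crashes), the driver reclaims all of its device allocations, which is exactly why the data "disappears" between application runs.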

I am not sure what you mean by “queries to millions of information”. If you are referring to a GPU-based in-memory database, that has certainly been done and there are products on offer (although all the ones I have become aware of over the past six years or so were products from startup companies, so maybe these are more in the proof-of-concept category for now).

Somewhere I read that outside of niche applications, a useful in-memory database starts at 32 GB of memory, and GPUs providing that much on-board memory haven’t been available for very long.

Thank you very much. Do you have any pointers on where I can continue looking for what I need?

Your post does not clearly describe your use case. If you are looking for a GPU-based database, simply google “GPU database” and “GPU-accelerated database”.

Let me explain again: from the host, one application sends an array of 100 elements to the memory of the GPU; then, at some later time, another application needs to access that same memory area.

Currently I already do this in CPU RAM; I simply record an identifier that tells me in which part of memory the information is located.

The problem is that on the GPU, when the process that uploaded the array finishes, the information is deleted again. I need it to remain there.

Then you need a persistent CUDA-enabled daemon process that talks to both applications through a software interface (API):

- the process providing the data
- the process(es) accessing the data

In the case of an in-memory database this interface could be a SQL API.

Note that neither of these applications can have direct access to the GPU memory. The data has to be copied back to the host and passed through the API.
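A minimal sketch of the daemon side of this design might look as follows. All names here (`Record`, `store_init`, `store_append`, `store_query`) are hypothetical, and the transport between the daemon and the client applications (sockets, gRPC, a SQL front end, …) is deliberately omitted; the point is only that the device buffer lives as long as the daemon process does, so data survives across client requests.

```cuda
// Hypothetical daemon core: owns a GPU-resident record store that persists
// for the lifetime of the daemon process. Clients would reach these
// functions through whatever API/transport the daemon exposes.
#include <cuda_runtime.h>
#include <cstdint>
#include <cstdio>

struct Record { int32_t key; float value; };

static Record *d_records = nullptr;  // device buffer, lives with the daemon
static size_t  capacity  = 0;
static size_t  count     = 0;

// Example query kernel: count records matching a key.
__global__ void count_matches(const Record *recs, size_t n, int32_t key,
                              unsigned long long *hits) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n && recs[i].key == key)
        atomicAdd(hits, 1ULL);
}

// Called once at daemon start-up.
bool store_init(size_t max_records) {
    capacity = max_records;
    return cudaMalloc(&d_records, capacity * sizeof(Record)) == cudaSuccess;
}

// Append records received from a client (host -> device copy at the tail).
bool store_append(const Record *host_recs, size_t n) {
    if (count + n > capacity) return false;
    if (cudaMemcpy(d_records + count, host_recs, n * sizeof(Record),
                   cudaMemcpyHostToDevice) != cudaSuccess) return false;
    count += n;
    return true;
}

// Run a query on the device; the result goes back to the client via the host.
unsigned long long store_query(int32_t key) {
    unsigned long long *d_hits = nullptr, h_hits = 0;
    cudaMalloc(&d_hits, sizeof(*d_hits));
    cudaMemset(d_hits, 0, sizeof(*d_hits));
    const int threads = 256;
    const int blocks  = (int)((count + threads - 1) / threads);
    if (blocks > 0)
        count_matches<<<blocks, threads>>>(d_records, count, key, d_hits);
    cudaMemcpy(&h_hits, d_hits, sizeof(h_hits), cudaMemcpyDeviceToHost);
    cudaFree(d_hits);
    return h_hits;
}
```

The daemon's main loop would sit behind these functions, accepting append and query requests from clients. Because `d_records` is only freed when the daemon itself exits, the data set stays resident in GPU memory indefinitely, which is the behavior you are asking for.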

I see, thank you very much, I will do that.