GPU workload with large datasets

I’m looking for GPU workloads with datasets too large to fit into GPU memory. In that case the dataset has to stay on storage, and the host reads data from storage, sends it to the GPU in pieces, and copies results back. Please let me know if you have any information.


It is not clear what exactly you are asking. Are you looking for a use case which requires more memory than can fit in the on-board memory of current GPUs (perhaps to demonstrate the utility of a new memory managing technique, compression technique, or a custom GPU board design)?

One use case I am aware of is business analytics (e.g. QlikView), which these days typically uses in-memory databases to facilitate real-time generation of results. As I recall, the minimum useful size of such a database is about 10 GB, which exceeds the 6 GB of on-board memory on a single GPU.

Yes. Basically, I’m looking for workloads that use datasets larger than GPU memory and therefore need external/out-of-core implementations.
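The host-side pattern you describe (dataset on storage, streamed through a fixed-size buffer) can be sketched without any GPU at all. Below is a minimal Python/numpy sketch of the out-of-core loop, assuming a hypothetical chunk size standing in for the GPU buffer; in a real CUDA pipeline the per-chunk reduction would be a kernel launch fed via a pinned host buffer and `cudaMemcpyAsync`, ideally double-buffered across streams so transfer and compute overlap.

```python
import os
import tempfile
import numpy as np

# Hypothetical sizes for illustration: the dataset is larger than the
# per-transfer buffer that stands in for GPU memory.
CHUNK = 1024           # elements per transfer (stand-in for the GPU buffer)
N = 10 * CHUNK + 37    # total elements, deliberately not a multiple of CHUNK

# Create a storage-resident dataset (stand-in for the real on-disk data).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
np.arange(N, dtype=np.float64).tofile(path)

# Out-of-core pass: map the file and stream fixed-size slices through the
# loop. Each iteration mimics one host->device copy plus a kernel launch.
data = np.memmap(path, dtype=np.float64, mode="r")
total = 0.0
for start in range(0, N, CHUNK):
    chunk = data[start:start + CHUNK]  # host reads one slice from storage
    total += float(chunk.sum())        # stand-in for the GPU reduction

# Same result as if the whole dataset had fit in memory at once.
assert total == N * (N - 1) / 2
```

The key property is that peak resident memory is bounded by `CHUNK`, not `N`; with two buffers and two CUDA streams the same loop hides most of the PCIe transfer time behind kernel execution.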