I just discovered GPGPU computing, and I have a project in mind:
I need to process many streams of scientific data in parallel, and really fast.
For each stream I need to read and process chunks of 100 bytes every 10 ms. During processing I have to compute some values, keep those values in memory, use them on the next chunk of data, and return the processed chunk. From time to time I will also have to send events.
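To make that concrete, here is a minimal CPU-side sketch of the stateful per-chunk loop I have in mind. Everything here is hypothetical (the class name, the toy computation, the event threshold); on the GPU this would presumably map to one kernel launch per batch of incoming chunks, with the per-stream state kept in device memory:

```python
# Hypothetical sketch of one stream's stateful processing.
# A ~100-byte chunk arrives every 10 ms; the processor carries state
# from chunk to chunk and occasionally emits an event.

class StreamProcessor:
    def __init__(self, threshold=1000):
        self.state = 0              # carried over between chunks
        self.threshold = threshold  # hypothetical event trigger

    def process_chunk(self, chunk: bytes):
        """Return the processed chunk plus an optional event."""
        # Toy computation: offset each byte by the running state.
        out = bytes((b + self.state) % 256 for b in chunk)
        self.state = (self.state + sum(chunk)) % 256
        event = "threshold" if sum(chunk) > self.threshold else None
        return out, event
```

The real per-chunk computation would be one of the 20 algorithms mentioned below; this only shows the shape of the state handling.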
For each chunk I will use a different algorithm (one of 20), or the same algorithm in a different state.
In another situation I have to take 4 to 64 chunks of data and use them all in a single computation that returns one stream (a sum). All the processing is based on scientific data produced by several machines.
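A sketch of what I mean by that many-to-one reduction, again with hypothetical names and a byte-wise sum as a stand-in for the real computation. On the GPU each output element could be computed by one thread summing across the N input chunks:

```python
# Hypothetical sketch: combine N chunks (4..64) into a single output
# chunk by element-wise summation, modulo 256 to stay in byte range.

def sum_chunks(chunks):
    """Element-wise sum of equally sized byte chunks."""
    n = len(chunks)
    assert 4 <= n <= 64, "expected between 4 and 64 chunks"
    length = len(chunks[0])
    assert all(len(c) == length for c in chunks)
    # zip(*chunks) walks the chunks column by column.
    return bytes(sum(col) % 256 for col in zip(*chunks))
```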
The incoming data flow would be 500 streams × 100 KB per second, approximately 50 MB/s arriving in small chunks every 10 ms. Is the GPU able to receive and return 50 MB over a second when all the data is chunked into small pieces of around 100, 150, 200, 250, 300, or 350 bytes?
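The back-of-envelope arithmetic behind those figures:

```python
# Bandwidth figures from the question, spelled out.
streams = 500
kb_per_stream_per_s = 100

total_kb_per_s = streams * kb_per_stream_per_s  # 50,000 KB/s
total_mb_per_s = total_kb_per_s / 1024          # ~48.8 MB/s ("50 megs")

# With a 10 ms tick, each batch carries 1% of a second's data.
per_batch_kb = total_kb_per_s * 0.01            # ~500 KB per 10 ms batch
```

So the question is really whether ~500 KB can be transferred in, processed, and transferred back out every 10 ms, given that it arrives as 500 separate small chunks rather than one contiguous buffer.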
Data processing involves FFTs, sums, and multiplications. I'm thinking of a pilot project on a GeForce GTS 250. Based on my understanding I should be able to make the project run. Is there any reason not to do it?