Real-time computing with CUDA

I’m developing software with Visual Studio for acquiring and analyzing data coming from a multichannel device.
I would like to ask the following in order to check whether an NVIDIA product can offer me a solution.
I should state up front that I know virtually nothing about this system.

During real-time data acquisition the software has to acquire data from an input board and, at the same time, store and visualize it.
To perform visualization, the software should (1) take the data coming from all input channels during a certain time window,
(2) process the data to obtain one value per channel that can be displayed (namely, a value that can be expressed as a color),
(3) display the data through a custom image object, and (4) repeat the operation, keeping a constant period between displayed
values and thus generating a video by refreshing the produced images.

Software developed with Visual Studio for the Windows platform cannot perform well here, as the OS is not real-time.
Thread scheduling produces variable computing times for operations (2) and (3), so the output video is not smooth.

My question is: is there a solution using an NVIDIA product, and by programming it?
If so, how can I choose a product whose performance fits my needs and time constraints?
Moreover, is there a way to program the GPU inside the Visual Studio framework and, if not,
what is the best way to program it and to interface it with my software?

Please, if there is a solution, point me to documentation where I can get started
on resolving my issue.

I’m working on a similar project:

A. Camera data captured at 2 kHz

B0. Process the data

B1. Process the data more

B2. Process the data more more

C. Use the data

D. Repeat until stopped

I pipeline the processing, with B0, B1, and B2 handled by different systems, cores, or CPUs.

The rate is determined by the camera, 2 kHz, so no stage is allowed to take longer than 500 microseconds.

However, the delay from capturing the data at A to using it at C can be up to 1500 microseconds.

If your sample rate and video update rate are different, use double or triple buffering to smooth the video.

This is a hard real-time system, so “can’t take longer than X” means it can’t, period.

We use Wind River Linux with a hard real-time kernel.

Thanks for your kind reply and information.

We already use a circular buffer for acquiring data, and our goal is to reach 10–12 fps video.

(Our camera samples at 8 kHz, producing a data rate of about 60 MB/s.)

I would like to know whether it is possible to obtain smooth video using CUDA on a Windows platform,

as it would cost less time than migrating to a real-time OS.

Yes, but mreinig suggested that you use double buffering for the paint operation so that you would not be overwriting the image you are watching.

ok, I understand. thanks guys! :)