Hi, I’m new around here. I’ve been reading the forums, but I couldn’t find the answer to my question.
The team I’m working with right now is thinking about buying a video card for data processing, and a few questions were raised during the last meeting.
The project I’m working on involves a huge amount of data to process, and we need real-time feedback on that data. The processing itself is really simple, and most CUDA-enabled cards should be able to handle it, at least initially. My problem is the real-time feedback: it requires too many I/O interrupts, which lags the whole system.
So, if we consider the following scenario:
A video card with two HDMI or DVI outputs
Is it possible to stream data through one of these ports?
And if the card is SLI-enabled, is it possible to stream data through the SLI port?
Basically, I want the video card to perform all the calculations without the data ever passing through the motherboard: receiving the data through the SLI port from our proprietary hardware, sending the required feedback (pre-processed data) out through the HDMI port, and using the PCI slot only to send the fully processed data to the main program.
Yeah, I know, kind of a dream, but in theory it’s possible. The only thing I don’t know is what limitations the CUDA framework imposes.
The SLI port is inaccessible, and it’s very low bandwidth anyway. In theory you could render your output somehow and send it out over HDMI, but that seems messy too: how are you going to read it back?
The HDMI standard is very well documented, and the team includes a few engineers, so we could build dedicated hardware to receive the data. According to them it should be easy — well, a bit time-consuming, but it should give much better results.
You would need to either use DMA over HDMI (which I don’t think is possible, unless you encoded some other transport protocol on top of HDMI, and that would be even more horrendous), or you’d need a kernel constantly running/looping, reading data in from the HDMI port and asynchronously passing it back to the host.
If the second option isn’t possible (the CUDA team would just need to add something to the driver functions to let you read data from the HDMI port; the rest is pretty standard CUDA stuff)… what about using FireWire? I don’t think it’s common knowledge, but the FireWire spec requires that the host device (in this case, your custom equipment) be allowed DMA to the slave (e.g. your computer). I know that DMA to a video card has been tossed around as an idea here before, but it was never implemented because few people really needed it. Perhaps if that were implemented, you could set your equipment up to DMA to the video card over FireWire, and then just launch your processing kernel continuously (in a loop) to handle the data.
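Just to make the “kernel continuously running in a loop” idea concrete, here’s a rough sketch using page-locked, mapped (“zero-copy”) host memory. To be clear: nothing like an HDMI or FireWire input is actually exposed to CUDA today, so in this sketch the host itself plays the role of the external hardware filling the input buffer; the flag protocol and the `process()` function are made up purely for illustration.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

#define N 256  // one block of threads, one element each, to keep the sketch simple

__device__ int process(int x) { return x * 2; }  // stand-in for the real per-element work

// flag values: 0 = idle, 1 = new input ready, 2 = output ready, -1 = shut down
__global__ void persistentKernel(volatile int *flag,
                                 volatile int *in, volatile int *out)
{
    int tid = threadIdx.x;
    while (true) {
        // One thread spins until the "external hardware" signals new input.
        if (tid == 0) while (*flag == 0 || *flag == 2) { /* spin */ }
        __syncthreads();
        if (*flag == -1) break;              // host asked us to stop
        out[tid] = process(in[tid]);
        __threadfence_system();              // make the writes visible to the host
        __syncthreads();
        if (tid == 0) *flag = 2;             // signal: output ready
    }
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);   // enable mapped host memory

    int *flag, *in, *out;                    // host-side pointers (pinned, mapped)
    cudaHostAlloc(&flag, sizeof(int),     cudaHostAllocMapped);
    cudaHostAlloc(&in,   N * sizeof(int), cudaHostAllocMapped);
    cudaHostAlloc(&out,  N * sizeof(int), cudaHostAllocMapped);
    *flag = 0;

    int *dFlag, *dIn, *dOut;                 // device-side views of the same memory
    cudaHostGetDevicePointer(&dFlag, flag, 0);
    cudaHostGetDevicePointer(&dIn,   in,   0);
    cudaHostGetDevicePointer(&dOut,  out,  0);

    persistentKernel<<<1, N>>>(dFlag, dIn, dOut);

    for (int i = 0; i < N; ++i) in[i] = i;   // pretend this arrived from the port
    *(volatile int *)flag = 1;               // tell the kernel input is ready
    while (*(volatile int *)flag != 2) { }   // wait for the kernel's answer
    printf("out[10] = %d\n", out[10]);

    *(volatile int *)flag = -1;              // shut the kernel down
    cudaDeviceSynchronize();
    return 0;
}
```

The point of the sketch is that the kernel never exits between “frames” of data, so the only per-frame cost is the flag handshake, not a kernel launch. If the driver ever let an external port write into that mapped buffer directly, the host loop in the middle would disappear entirely.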
This is partly an aside, but the next generation of HDMI will support two-way gigabit Ethernet. That would be an intriguing way to set up multi-GPU intercommunication without having to route everything through the host. Even better, it opens up the possibility of sharing data between multiple computers, again without necessarily involving the host.