Hi,
I’m making my first simple project in CUDA, and now I need to display the result frames of the computation in a window.
Given that the application doesn’t involve 3D at all, and that the image is complete & ready to display, is there a simpler way to show frames in a window without using DX or OGL integration?
Calling a whole 3D API looks like overkill to me, because I only have to show an image that is already finished in GPU memory…
Well, it sort of depends on what type of framerate you’re looking at. If it’s something like 1-2 frames/sec, you could probably get the image back into your host code and then use some bitmap/form drawing code (MFC or .NET/GDI+, depending on what you’re using). Otherwise, you’re probably stuck with using DirectX, because at a higher framerate (even something modest, like 5 frames/sec) the form drawing code will be too slow to keep up with the rest of the program.
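To illustrate, here is a rough sketch of that “copy back to host, then draw” path using plain Win32 GDI (not your actual code: it assumes a 32-bit BGRA frame and an existing window HDC, and leaves out all error handling):

```cpp
#include <windows.h>
#include <cuda_runtime.h>

// Copy the finished frame from device memory and blit it into a window.
void blitFrame(HDC hdc, const void* d_frame, void* h_frame, int width, int height)
{
    // Pull the frame out of GPU memory into a host buffer.
    cudaMemcpy(h_frame, d_frame, size_t(width) * height * 4, cudaMemcpyDeviceToHost);

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    // Draw the host buffer straight into the window. Fine at a few fps,
    // but this device-to-host round trip is exactly what limits the framerate.
    StretchDIBits(hdc, 0, 0, width, height, 0, 0, width, height,
                  h_frame, &bmi, DIB_RGB_COLORS, SRCCOPY);
}
```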
Also, there’s no way (that I know of, anyhow) to just copy a computed image into the video buffer memory on the graphics card, so that’s out as well.
If you don’t need to see the results in real-time (i.e. you’re computing the frames and saving them for later playback), you could just save the uncompressed images while the computation is running, then invoke an encoder (e.g. Windows Media Encoder) to compress and playback the resulting frames at a later time.
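Something along these lines would do for the saving part (a minimal sketch, assuming 24-bit RGB frames already copied to host memory): each frame is dumped as an uncompressed PPM file that an encoder can pick up afterwards.

```cpp
#include <cstdio>

// Write one raw RGB frame to disk as frame_00000.ppm, frame_00001.ppm, ...
void saveFramePPM(const unsigned char* rgb, int width, int height, int frameIndex)
{
    char name[64];
    std::snprintf(name, sizeof(name), "frame_%05d.ppm", frameIndex);

    FILE* f = std::fopen(name, "wb");
    std::fprintf(f, "P6\n%d %d\n255\n", width, height);   // PPM header
    std::fwrite(rgb, 1, size_t(width) * height * 3, f);   // raw pixel data
    std::fclose(f);
}
```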
P.S. Check out the ‘box filter’ and ‘particle’ examples – they use CUDA to do some computations and OpenGL to display the results on the screen.
Well, I was looking for real-time performance, so I think I will need to use DX… I was already using FreeImage to load/save “screenshots”… anyway, it looks strange to me that I can’t display an image at all without using DX/OGL:
the image is in VRAM, already formatted to be shown… why can’t I just go full-screen and force that buffer to be the output?
The whole DX call + render quad + copy shader sequence looks a bit pointless to me, even if it’s not heavy at all…
Also, it “scrambles” VRAM because of how the graphics APIs manage it.
Maybe a feature like this will be implemented in future CUDA versions?
I think that for now I will plug my results into the “box filter” code, like he did with the webcam streaming. Thanks for the reply!
Again, it’s not that you can’t display an image without DirectX or OpenGL, it’s just that drawing the image manually (even in fullscreen mode) uses methods that are not really optimized for video/animations, so they would be too slow to do what you want. DirectX and OpenGL…that’s what they’re designed for.
It’s just that CUDA looks to be at the same level as, or even lower-level than, the other APIs, and it controls a device that was born to send images to the screen… so I don’t understand why things are this way.
The GL display code is twice as long as the actual program (and it’s many times more complicated) :rolleyes:
I understand your frustration with the complexity, but if you want to render directly from GPU memory, there are no APIs other than OpenGL or Direct3D that can do it.
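For reference, here is roughly what the interop path in those SDK samples boils down to (a sketch, not the samples’ actual code): the kernel writes into an OpenGL pixel buffer object, and glDrawPixels puts it on screen without any round trip through host memory. `myKernel` is a placeholder for your own computation, and the GLUT/GLEW window setup plus error checking are omitted.

```cpp
#include <GL/glew.h>
#include <cuda_gl_interop.h>

GLuint pbo = 0;
cudaGraphicsResource* pboRes = nullptr;

// Create a pixel buffer object and register it with CUDA once at startup.
void createPBO(int width, int height)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    cudaGraphicsGLRegisterBuffer(&pboRes, pbo, cudaGraphicsMapFlagsWriteDiscard);
}

// Per frame: let the CUDA kernel fill the PBO, then blit it to the window.
void display(int width, int height)
{
    uchar4* d_pixels = nullptr;
    size_t bytes = 0;
    cudaGraphicsMapResources(1, &pboRes, 0);
    cudaGraphicsResourceGetMappedPointer((void**)&d_pixels, &bytes, pboRes);
    // myKernel<<<grid, block>>>(d_pixels, width, height);   // your existing computation
    cudaGraphicsUnmapResources(1, &pboRes, 0);

    // Draw the PBO contents to the framebuffer -- the data never leaves the GPU.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```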