Interactive Supercomputing with In-Situ Visualization on Tesla GPUs

Originally published at: https://developer.nvidia.com/blog/interactive-supercomputing-in-situ-visualization-tesla-gpus/

So, you just got access to the latest supercomputer with thousands of GPUs. Obviously this is going to help you a lot with accelerating your scientific calculations, but how are you going to analyze, reduce, and visualize this data? Historically, you would be forced to write everything out to disk, just to later read it…

What are the drawbacks of this approach?

The simple answer is: there are no drawbacks.

The details depend a bit on what you mean by "this approach":

Enabling rendering on Tesla GPUs has no noticeable cost in terms of power consumption, compute performance, and so on. In fact, on post-K20 GPUs (K40, K80, …) rendering is enabled by default. So no drawback there.

Context management via an X server does require an extra process on your system. In most cases this is a non-issue, but some HPC centers are hesitant to enable it for various reasons. If that's the case, we'd like to hear about it and help address it. Also, our latest drivers support OpenGL context management via EGL, which makes the X server largely unnecessary (see https://devblogs.nvidia.com...

And as pointed out in the article, you will need a remoting solution to get the rendered frames off the HPC system. Again, this shouldn't be a real issue, as most HPC centers already have remote visualization software set up for their users. All that's needed is to enable this solution on the actual HPC system.

Finally, in-situ visualization means that some cycles are spent on visualization rather than on your simulation. The exact cost obviously depends on your use case, but it is often offset by the time saved avoiding large amounts of disk I/O or dedicated post-processing runs.