Yes you can launch rays from multiple cameras in parallel using OptiX. There are a couple of completely different high-level approaches to how you might do this. The summary is that you have control over how your launches parallelize, and what your launches do. For example, overlapping launches is pretty easy, and there’s also no rule that a single launch can’t render from multiple cameras.
If you simply want several different points of view of the same moment in time, where each image is rendered from a separate camera, then probably the easiest way to handle it would be to have a separate launch for each camera, and submit the launches asynchronously so they can parallelize. To do this, you can use CUDA streams, and put each launch on its own CUDA stream. Be aware that with OptiX, you’ll need to create a separate pipeline for each stream. Other than that, it’s fairly straightforward to put multiple launches in a queue and let them overlap as they execute.
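A sketch of what that looks like on the host side (this assumes an already initialized OptiX 7+ context, scene, and shader binding tables; the variable names like `sbts`, `params`, and `NUM_CAMERAS` are illustrative, not a complete program):

```cpp
// One stream, one pipeline, and one params buffer per camera.
CUstream      streams[NUM_CAMERAS];
OptixPipeline pipelines[NUM_CAMERAS];   // separate pipeline per stream
CUdeviceptr   params[NUM_CAMERAS];      // per-launch params (camera pose, output buffer)

for (int i = 0; i < NUM_CAMERAS; ++i)
    cudaStreamCreate(&streams[i]);

// Enqueue all launches without synchronizing in between; the driver is
// then free to overlap their execution.
for (int i = 0; i < NUM_CAMERAS; ++i)
    optixLaunch(pipelines[i], streams[i],
                params[i], sizeof(LaunchParams),
                &sbts[i], width, height, /*depth=*/1);

// Sync once at the end, after everything is queued.
cudaDeviceSynchronize();
```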
Note especially that you can mix async launches with async I/O, so if you want to copy data to or from the GPU for the previous or next launch while the current frame is rendering, that can often be as important to performance as overlapping the compute work.
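Continuing the sketch above, the readback can go on the same per-camera stream as its launch, so each copy waits only on its own frame rather than on all of them (again, `hostImages`/`deviceImages` are illustrative names):

```cpp
// Queue the device-to-host copy of each finished frame on the stream that
// rendered it; the copy for camera i only waits on camera i's launch.
// (hostImages[i] should be pinned memory for the copy to truly overlap.)
for (int i = 0; i < NUM_CAMERAS; ++i)
    cudaMemcpyAsync(hostImages[i], deviceImages[i], imageBytes,
                    cudaMemcpyDeviceToHost, streams[i]);
cudaDeviceSynchronize();
```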
If instead you want a single launch to render from multiple cameras, you get to decide what a “pixel” means in your raygen program. So you could have several cameras where each camera renders into its own sub-rectangle of your output buffer, or the images could be interleaved if you think there is some cache coherence to be gained from tracing rays from the same camera (u,v) coordinate together. I get the feeling that the first method with CUDA streams is more like what you’re asking for, but I’m happy to elaborate on this indexing alternative if it didn’t make sense.