Using VRWorks 360 Video SDK 2.1 to combine two camera feeds into a stereo stream

I have installed VRWorks and got the samples working, including the NvStitch sample.
We currently don't need to stitch 8 images or videos together; we just want to use each image as it is.

Are there any examples of how to simply combine two camera feeds into one stereo output?
We would probably also want to calibrate the images, since we might use fisheye lenses.

Any info or experiences on live streaming would also be great.
We use two Blackmagic Studio Micro 4K cameras that feed their raw video into a Blackmagic DeckLink 4K card. The video input is then processed on a GPU in the PC, and this is where we want to use the SDK.

Thanks

Just to verify: you would like to take the images from the two cameras, dewarp them in the case that the lenses are fisheye, and compose them into left/right or top/bottom stereo for streaming to an HMD? The Warp 360 SDK that comes as part of the VRWorks 360 SDK can be used to convert fisheye projection images into perspective projection. The left/right or top/bottom composition can be done in CUDA.
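
For the composition step, something along these lines is sufficient. This is a minimal CUDA sketch (not part of the VRWorks API), assuming both dewarped eye images are tightly packed RGBA8 frames of equal size, already resident in device memory:

```cpp
// Minimal sketch: compose two RGBA8 camera frames into one
// side-by-side (left/right) stereo frame with a CUDA kernel.
#include <cuda_runtime.h>

__global__ void composeSideBySide(const uchar4* left, const uchar4* right,
                                  uchar4* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Output frame is 2*width wide: left eye in columns [0, width),
    // right eye in columns [width, 2*width).
    out[y * 2 * width + x]         = left[y * width + x];
    out[y * 2 * width + width + x] = right[y * width + x];
}

void composeFrames(const uchar4* dLeft, const uchar4* dRight, uchar4* dOut,
                   int width, int height)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    composeSideBySide<<<grid, block>>>(dLeft, dRight, dOut, width, height);
    cudaDeviceSynchronize();
}
```

The same kernel becomes top/bottom by writing the second image at a row offset of height instead of a column offset of width.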

For the live streaming: after capturing and composing the two camera images into left/right or top/bottom, do you want to encode them for streaming? The Blackmagic Design DeckLink SDK has an example called LoopThroughWithOpenGLCompositing that can be modified to handle two cameras and capture into CUDA buffers. Encoding to H.264/HEVC can be done, if required, with the NVIDIA Video Codec SDK.
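
For the encode stage, the NvEncoderCuda helper class that ships with the Video Codec SDK samples keeps the code short. The sketch below is an outline only: the helper's exact signatures vary between SDK releases, and the codec/preset choices are just examples.

```cpp
// Sketch only: encoding the composed left/right frame with the NvEncoderCuda
// helper class from the NVIDIA Video Codec SDK samples. Treat as an outline,
// not drop-in code.
#include "NvEncoderCuda.h"
#include <memory>
#include <vector>
#include <cstdint>

std::unique_ptr<NvEncoderCuda> createEncoder(CUcontext ctx, int width, int height)
{
    auto enc = std::make_unique<NvEncoderCuda>(ctx, width, height,
                                               NV_ENC_BUFFER_FORMAT_ABGR);
    NV_ENC_INITIALIZE_PARAMS init = { NV_ENC_INITIALIZE_PARAMS_VER };
    NV_ENC_CONFIG cfg = { NV_ENC_CONFIG_VER };
    init.encodeConfig = &cfg;
    enc->CreateDefaultEncoderParams(&init, NV_ENC_CODEC_H264_GUID,
                                    NV_ENC_PRESET_LOW_LATENCY_HQ_GUID);
    enc->CreateEncoder(&init);
    return enc;
}

// Called once per composed frame already resident in device memory.
void encodeFrame(NvEncoderCuda& enc, CUcontext ctx, CUdeviceptr dFrame,
                 int width, int height,
                 std::vector<std::vector<uint8_t>>& packets)
{
    const NvEncInputFrame* in = enc.GetNextInputFrame();
    NvEncoderCuda::CopyToDeviceFrame(ctx, (void*)dFrame,
                                     width * 4,               // RGBA source pitch
                                     (CUdeviceptr)in->inputPtr, in->pitch,
                                     width, height,
                                     CU_MEMORYTYPE_DEVICE, in->bufferFormat,
                                     in->chromaOffsets, in->numChromaPlanes);
    enc.EncodeFrame(packets);  // packets are ready for the streaming layer
}
```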

Thanks for your reply.
But what about stitching together two stereo cameras? The cameras are separated from each other by an IPD distance, but they need to be accurately aligned to give the best effect. I read that the VRWorks API can do this.
However, I can't find an example of this for a rig with just two cameras separated by a set distance for stereo 3D recording. I don't need stitching within one view, as for 360 videos. Can this be done with the VRWorks SDK? Can you provide some sample code for this?
In the marketing text about VRWorks it is stated that VRWorks uses motion detection:
“To accomplish this, NVIDIA has developed a new set of motion based algorithms for superior-quality stereo stitching, that are optimized for real time processing. Using consecutive frames, the algorithms estimate the motion of objects in a video stream, noting how they match and move across a seam while accounting for stereo disparity.” from https://developer.nvidia.com/vrworks/vrworks-360video

Where can I find examples of this, and does it work even though I only have one camera input per eye?

I would also need to calibrate the images for brightness, etc., to make them match.
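
For the brightness calibration, would something as simple as scaling one eye by the ratio of the two frames' mean luminance be a reasonable approach? This rough sketch (my own code, not VRWorks) is the kind of thing I have in mind:

```cpp
// Rough sketch: apply a per-frame gain to one eye's image so its mean
// luminance matches the other eye. The gain (leftMean / rightMean) would be
// computed on the host, e.g. from a downsampled copy or a previous reduction.
#include <cuda_runtime.h>

__global__ void applyGain(uchar4* img, int width, int height, float gain)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    uchar4 p = img[y * width + x];
    p.x = (unsigned char)fminf(255.0f, p.x * gain);
    p.y = (unsigned char)fminf(255.0f, p.y * gain);
    p.z = (unsigned char)fminf(255.0f, p.z * gain);
    img[y * width + x] = p;
}
```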

Thanks a lot for help in this matter.

With regards Kasper

I have a related, follow-up question.

I have a single .mp4 video feed from a single fisheye camera lens. I would like to dewarp the video stream, i.e. convert the fisheye projection images into perspective projection.

Could you help point me in the proper direction? I am running the VRWorks 360 SDK.
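
To make it concrete, this is the kind of remap I am after, sketched as a CUDA kernel under the assumption of an equidistant fisheye model (r = f * theta). I assume the Warp 360 API does this properly with real lens calibration; this is only to illustrate the question:

```cpp
// Sketch (not the Warp 360 API): dewarp an equidistant fisheye image into a
// perspective projection. Assumes 8-bit RGBA frames; the fisheye focal length
// fFish and principal point (cxF, cyF) would come from lens calibration.
#include <cuda_runtime.h>
#include <math.h>

__global__ void fisheyeToPerspective(const uchar4* src, int srcW, int srcH,
                                     uchar4* dst, int dstW, int dstH,
                                     float fFish, float cxF, float cyF,
                                     float fPersp)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= dstW || v >= dstH) return;

    // Ray through the output (perspective) pixel, camera looking down +z.
    float x = (u - 0.5f * dstW) / fPersp;
    float y = (v - 0.5f * dstH) / fPersp;
    float theta = atanf(sqrtf(x * x + y * y));   // angle from optical axis
    float phi   = atan2f(y, x);

    // Equidistant fisheye model: radial distance is proportional to theta.
    float r  = fFish * theta;
    float us = cxF + r * cosf(phi);
    float vs = cyF + r * sinf(phi);

    uchar4 out = make_uchar4(0, 0, 0, 255);      // black outside the source
    int iu = (int)(us + 0.5f);
    int iv = (int)(vs + 0.5f);
    if (iu >= 0 && iu < srcW && iv >= 0 && iv < srcH)
        out = src[iv * srcW + iu];
    dst[v * dstW + u] = out;
}
```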

Thanks,
S

I have a question about the fisheye effect. I am trying to stream video from 2 cameras to 2 displays in order to make an enhanced reality headset. I need to correct for the lens distortion by graduating the fisheye effect, just like the HTC Vive does. Can you point me to code that increases and decreases the fisheye effect at the user's will?
[Attached images: Fish eye 2, Fish eye 1]
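
Roughly, what I am imagining is a remap with a user-controlled distortion strength, something like the sketch below (my own illustration, not from any SDK):

```cpp
// Rough sketch: a remap with a user-adjustable radial distortion strength k.
// k > 0 adds a fisheye-like effect, k < 0 reduces it, k = 0 leaves the image
// untouched.
#include <cuda_runtime.h>

__global__ void adjustableDistortion(const uchar4* src, uchar4* dst,
                                     int width, int height, float k)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Normalized coordinates relative to the image center.
    float nx = (2.0f * x - width)  / width;
    float ny = (2.0f * y - height) / height;
    float r2 = nx * nx + ny * ny;

    // Simple radial model: pull samples outward/inward depending on k.
    float scale = 1.0f + k * r2;
    int sx = (int)((nx * scale * 0.5f + 0.5f) * width  + 0.5f);
    int sy = (int)((ny * scale * 0.5f + 0.5f) * height + 0.5f);

    uchar4 out = make_uchar4(0, 0, 0, 255);
    if (sx >= 0 && sx < width && sy >= 0 && sy < height)
        out = src[sy * width + sx];
    dst[y * width + x] = out;
}
```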

Hello, the latest version of the VRWorks 360 Video SDK for Linux and Windows cannot be downloaded at the moment.
Would you mind sending me a copy?
Our company is a VR camera manufacturer; we use the TX2 to stitch 360 videos for live streaming.
Email: spongelinyi@gmail.com
Thanks.

Hello, the VRWorks 360 SDK 2.1 for Windows cannot be downloaded at the moment.
Would you mind sending me the download link?
Thanks.