Virtual camera via rig.json

Hi @SivaRamaKrishnaNV

Let's say we use the object detection pipeline example, which currently supports only one camera. We have, say, four cameras: two in group A and two in group C. We wish to stitch the images and feed them in as a single image. I am referring to the 6.0.8 nvsipl_camera fsync option for syncing cameras between the two groups, using camera_extra as a reference. Meanwhile, we have some video samples of how the stitched output should look, so we can keep evaluating the live camera feed against the recorded video. That was the intention. Please let me know if you foresee any challenges.
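To make the stitching step concrete, here is a minimal sketch of the 2x2 composite we have in mind. This is plain NumPy, not the DriveWorks API; the frame resolution and the `stitch_2x2` helper name are placeholders for illustration:

```python
import numpy as np

def stitch_2x2(frames):
    """Stitch four equally sized HxWx3 frames into one 2x2 composite image."""
    top = np.hstack(frames[:2])      # group A cameras side by side
    bottom = np.hstack(frames[2:])   # group C cameras side by side
    return np.vstack([top, bottom])

# Four dummy 1080x1920 RGB frames standing in for the synced captures.
frames = [np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(4)]
composite = stitch_2x2(frames)
print(composite.shape)  # (2160, 3840, 3)
```

The same composite could then be compared frame by frame against the recorded reference video.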

Would the rig file look something like the one below?
"sensors": [
  {
    "name": "camera:sample0",
    "nominalSensor2Rig_FLU": {
      "roll-pitch-yaw": [0.0, 0.0, 0.0],
      "t": [1.8621, -0.1939, 1.3165]
    },
    "parameter": "file=/path/to/file",
    "properties": {
      "Model": "ftheta",
      "bw-poly": "0.000000000000000 5.35356812179089e-4 4.99266072928606e-10 4.27370422037554e-12 -6.68245573791717e-16",
      "cx": "1927.764404",
      "cy": "1096.686646",
      "height": "2168",
      "width": "3848"
    },
    "protocol": "camera.virtual"
  }
]
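For the four-camera case, I assume the `sensors` array would simply list one `camera.virtual` entry per camera, something like the sketch below (sensor names and file paths are placeholders, and the per-camera extrinsics and intrinsics are omitted for brevity):

```json
"sensors": [
  { "name": "camera:sample0", "parameter": "file=/path/to/file0", "protocol": "camera.virtual" },
  { "name": "camera:sample1", "parameter": "file=/path/to/file1", "protocol": "camera.virtual" },
  { "name": "camera:sample2", "parameter": "file=/path/to/file2", "protocol": "camera.virtual" },
  { "name": "camera:sample3", "parameter": "file=/path/to/file3", "protocol": "camera.virtual" }
]
```

Please correct me if the rig schema expects these to be structured differently.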

Thanks!!
Jishnu