Virtual camera via rig.json

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
1.9.3.10904
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hi

I was wondering if it's possible to configure the rig.json to stream via camera.virtual, i.e. replay a video? If so, can you please share an example of what the parameters would look like? I am using camera_extra as an example.

The reason for asking: in the larger scope of things, when I want to sync, say, group A and group C cameras together, stitch them into one image, and send it to an object detection and tracking pipeline as given in the examples, I would like to switch between the actual cameras and pre-recorded video to evaluate performance.

If the above is not possible, would your best suggestion be to expand the available object detection example to read from rig.json for multi-camera support, since it currently supports only one camera?

Thanks!!
Jishnu

Dear @jishnuw,
Yes, it is possible to configure rig.json with two virtual cameras. I would like to know if the camera data is recorded simultaneously, for synchronization of frames across the two cameras?
You can set the protocol to camera.virtual and use file=/path/to/file as the parameter for each camera.
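A minimal two-virtual-camera "sensors" section following that scheme might look like the sketch below. The sensor names and file paths are placeholders, and the exact set of required keys (calibration properties, etc.) may differ by DRIVE OS / DriveWorks version:

```json
"sensors": [
    {
        "name": "camera:virtual0",
        "protocol": "camera.virtual",
        "parameter": "file=/path/to/groupA_recording"
    },
    {
        "name": "camera:virtual1",
        "protocol": "camera.virtual",
        "parameter": "file=/path/to/groupC_recording"
    }
]
```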

Hi @SivaRamaKrishnaNV

Let's say we use the object detection pipeline example. The pipeline example only supports 1 camera as of now. We have, say, 4 cameras: 2 in group A and 2 in group C. We wish to stitch the images and then feed them as a single image. I am referring to the 6.0.8 nvsipl_camera fsync option for syncing cameras between the two groups, and I use camera_extra as a reference. Meanwhile, we have some video samples of how it should look when stitched. Hence we could keep evaluating the performance of the live camera feed vs. the recorded video. This was the intention. Please let me know if you foresee any challenges.
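As an aside, the stitching step itself is independent of whether the frames come from live cameras or camera.virtual replay. A minimal sketch in Python with NumPy (the names and tiny shapes here are illustrative only, not from the DriveWorks API) of combining synchronized frames into one detector input:

```python
import numpy as np

def stitch_frames(frames):
    """Naively stitch equal-height frames side by side into one image.

    A stand-in for a real stitcher: assumes the frames are already
    synchronized (e.g. via the nvsipl_camera fsync option)."""
    heights = {f.shape[0] for f in frames}
    assert len(heights) == 1, "all frames must share the same height"
    return np.hstack(frames)

# Four tiny synthetic 2x3 RGB frames standing in for the two group-A
# and two group-C cameras (real frames would be far larger).
frames = [np.full((2, 3, 3), i, dtype=np.uint8) for i in range(4)]
stitched = stitch_frames(frames)
print(stitched.shape)  # (2, 12, 3)
```

In the actual pipeline this would of course happen in C++/DriveWorks on the GPU; the point is only that equal-height, synchronized frames concatenate cleanly into a single image for the detector, whatever their source.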

Would the rig file look something like below:
"sensors": [
    {
        "name": "camera:sample0",
        "nominalSensor2Rig_FLU": {
            "roll-pitch-yaw": [0.0, 0.0, 0.0],
            "t": [1.8621, -0.1939, 1.3165]
        },
        "parameter": "file=/path/to/file",
        "properties": {
            "Model": "ftheta",
            "bw-poly": "0.000000000000000 5.35356812179089e-4 4.99266072928606e-10 4.27370422037554e-12 -6.68245573791717e-16",
            "cx": "1927.764404",
            "cy": "1096.686646",
            "height": "2168",
            "width": "3848"
        },
        "protocol": "camera.virtual"
    }
]

Thanks!!
Jishnu

Dear @jishnuw,
Yes, the rig file is expected to work and read data from the video file.

Hi @SivaRamaKrishnaNV

One more question: is there a limit to the number of virtual camera inputs I can give? Can I make, say, 6 virtual nodes simulating 6 cameras as well?

Thanks!!
Jishnu

Yes, it should work.
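For reference, scaling the same pattern to six virtual cameras is simply six entries in the "sensors" array. The names and paths below are placeholders, and per-camera calibration properties are omitted for brevity:

```json
"sensors": [
    { "name": "camera:virtual0", "protocol": "camera.virtual", "parameter": "file=/path/to/video0" },
    { "name": "camera:virtual1", "protocol": "camera.virtual", "parameter": "file=/path/to/video1" },
    { "name": "camera:virtual2", "protocol": "camera.virtual", "parameter": "file=/path/to/video2" },
    { "name": "camera:virtual3", "protocol": "camera.virtual", "parameter": "file=/path/to/video3" },
    { "name": "camera:virtual4", "protocol": "camera.virtual", "parameter": "file=/path/to/video4" },
    { "name": "camera:virtual5", "protocol": "camera.virtual", "parameter": "file=/path/to/video5" }
]
```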

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.