Please provide the following info (tick the boxes after creating this topic): Software Version
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other
SDK Manager Version
1.9.3.10904
other
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
Hi
I was wondering if it's possible to configure the rig.json to stream via camera.virtual, i.e., to replay a video. If so, could you please share an example of what the parameters would look like? I am using camera_extra as a reference.
The reason for asking: in the larger scope of things, I want to sync, say, group A and group C cameras together, stitch their outputs into one image, and send it to an object detection and tracking pipeline as shown in the examples. I would like to switch between an actual camera and a pre-recorded video to evaluate performance.
If the above is not possible, would your best suggestion be to extend the available object detection example to read from rig.json for multi-camera input, since it currently supports only one camera?
Dear @jishnuw,
Yes, it is possible to configure rig.json with two virtual cameras. May I know whether the camera data is recorded simultaneously, so that frames can be synchronized across the two cameras?
You can set the protocol to camera.virtual and use file=/path/to/file as the parameter for each camera.
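As a rough illustration only, two virtual camera entries in rig.json might look like the sketch below. The sensor names and file paths are placeholders, and the exact field layout may differ slightly between DRIVE OS / DriveWorks versions, so please treat this as a starting point and compare it against the rig files shipped with the SDK samples:

```json
{
  "rig": {
    "sensors": [
      {
        "name": "camera:groupA:0",
        "protocol": "camera.virtual",
        "parameter": "file=/path/to/groupA_cam0_recording.h264"
      },
      {
        "name": "camera:groupC:0",
        "protocol": "camera.virtual",
        "parameter": "file=/path/to/groupC_cam0_recording.h264"
      }
    ]
  }
}
```

Switching between live capture and replay then amounts to swapping the protocol and parameter fields of each sensor entry, while the rest of the rig file stays the same.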
Let's say we use the object detection pipeline example, which supports only one camera as of now. We have, say, four cameras: two in group A and two in group C. We wish to stitch the images and pass them on as a single image. I am referring to the 6.0.8 nvsipl_camera fsync option for syncing cameras between the two groups, using camera_extra as a reference. Meanwhile, we have some video samples of how the stitched output should look, so we could keep evaluating the performance of the live camera feed against the recorded video. This was the intention. Please let me know if you foresee any challenges.
One more question: is there a limit to the number of virtual camera inputs I can provide? For example, can I create six virtual nodes simulating six cameras?