Please provide the following info (check/uncheck the boxes after creating this topic):

Software Version
- [ ] DRIVE OS Linux 5.2.6
- [ ] DRIVE OS Linux 5.2.6 and DriveWorks 4.0
- [ ] DRIVE OS Linux 5.2.0
- [ ] DRIVE OS Linux 5.2.0 and DriveWorks 3.5
- [x] NVIDIA DRIVE™ Software 10.0 (Linux)
- [ ] NVIDIA DRIVE™ Software 9.0 (Linux)
- [ ] other DRIVE OS version
- [ ] other
We are using the Sekonix SF3325-10X (2 MP, 60° FOV) camera provided with the NVIDIA DRIVE AGX Xavier Developer Kit as the front camera in our vehicle, detecting lanes and objects with the DriveNet and LaneNet DNNs.
We also want to calculate the distance to the ego lane and to detected objects (such as cars) for our LKA and collision-avoidance algorithm development. For this purpose we are performing static calibration of the Sekonix camera, following the tutorial in the DriveWorks documentation.
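For reference, the flat-ground geometry we intend to use once the camera is calibrated looks roughly like the sketch below. This is plain NumPy pinhole math, not the DriveWorks API, and the intrinsics, extrinsics, camera height, and coordinate conventions in it are placeholder assumptions, not values from our rig.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Intersect the viewing ray of pixel (u, v) with the flat ground plane z = 0.

    K is the 3x3 intrinsic matrix, R rotates camera axes into the world frame,
    and t is the camera position in the world frame (x forward, y left, z up).
    Assumes an undistorted pinhole image and a flat road.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
    ray_world = R @ ray_cam                               # same ray, world frame
    if ray_world[2] >= -1e-9:
        return None           # ray does not point down toward the ground
    s = -t[2] / ray_world[2]
    return t + s * ray_world  # 3D ground point in metres

# Placeholder numbers: ~2 MP, ~60 deg FOV front camera mounted 1.4 m above the road.
K = np.array([[1650.0,    0.0, 960.0],
              [   0.0, 1650.0, 604.0],
              [   0.0,    0.0,   1.0]])
R = np.array([[ 0.0,  0.0, 1.0],   # camera forward (+z) -> world +x (forward)
              [-1.0,  0.0, 0.0],   # camera right   (+x) -> world -y (y points left)
              [ 0.0, -1.0, 0.0]])  # camera down    (+y) -> world -z (z points up)
t = np.array([1.8, 0.0, 1.4])      # camera 1.8 m ahead of rig origin, 1.4 m above road

point = pixel_to_ground(960.0, 800.0, K, R, t)
print("ground point (x, y, z) [m]:", point, "-> forward distance:", point[0])
```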
We have followed all the steps in the tutorial documentation regarding the prerequisites (as the external camera we are using a Sony FDR-AX700 4K HDR camcorder), scene setup, data capture, and the calibration tools, and have performed the calibration accordingly. However, we are not able to get correct calibration results and are seeing the errors below:

- When running the extrinsic calibration with the Graph Calibration Tool, it reports very large reprojection errors, which also show up as poor results in the external validation images (see the sketch after this list for how we understand reprojection error).
- When generating the rig file from the Graph Calibration Tool output, it produces strange values that are nowhere near what we expect.
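By reprojection error we mean the pixel distance between a detected target feature and the same 3D feature projected back through the estimated camera model. A minimal OpenCV sketch of that metric (the array names and shapes are hypothetical, and this is not the Graph Calibration Tool's internal code):

```python
import numpy as np
import cv2

def rms_reprojection_error(object_pts, image_pts, K, dist, rvec, tvec):
    """RMS pixel distance between detected corners and reprojected 3D corners.

    object_pts: Nx3 target corner positions in the target frame (metres).
    image_pts:  Nx2 detected corner positions in the image (pixels).
    K, dist:    estimated camera matrix and distortion coefficients.
    rvec, tvec: estimated target-to-camera pose (Rodrigues rotation, translation).
    """
    projected, _ = cv2.projectPoints(np.asarray(object_pts, dtype=np.float64),
                                     rvec, tvec, K, dist)
    residuals = projected.reshape(-1, 2) - np.asarray(image_pts, dtype=np.float64)
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
```

A well-converged calibration typically ends up well below a pixel of RMS error per target; errors of many pixels usually point to wrong target dimensions, bad intrinsics, or mismatched correspondences, which is why the large values in our logs worry us.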
Any idea what may be wrong in the static camera calibration we are performing? For reference, I am attaching the calibration directory below.
I am also attaching the complete log of the Graph Calibration Tool run, in which the errors can be seen in the output messages.
I have checked the forum but did not find a similar issue; the topic you pointed to concerns the correct number and placement of targets, which we have already taken care of in our calibration process.
Kindly check the logs and let me know what issue we are facing with the Graph Calibration Tool.
You may have changed the zoom (or a similar setting) during the capture of the external images.
Also, the intrinsic constraints for the camera are not great: the target is too far away and does not cover the whole image.
We didn't change the zoom during the capture of the external images; the zoom setting was kept constant for all of them. Can you check whether all the required external images were captured correctly, whether any specific combination was missed, or whether there is some other issue with the external images?
For the intrinsic constraints, we used a checkerboard target and moved it from one side of the camera's field of view to the other, both horizontally and vertically, at various distances from the camera. Please suggest what additional steps we should follow for capturing the intrinsic calibration constraints, ideally with some sample images or videos, so that we can address your observations about our current intrinsics: what is the optimal range of distances for the checkerboard target, and how should we cover the whole image? There is nothing about this in the current DRIVE Software 10 documentation.
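For reference, the procedure we followed is conceptually the same as the standard OpenCV checkerboard calibration sketched below. This is not the NVIDIA intrinsics constraints tool, and the board dimensions, square size, and paths are placeholders; it mainly illustrates the two things in question, capturing many views at different distances and tilts, and checking the overall RMS that calibrateCamera returns.

```python
import glob
import cv2
import numpy as np

BOARD_COLS, BOARD_ROWS = 9, 6   # inner corners of the checkerboard (placeholder)
SQUARE_SIZE_M = 0.10            # square edge length in metres (placeholder)

# 3D corner coordinates in the board frame (board lies in the z = 0 plane).
objp = np.zeros((BOARD_ROWS * BOARD_COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_COLS, 0:BOARD_ROWS].T.reshape(-1, 2) * SQUARE_SIZE_M

obj_points, img_points = [], []
image_size = None

for path in sorted(glob.glob("intrinsics_captures/*.png")):   # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (BOARD_COLS, BOARD_ROWS))
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print(f"views used: {len(obj_points)}, overall RMS reprojection error: {rms:.3f} px")
```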
I don't know what changed, but the intrinsic calibration of the external camera clearly failed, and it really looks like the intrinsics changed during the process. Perhaps autofocus or image stabilization was enabled.
Regarding the camera intrinsic constraints, the captures do not cover the whole field of view of the camera (the top half is empty). Some of the example images in the documentation were not so good in that version.
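A quick way to check this kind of coverage gap is to accumulate the detected checkerboard corners from all intrinsic captures into a coarse grid and see which cells are never hit. A small sketch, reusing the placeholder corner arrays from the calibration sketch above:

```python
import numpy as np

def coverage_report(corner_sets, image_size, grid=(8, 6)):
    """Fraction of image cells containing at least one detected corner.

    corner_sets: list of per-capture corner arrays (Nx1x2 or Nx2, in pixels).
    image_size:  (width, height) of the camera image.
    grid:        number of cells (cols, rows) used to discretise the image.
    """
    w, h = image_size
    cols, rows = grid
    hit = np.zeros((rows, cols), dtype=bool)
    for corners in corner_sets:
        pts = np.asarray(corners, dtype=np.float64).reshape(-1, 2)
        cx = np.clip((pts[:, 0] / w * cols).astype(int), 0, cols - 1)
        cy = np.clip((pts[:, 1] / h * rows).astype(int), 0, rows - 1)
        hit[cy, cx] = True
    empty = [(r, c) for r in range(rows) for c in range(cols) if not hit[r, c]]
    return hit.mean(), empty

# Example, using img_points and image_size from the calibration sketch above:
# frac, empty_cells = coverage_report(img_points, image_size)
# print(f"covered fraction: {frac:.0%}, empty cells (row, col): {empty_cells}")
```

If the cells in the top rows stay empty, that matches the "top half is empty" observation.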
Thanks for your inputs. Based on your feedback, we repeated the camera calibration process (both intrinsics and extrinsics) and obtained better results than before: this time there were fewer reprojection errors and warnings when running the Graph Calibration Tool, and in almost all validation images the green reprojection masks match the targets. I am attaching the logs and validation images for reference.
We would like to know whether these latest validation results are good enough to conclude that the static camera calibration is correct, or whether there are points we are missing that could further improve the results and avoid the remaining warnings and errors.
As mentioned in the opening post, we plan to estimate distances to lanes and objects once the camera is calibrated. Up to approximately what distance can we expect accurate results when converting image points to world coordinates with the calibrated camera? For this we are using the DriveWorks functions mentioned in the post below by @FabianWeise:
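As a rough way to reason about this ourselves: with a level, forward-looking camera at height h over flat ground, a point at distance d images about fy·h/d pixels below the horizon, so a residual pixel error Δv translates into a distance error of roughly d²·Δv/(fy·h), i.e. it grows quadratically with range. A back-of-the-envelope sketch (the focal length, camera height, and pixel error below are assumptions, not values from our rig):

```python
FY_PX = 1650.0       # vertical focal length in pixels (assumed)
CAM_HEIGHT_M = 1.4   # camera height above the road in metres (assumed)
PIXEL_ERR = 1.0      # residual detection/calibration error in pixels (assumed)

for d in (10.0, 20.0, 40.0, 60.0, 80.0, 100.0):
    rows_below_horizon = FY_PX * CAM_HEIGHT_M / d           # flat-ground row offset
    dist_err = d ** 2 * PIXEL_ERR / (FY_PX * CAM_HEIGHT_M)  # first-order error
    print(f"d = {d:5.1f} m: {rows_below_horizon:6.1f} px below horizon, "
          f"~{dist_err:4.2f} m error per pixel")
```

With these assumed numbers the per-pixel error is a few centimetres at 10 m, a few tens of centimetres around 20 to 40 m, and several metres by 100 m, so the usable range mainly depends on the camera height, focal length, pitch accuracy, and how flat the road actually is.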