I calibrated 4 pinhole cameras for a surround view environment and created the Rig.json file.
Right now I’m able to get the bounding boxes of the detections from the cameras, and I would like some advice on how to reproject the 2D bounding boxes into the 3D world.
For that I need to compute the reprojection ray of an image point, intersect it with the ground plane, and obtain a 3D point.
Any advice on this?
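The ray–ground intersection described above can be sketched independently of the DriveWorks API. This is a minimal pinhole-model example (no lens distortion), assuming `K` is the 3x3 intrinsic matrix and `R`, `t` the camera-to-rig (camera-to-world) rotation and translation, with the ground plane at z = 0:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane z = 0.

    K: 3x3 intrinsic matrix.
    R, t: camera-to-world rotation and translation, i.e. a camera-frame
    point p_c maps to the world frame as R @ p_c + t.
    """
    # Ray direction in the camera frame (ideal pinhole, no distortion)
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into the world frame; its origin is the camera centre t
    d_world = R @ d_cam
    if abs(d_world[2]) < 1e-9:
        raise ValueError("Ray is parallel to the ground plane")
    # Solve t_z + s * d_z = 0 for the scale s along the ray
    s = -t[2] / d_world[2]
    if s < 0:
        raise ValueError("Ground intersection lies behind the camera")
    return t + s * d_world
```

For a bounding box, the bottom-centre pixel of the box is typically the point reprojected, since that is where the object touches the ground.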
So are you asking for guidance on how to convert a detected object’s 2D bounding box coordinates to 3D coordinates using the DW API?
We are working on a tutorial similar to your request. We will update you once it is done.
I would like to use the rig.json file generated during calibration to do this, but I don’t know how to parse that file to obtain the camera intrinsics and extrinsics and then compute the 3D coordinates.
Until the tutorial is finished, would you mind sharing some information on how to do it?
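As a rough illustration of pulling intrinsics and extrinsics out of a rig file: the key names below (`cameras`, `intrinsics`, `extrinsics`, `fx`/`fy`/`cx`/`cy`, `rotation`, `translation`) are assumptions for the sketch, not the actual DriveWorks rig.json schema, so adapt them to the layout of your generated file:

```python
import json
import numpy as np

# Illustrative rig file contents; the schema here is hypothetical,
# not the real DriveWorks rig.json format.
RIG_JSON = """
{
  "cameras": [
    {
      "name": "front",
      "intrinsics": {"fx": 1000.0, "fy": 1000.0, "cx": 960.0, "cy": 540.0},
      "extrinsics": {
        "rotation": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
        "translation": [0.0, 0.0, 1.5]
      }
    }
  ]
}
"""

def load_camera(rig_text, name):
    """Return (K, R, t) for the named camera from the rig JSON text."""
    rig = json.loads(rig_text)
    cam = next(c for c in rig["cameras"] if c["name"] == name)
    i = cam["intrinsics"]
    K = np.array([[i["fx"], 0.0, i["cx"]],
                  [0.0, i["fy"], i["cy"]],
                  [0.0, 0.0, 1.0]])
    R = np.array(cam["extrinsics"]["rotation"], dtype=float)
    t = np.array(cam["extrinsics"]["translation"], dtype=float)
    return K, R, t
```

Once `K`, `R`, and `t` are loaded this way, they can be fed straight into a ray–ground intersection to turn a bounding-box pixel into a 3D point.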
I have just finished the tutorial for the transformation; it will be included in the upcoming release.
Until then, I am afraid we cannot share the information with you, as it needs to be reviewed internally first.
Hi, I am also interested in this information. Is there any update on when this tutorial will be released?