How to obtain real-world coordinates from a bounding box

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson)
• DeepStream Version (latest)
• TensorRT Version (latest)
• Issue Type (questions)
• Requirement details

I would like to implement a DeepStream pipeline that, after obtaining the bounding box, outputs the real-world coordinates of the detected object. I have the camera's position from GPS and I also have the camera's velocity. The camera is moving and the detected object is moving as well. A rough sketch of the geometry I have in mind is shown below.
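
For illustration, here is a minimal sketch of that idea, assuming a calibrated pinhole camera, a flat ground plane, and a per-frame camera pose derived from GPS/IMU; the intrinsics, pose values, and helper names are placeholders for illustration, not part of DeepStream itself:

```python
import numpy as np

# Placeholder calibration and pose; replace with values from your own setup.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])                 # intrinsics for a 1920x1080 camera
pitch = np.deg2rad(10.0)                         # camera tilted 10 deg toward the ground
R_wc = np.column_stack([
    [1.0, 0.0, 0.0],                             # image right  -> world x (east)
    [0.0, -np.sin(pitch), -np.cos(pitch)],       # image down   -> mostly world -z
    [0.0,  np.cos(pitch), -np.sin(pitch)],       # optical axis -> forward, slightly down
])

def bbox_to_ground(bbox, cam_pos_world):
    """Project the bottom-centre of a detector bounding box onto a flat ground
    plane (z = 0), expressed in a local metric frame anchored at the GPS origin.

    bbox:          (left, top, width, height) in pixels, e.g. from NvDsObjectMeta rect_params
    cam_pos_world: camera position (x, y, z) in metres for this frame, converted from GPS
    """
    u = bbox[0] + bbox[2] / 2.0                  # bottom-centre pixel of the box
    v = bbox[1] + bbox[3]
    ray = R_wc @ np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in the world frame
    s = -cam_pos_world[2] / ray[2]               # scale at which the ray hits z = 0
    return cam_pos_world + s * ray               # 3D point on the ground plane

print(bbox_to_ground((900.0, 700.0, 120.0, 240.0), np.array([0.0, 0.0, 1.5])))
```

Because the camera pose is taken per frame, the same projection should stay valid while the camera moves, as long as the pose from GPS/IMU is updated for each frame.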

Regards

It seems you would need 3D model detection, but the current DeepStream release does not support it.

Hi, actually no, I do not need a 3D model. What I need is the kind of 3D camera calibration that works with a moving object and a moving camera, plus depth. A sketch of what I mean when depth is available follows.
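
For reference, a minimal sketch of that case, assuming a per-pixel metric depth (e.g. from a depth sensor or stereo) and a per-frame camera pose from GPS/heading; all matrices, numbers, and function names below are assumptions for illustration only:

```python
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])                  # placeholder intrinsics from calibration

def pixel_depth_to_world(u, v, depth_m, R_wc, cam_pos_world):
    """Lift a pixel with known metric depth into the world frame.

    depth_m:       z-depth of the pixel along the optical axis (metres)
    R_wc:          camera-to-world rotation for this frame (from IMU/heading)
    cam_pos_world: camera position in a local metric frame converted from GPS
    """
    p_cam = depth_m * np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel -> camera frame
    return R_wc @ p_cam + cam_pos_world                          # camera -> world frame

# Example with made-up numbers: object at pixel (1000, 600), 12 m away,
# camera level and facing "north", positioned at (100, 50, 1.5) in local metres.
R_wc = np.column_stack([[1.0, 0.0, 0.0],     # image right  -> world east
                        [0.0, 0.0, -1.0],    # image down   -> world down
                        [0.0, 1.0, 0.0]])    # optical axis -> world north
print(pixel_depth_to_world(1000, 600, 12.0, R_wc, np.array([100.0, 50.0, 1.5])))
```

With both the camera and the object moving, the world coordinate of the object at each frame comes out of this per-frame transform; the object's velocity could then be estimated by differencing those world positions over time.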

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

OK, could you share more specific information with us so we can help further?