Does the DS SDK have any support for distance/angle calculations to detected objects?

Hi,

I know how to determine the distance and angle between a camera and an object in the image using OpenCV undistortion and homography calculations. But this only works reliably for cameras with a fixed viewpoint. It cannot work reliably for cameras whose perspective changes, which is e.g. the case if the camera is moving due to the movement of its carrier…

The question is: Does the DS SDK provide any functionality to support distance/angle calculations w.r.t. the camera?

By a moving “carrier”, do you mean a car/drone, or something simpler like a PTZ camera?

Hi @rsc44, thanks for the follow-up. More specifically, a forklift. Meanwhile I think I have found a satisfactory solution, which requires me to know the real height of an object, the focal length, and the sensor height of the camera. It shows pretty good results.
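For anyone interested, the relationship boils down to the standard pinhole model. A minimal sketch (the function and variable names are my own, and it assumes the detection bounding box spans the full real height of the object):

```python
def estimate_distance(real_height_m, focal_length_mm, sensor_height_mm,
                      image_height_px, bbox_height_px):
    """Pinhole-model distance estimate from the known real height of an object.

    distance = real_height * focal_length_in_pixels / height_in_pixels
    """
    # Convert the focal length from millimetres to pixels via the sensor height.
    focal_px = focal_length_mm * image_height_px / sensor_height_mm
    return real_height_m * focal_px / bbox_height_px


# Example: 1.8 m person, 4 mm lens, 3.6 mm sensor, 480 px frame, 160 px bbox
# -> 6.0 m away.
```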

Thanks

@foreverneilyoung You should check out the Isaac SDK; it seems perfect for your use case. Plus it will give you the option of integrating more sensors than just cameras.

If your objective is to only use cameras:

  1. I commend the effort.
  2. There’s a lot of open research on the topic. I have some experience, but I have usually relied on NNs to assist. If you’re looking for something more lightweight, you’ll need to get creative, as your options will vary depending on the deployment environment. (I know you said forklift, but those can be used in a large variety of environments.) I recommend starting here for an NN-based solution: https://github.com/harshilpatel312/KITTI-distance-estimation
     I recommend starting here for a lightweight/creative solution: Redirect

Thanks for the interesting links. I’m pretty satisfied with the inference results of DeepStream. I was just looking for a lightweight way to retrieve the distance to a detected object, e.g. a person. It is all about collision prevention; there is no need for AGV navigation or tracking.

@foreverneilyoung Hmm, shot in the dark, but maybe just use pose estimation? You could then perform triangulation using the knees, shoulders, or elbows? (Allowing a person to be partially occluded, yet still kept safe.)

You’d have to determine an average distance between someone’s knees, but I think it could work.
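A rough sketch of that idea, assuming pose keypoints come in as pixel coordinates and using a made-up average knee separation (the constant and the function name are illustrative, not from any SDK):

```python
import math

AVG_KNEE_SEPARATION_M = 0.25  # assumed average; would need real-world calibration

def distance_from_keypoints(kp_a, kp_b, focal_px,
                            real_separation_m=AVG_KNEE_SEPARATION_M):
    """Estimate camera-to-person distance from two detected keypoints.

    Same pinhole relation as with a full-body bbox, but it keeps working
    when the person is partially occluded and only two joints are visible.
    """
    px_dist = math.hypot(kp_a[0] - kp_b[0], kp_a[1] - kp_b[1])
    if px_dist < 1e-6:
        return None  # keypoints coincide; no estimate possible
    return real_separation_m * focal_px / px_dist
```

Caveat: the pixel separation is foreshortened (and the distance therefore overestimated) whenever the line between the two joints is not roughly perpendicular to the optical axis.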

I have a rough idea of what you are getting at, but not more :) Right now I’m using an inference solution based on the DeepStream Python USB camera sample, extended to use 3 cameras simultaneously. I had a homography-based solution for the distance estimation (I took the center point of the bottom edge of the detection rectangle as the measurement point). This worked fine in the lab but failed under the dynamic conditions of forklift operation. I need to compute the distance for each detection as it comes in, so I doubt I would have much headroom for yet another NN. I know my current solution is not as accurate as it should be, and it fails if the person is not detected at full size, but it is accepted ATM.
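For context, the homography approach I mentioned is essentially the following sketch. The matrix values below are placeholders; the real matrix comes from a one-off calibration against known floor markers (e.g. via cv2.findHomography):

```python
import math

# Ground-plane homography obtained offline from four known floor points;
# the values below are illustrative placeholders, not a real calibration.
H = [[0.01, 0.0,   -3.2],
     [0.0,  0.03,  -9.0],
     [0.0,  0.005,  1.0]]

def ground_point(px, py):
    """Map an image pixel to floor coordinates (metres) via the homography."""
    v = (px, py, 1.0)
    wx, wy, wz = (sum(r[i] * v[i] for i in range(3)) for r in H)
    return wx / wz, wy / wz

def ground_distance(bbox):
    """Distance to the bottom-centre of a detection bbox given as (x, y, w, h)."""
    x, y, w, h = bbox
    gx, gy = ground_point(x + w / 2.0, y + h)  # foot point of the detection
    return math.hypot(gx, gy)
```

The weak point is exactly what I described: H is only valid for one camera pose, so any tilt or suspension movement of the forklift invalidates the calibration.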

Thanks for your input, much appreciated.

Yeah man, I know what you mean, it always works perfectly in the lab lol

Something like this could be adapted for your use case:

https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE

Looks great. I’m currently doing inference at 30 fps at 640x480 per camera, so detection is pretty fast. Don’t you think such an additional NN computation has its cost and would finally drag the FPS down to some very low value? Apart from the fact that there could also be a resource issue with the required GPU? (Not sure about the latter; maybe there is still some GPU capacity left besides DS, I don’t know exactly.)

You could use DeepStream for pose estimation (there’s a sample app for it, and it has good fps), then do the post-processing outside DS.

Would you have a link to such a sample at hand?

I don’t have one on me, but it should be under /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/

You mean this? Pose Estimation with DeepStream