Object distance

When I use GoogLeNet (https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect.md) to detect an object, how can I use Python to measure the detected object’s distance? And how can I do the same in C?

To measure object distance accurately you generally need more than one camera, sometimes with a projected light pattern as an aid. Lidar can also be used. Here is an example setup on the Nano by Jetson Hacks (Kangalow on the forum here):
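As a sketch of why two cameras help: with a calibrated, rectified stereo pair, depth follows directly from how far a point shifts between the two images (the disparity), via Z = f * B / d. The focal length, baseline, and disparity values below are made-up illustration numbers, not calibration data from any real rig.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px    -- focal length in pixels
    baseline_m  -- distance between the two cameras in meters
    disparity_px -- horizontal shift of the point between images, in pixels
    """
    if disparity_px <= 0:
        return float("inf")  # no match, or point effectively at infinity
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 6 cm baseline, 35 px disparity
z = depth_from_disparity(700.0, 0.06, 35.0)
print(round(z, 2))  # → 1.2 (meters)
```

In practice the disparity comes from a stereo-matching step (e.g. OpenCV's block matchers) over calibrated images; this only shows the geometry that converts it to meters.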

https://www.youtube.com/watch?v=aFIix3nmYIA

When I use a Logitech C270 webcam, can the program measure the object distance?

How precise do you need it to be? With a single camera it’s a much more difficult problem, and your accuracy isn’t going to be very good.

Sorry, I have another question. Can I change the Python program (https://github.com/dusty-nv/jetson-inference/blob/35c0d57766f90faddae372ff1d807e9fc001d26b/python/examples/imagenet-camera.py#L58) to measure the object distance?

The best you can do, I believe, is make a guess at the object size, and if you have a network that’s trained for it you can make a guess at a depth map based on what the network thinks an object is.

Can that program be modified to do that? Sure, but you’ll need to use another model, and probably need to modify more than just the Python.
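To make the "modify more than just the Python" point concrete, here is a hypothetical sketch of the extra step you would bolt on after classification/detection: run a separate monocular depth network (not included here) over the frame, then sample its output at the detected box. The 4×4 depth map, the box coordinates, and the helper name are all made-up toy values, not part of the jetson-inference API.

```python
def depth_at_box_center(depth_map, box):
    """Sample a per-pixel depth map at the center of a bounding box.

    depth_map -- 2D list of depth values (e.g. from a monocular depth model)
    box       -- (left, top, right, bottom) in the depth map's pixel coords
    """
    left, top, right, bottom = box
    cx = (left + right) // 2
    cy = (top + bottom) // 2
    return depth_map[cy][cx]

# Toy depth map: a nearer object (depth 2.0) surrounded by farther background.
toy_depth = [
    [5.0, 5.0, 4.0, 4.0],
    [5.0, 2.0, 2.0, 4.0],
    [5.0, 2.0, 2.0, 4.0],
    [5.0, 5.0, 4.0, 4.0],
]
print(depth_at_box_center(toy_depth, (1, 1, 3, 3)))  # → 2.0
```

Note that most single-camera depth models produce *relative* (unscaled) depth, so even this sampling step only ranks objects by distance unless you add some absolute reference.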

A trained model and an example of what’s possible with one camera can be found here. You’ll have to adapt that to Nvidia’s code. Warning: this is a very hard problem.

That should also give you an idea of the sort of accuracy that’s possible. It’s a rough guess, basically. It’s suitable for generating depth maps for post-processing photos, and it doesn’t even do that particularly well. I suspect this is why so many new phones come with multiple cameras instead. The Pixel 2’s Camera app did an OK job with a single camera and similar techniques for the same purpose, but it failed on a lot of things (e.g. hair).