Run depthnet and detectnet simultaneously on the same camera

Hey guys,
I’m trying to run depthNet and detectNet (from dusty-nv/jetson-inference · GitHub) at the same time on the same camera. I’m hoping to have detectNet detect people, then use the depthNet output at each detection’s centroid to output both the image feed and the distance to the person. I’ve gotten both running independently; I’m just not sure how to use DeepStream to run them together. Any help would be appreciated!

System:
Jetson Orin Nano Dev Kit
Ubuntu 20.04.6
DeepStream 6.1

Do you mean you want to run people detection first and then get the distances to the people in the same DeepStream pipeline?

The usage depends on the models’ features.

It seems detectNet outputs bboxes and depthNet outputs a point cloud (depth map) for the whole image. You may use the two models in the same pipeline. The depthNet output point cloud can be stored in the frame user meta. See NVIDIA DeepStream SDK API Reference: _NvDsFrameMeta Struct Reference | NVIDIA Docs

Then you can get both the bboxes and the point cloud downstream from NvDsBatchMeta, and implement your own algorithm on that data.
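Once both outputs are available downstream, the per-detection distance lookup itself is simple. A minimal sketch in Python with NumPy, independent of DeepStream; the function name and the (left, top, width, height) bbox layout mirror NvDsObjectMeta.rect_params but are my own illustrative assumptions:

```python
import numpy as np

def distance_at_bbox_centroid(depth_map, bbox):
    """Sample a depth map at the centroid of a detection bbox.

    depth_map: 2D array (H, W) of per-pixel depth values, already
               scaled/resized to the camera frame resolution.
    bbox: (left, top, width, height) in pixel coordinates, as in
          NvDsObjectMeta.rect_params.
    """
    left, top, width, height = bbox
    cx = int(left + width / 2)
    cy = int(top + height / 2)
    # Clamp to the image bounds to be safe near the frame edges.
    h, w = depth_map.shape
    cx = min(max(cx, 0), w - 1)
    cy = min(max(cy, 0), h - 1)
    return float(depth_map[cy, cx])

# Example: a synthetic 4x4 depth map and one "person" bbox.
depth = np.arange(16, dtype=np.float32).reshape(4, 4)
d = distance_at_bbox_centroid(depth, (1, 1, 2, 2))  # centroid lands at pixel (2, 2)
```

In a real pipeline you would run this inside a pad probe after the second nvinfer element, pulling the bboxes from the object meta and the depth tensor from the frame user meta. Averaging a small window around the centroid instead of a single pixel is usually more robust to depth noise.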

Thank you, I will look into that. Do you have any suggestions on how I could transfer over the mobilenetV2 and the depthnet to a model that Deepstream can use? Thank you!

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

DeepStream supports several model types; please refer to Gst-nvinfer — DeepStream documentation 6.4 documentation.

The dusty-nv/jetson-inference repo (Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson, github.com) is open source, so you can learn from the code how the models are downloaded and how inference preprocessing and postprocessing work. E.g., the depthNet models are downloaded by jetson-inference/tools/download-models.sh at master · dusty-nv/jetson-inference (github.com), and the inference preprocessing is in jetson-inference/c/depthNet.cpp at master · dusty-nv/jetson-inference (github.com). The output of the depthNet ONNX model is a depth-mask matrix, so no extra postprocessing is needed.
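Since the depthNet ONNX output needs no built-in parsing, one way to run it through Gst-nvinfer is as a "other"-type network with raw tensor output attached to the metadata. A sketch of such a config; the file names, batch size, and precision are assumptions you would adjust:

```ini
# Hypothetical nvinfer config for the depthNet ONNX model
# (file names and engine name are illustrative, not verified).
[property]
gpu-id=0
onnx-file=depthnet.onnx
model-engine-file=depthnet.onnx_b1_gpu0_fp16.engine
# network-mode=2: FP16 precision
network-mode=2
# network-type=100: "other" - nvinfer skips its built-in bbox/label parsing
network-type=100
# Attach the raw output tensors to the frame meta for downstream probes
output-tensor-meta=1
```

With output-tensor-meta=1, the depth tensor shows up as NvDsInferTensorMeta in the frame user meta, which is where a downstream probe can read it alongside the detectNet bboxes.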

The DeepStream SDK provides documentation and samples showing how to integrate different models with DeepStream.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.