Adding the deepstream-bodypose-3d model into the deepstream-imagedata-multistream Python app as an SGIE

I would like to add the deepstream-bodypose-3d model into the deepstream-imagedata-multistream Python app as an SGIE. My questions are: (1) where is a sample reference for adding an SGIE in the Python apps? (2) what components still need to be added to the pipeline in addition to the deepstream-bodypose-3d model?

I followed the runtime_source_add_delete Python app to add the SGIE. config_infer_secondary_bodypose3dnet.txt was used for the SGIE. The following are the steps for adding the SGIE.
(1)

    sgie = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie:
        sys.stderr.write(" Unable to make sgie \n")

(2)

    sgie.set_property('config-file-path', "config_infer_secondary_bodypose3dnet.txt")

(3)

    print("Adding elements to Pipeline \n")
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(filter1)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(sink)

(4)

    print("Linking elements in the Pipeline \n")
    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(sgie)
    sgie.link(nvvidconv1)
    nvvidconv1.link(filter1)
    filter1.link(tiler)
    tiler.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)
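One detail worth checking in config_infer_secondary_bodypose3dnet.txt: if the plan is to read the key points from the raw model output in a Python probe (as discussed further down in this thread), nvinfer must be told to attach its output tensors to the metadata. A minimal sketch of the relevant properties follows; the `operate-on-gie-id` value and the omitted model-file entries are assumptions, not copied from the actual sample config:

```ini
[property]
# Attach raw output tensors as NvDsInferTensorMeta so a pad probe can parse them.
output-tensor-meta=1
# Run as a secondary GIE on objects detected upstream.
process-mode=2
# ID of the primary GIE whose detections this SGIE operates on (assumed value).
operate-on-gie-id=1
```

Without `output-tensor-meta=1`, the probe will only see the standard object metadata and no tensor output to parse.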

What did I miss?
The app is running with no errors.
For my application, I don’t need to display the skeleton; I just need to collect the metadata for the key points.
How can I collect the key points of each detected person?

Could you attach the whole pipeline you need and describe your requirements in detail?

My pipeline is as follows.


    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(filter1)
    pipeline.add(nvvidconv1)
    pipeline.add(nvosd)
    pipeline.add(sink)

I have a tiler_sink_pad_buffer_probe to extract the metadata for bounding boxes and object IDs.
The SGIE is the bodypose-3d model.
Can I extract keypoints from the SGIE in that same probe, or do I need a separate sgie_src_pad_buffer_probe?

How do I extract keypoints from the SGIE?
The sample code is in C++. In Python, how can I get the keypoints?

We don’t have a Python version that parses the points at the moment. You need to study the algorithm in the C code and then convert it to a Python version.

Thanks, I’ll do that. Can I parse the key points in the tiler probe?

Yes. You can get the tensor output in the probe function, but you need to implement a Python version of the code that parses that output into key points.
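As a rough illustration of what that Python parsing step could look like once the tensor has been read out of the metadata: the sketch below assumes the output is already available as a flat list of floats laid out as (x, y, z) triples per keypoint. Both the 34-keypoint count and the flat x/y/z layout are assumptions for illustration; the authoritative layout is whatever `parse_25dpose_from_tensor_meta` does in the C++ sample.

```python
# Hypothetical helper: turn a flat tensor buffer into per-keypoint tuples.
# Assumes NUM_KEYPOINTS * 3 floats laid out as [x0, y0, z0, x1, y1, z1, ...].
# The real bodypose-3d layout must be taken from the C++ sample's
# parse_25dpose_from_tensor_meta, not from this sketch.
NUM_KEYPOINTS = 34  # assumption; check the model's output spec

def parse_keypoints(flat):
    if len(flat) != NUM_KEYPOINTS * 3:
        raise ValueError("unexpected tensor size: %d" % len(flat))
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# Example with dummy data standing in for the tensor contents:
dummy = [float(i) for i in range(NUM_KEYPOINTS * 3)]
kps = parse_keypoints(dummy)
print(len(kps), kps[0])  # 34 (0.0, 1.0, 2.0)
```

In a real probe, the flat buffer would come from the SGIE's attached tensor meta rather than a dummy list.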

What I’m doing is extracting the existing key-point parsing class into a separate C++ file and wrapping it with pybind11 so it can be called from the Python probe.
The frame meta, object meta, and tensor meta could be passed as parameters when calling the C++ code from Python. What do you think?
I need to wrap OneEuroFilter and parse_25dpose_from_tensor_meta.
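For what it's worth, the One Euro filter itself is a small, self-contained algorithm (Casiez et al., CHI 2012), so a pure-Python port is one alternative to wrapping the C++ class. Below is a minimal sketch; the parameter defaults are generic assumptions, not the values used in the deepstream-bodypose-3d sample.

```python
import math

class OneEuroFilter:
    """Minimal One Euro filter sketch (Casiez et al.).

    Parameter defaults are assumptions, not the sample app's values."""

    def __init__(self, freq=30.0, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq            # expected update rate in Hz
        self.min_cutoff = min_cutoff
        self.beta = beta            # speed coefficient for adaptive cutoff
        self.d_cutoff = d_cutoff    # cutoff for the derivative signal
        self.x_prev = None
        self.dx_prev = 0.0

    @staticmethod
    def _alpha(cutoff, freq):
        # Smoothing factor for an exponential filter at the given cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * freq)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # Smooth the derivative of the signal first.
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff, self.freq)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # Cutoff adapts to speed: low when slow (less jitter),
        # high when fast (less lag).
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, self.freq)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev = x_hat
        self.dx_prev = dx_hat
        return x_hat
```

One filter instance would be kept per keypoint coordinate per tracked object, keyed by the tracker's object ID.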

I don’t think this is going to work. Since your project is based in Python, you would need to get the tensor output in Python and pass the data to C++ first.
We recommend that you use our C/C++ code directly.

Basically, I only need the 3D center point of the human body.
I don’t need the whole skeleton.
What would be the best way to do this? Any suggestions?

The reason I can’t use the C++ app is that I have implemented spatio-temporal activity recognition in a DeepStream Python app.
The Python app interfaces with 20 CCTV streams concurrently and is working well.
Now I would like to add the 3D centre point of each individual person.

Can you roughly use the center point of the object bbox as the 3D Center point? It might be easier this way.
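As a sketch of that rough substitute: the bbox midpoint can be computed directly from the fields already available in the existing tiler probe. The helper below is illustrative, not part of the sample app; in the probe, the four arguments would come from the object's `rect_params` (left, top, width, height).

```python
def bbox_center(left, top, width, height):
    """Rough 2D substitute for the 3D body center: the bbox midpoint.

    In a tiler probe these values would come from obj_meta.rect_params.
    Note there is no depth (z) component here."""
    return (left + width / 2.0, top + height / 2.0)

# Example with a dummy 100x200 box whose top-left corner is at (10, 20):
print(bbox_center(10, 20, 100, 200))  # (60.0, 120.0)
```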

But that doesn’t give me the z information, only x and y.

Yes, this is only a rough substitute for the 3D center point. If you want the precise point, you have to do the binding for parse_25dpose_from_tensor_meta yourself.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.