Deepstream pose estimation: How can I create custom pose parse function?

Hi guys,
I’m trying to create a pose parse function like the one in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD, but I only see samples for object detection and classification. I’m a newbie, can anybody help me?
Thanks so much!


Hey, please share your setup with us.

Yeah, currently DS only supports customizing the post-processor for detection, classification, and instance segmentation models.
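For reference, those supported custom parsers all follow the prototypes defined in nvdsinfer_custom_impl.h; the objectDetector_SSD sample implements the detection one. A minimal skeleton of that form (the parsing body here is just a placeholder, not the actual SSD logic):

    #include <vector>
    #include "nvdsinfer_custom_impl.h"

    /* Detection-parser prototype that nvinfer can load from a custom lib.
     * Keypoint/pose outputs have no equivalent hook in DS 5.0. */
    extern "C" bool NvDsInferParseCustomSSD(
        std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
    {
        for (auto const &layer : outputLayersInfo) {
            (void) layer;   /* decode boxes/scores into objectList here */
        }
        return true;
    }

    /* Compile-time check that the signature matches what nvinfer expects. */
    CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);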

Hi,
I just cloned your deepstream pose estimation project and converted the model from PyTorch → ONNX → TensorRT engine.
Your application runs really well, but the accuracy needs to improve. Then I created parse_keypoint.cpp containing an NvDsInferParseKeypoints function:

bool NvDsInferParseKeypoints(
    NvDsInferTensorMeta *tensor_meta,
    float threshold,
    float link_threshold,
    Vec1D<NvDsInferKeypointDetectionInfo> &keypointList);

NvDsInferKeypointDetectionInfo is declared in nvdsinfer.h:

typedef struct {
    float x, y;
} Point;

typedef struct {
    std::vector<Point> keypoint;
    int numOfPoints;
} NvDsInferKeypointDetectionInfo;
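For what it’s worth, here is a minimal sketch of what the body of such a parser could look like, assuming the raw tensors are attached as NvDsInferTensorMeta (the out_buf_ptrs_host access follows the DeepStream tensor-meta sample; the heatmap/part-affinity decoding is only stubbed out, and Vec1D is assumed to be a std::vector alias):

    #include <vector>
    #include "nvdsinfer.h"
    #include "gstnvdsinfer.h"   /* NvDsInferTensorMeta */

    typedef struct { float x, y; } Point;

    typedef struct {
        std::vector<Point> keypoint;
        int numOfPoints;
    } NvDsInferKeypointDetectionInfo;

    template <typename T>
    using Vec1D = std::vector<T>;

    /* Hypothetical keypoint parser: walks the output layers carried in
     * NvDsInferTensorMeta and fills keypointList. The decoding depends on
     * the pose model's output layout and is not implemented here. */
    extern "C" bool NvDsInferParseKeypoints(
        NvDsInferTensorMeta *tensor_meta,
        float threshold,
        float link_threshold,
        Vec1D<NvDsInferKeypointDetectionInfo> &keypointList)
    {
        for (unsigned int i = 0; i < tensor_meta->num_output_layers; i++) {
            NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
            /* Host copies of the tensors live in out_buf_ptrs_host. */
            const float *data =
                static_cast<const float *>(tensor_meta->out_buf_ptrs_host[i]);
            (void) layer;
            (void) data;
            /* Decode heatmaps / part-affinity fields here, apply `threshold`
             * and `link_threshold`, and push results into keypointList. */
        }
        return true;
    }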

Then I created a Makefile and ran make. It didn’t produce any errors.
Now I’m going to test the .so shared lib.
I want to know whether there is a standard form for a custom output parser, and whether I can get the output via NvDsInferTensorMeta (I see that all the samples get the output layers via NvDsInferLayerInfo)?
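For reference, the usual way to expose the raw tensor output is via the nvinfer config rather than the detection-parser hooks; a hedged sketch (the engine path is a placeholder, and network-type=100 / output-tensor-meta=1 follow the deepstream_pose_estimation sample config):

    [property]
    gpu-id=0
    # engine converted from the pose model (path is a placeholder)
    model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
    # 100 = "other": skip the built-in detector/classifier parsing
    network-type=100
    # attach the raw output tensors downstream as NvDsInferTensorMeta
    output-tensor-meta=1

With that in place, a keypoint parser can be called from application code; parse-bbox-func-name / custom-lib-path only cover detection outputs.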
**Update:** I tried running the default deepstream-app with the command deepstream-app -c config_file_name, but it throws an error like this:


Thanks so much!

What accuracy issue are you observing?
For the pose model, you can only do the post-processing via a GStreamer probe.
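To illustrate, a minimal sketch of that probe approach, modeled on the deepstream-infer-tensor-meta sample (it needs output-tensor-meta=1 on nvinfer; the function name and the commented-out parser call are illustrative):

    #include <gst/gst.h>
    #include "gstnvdsmeta.h"
    #include "gstnvdsinfer.h"   /* NVDSINFER_TENSOR_OUTPUT_META, NvDsInferTensorMeta */

    /* Probe on the nvinfer src pad: pulls the raw tensor meta attached to
     * each frame so the pose keypoints can be decoded in application code. */
    static GstPadProbeReturn
    pose_src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
        GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
        NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
        if (!batch_meta)
            return GST_PAD_PROBE_OK;

        for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
            NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

            for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user; l_user = l_user->next) {
                NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
                if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
                    continue;

                NvDsInferTensorMeta *tensor_meta =
                    (NvDsInferTensorMeta *) user_meta->user_meta_data;

                /* Hand the raw tensors to the keypoint parser (illustrative). */
                /* NvDsInferParseKeypoints (tensor_meta, 0.3f, 0.3f, keypointList); */
                (void) tensor_meta;
            }
        }
        return GST_PAD_PROBE_OK;
    }

The probe would be attached to the nvinfer element’s src pad with gst_pad_add_probe (pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pose_src_pad_buffer_probe, NULL, NULL).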

I got about 40% mAP (IoU 0.5) on COCO with the ResNet model, and it doesn’t run very well on my dataset. How can I retrain it or train a new custom model (with another backbone)?

As for how to retrain a model, I think that’s not a DeepStream question; maybe you can create a new topic in the TLT forum to ask for help.


Thank you very much!