Mispredictions with custom-trained PoseClassificationNet when integrated with DeepStream Pose Classification

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.3
• JetPack Version (valid for Jetson only) - 5.1.1
• TensorRT Version - 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) - Questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have trained a custom PoseClassificationNet using the NVIDIA TAO Toolkit for 3 classes and achieved an accuracy of > 90% for each class.

When I integrate it with DeepStream Pose Classification, the results are completely wrong, even for the videos that were used to train the PoseClassificationNet.

Q1. Which configurable parameters should we tune to get better results?

Q2. In the deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt file (master branch of the NVIDIA-AI-IOT/deepstream_tao_apps GitHub repo), we have clearly set frames-sequence-length=30, yet a prediction is still produced for every frame instead of once every 30 frames, which might affect the accuracy of the model.
Could you help me understand the purpose of frames-sequence-length if predictions are meant to happen on every frame?
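For reference, the relevant fragment of the preprocess config looks roughly like this (other keys omitted; the group and key names follow the gst-nvdspreprocess config format, so please check them against your local copy of the file):

```
[user-configs]
# Number of frames gathered into one temporal window that is fed to
# PoseClassificationNet as a single input sequence
frames-sequence-length=30
```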

How did you get the accuracy value?

You may need to customize your own preprocessing according to your model.

Frames received before this count is reached are padded with default data, so the early predictions may not be accurate.
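Conceptually the behavior is like the sketch below. This is only an illustration of what the statement above means, not the actual nvdspreprocess_lib code; the joint count and the zero-filled defaults are assumptions for the example.

```cpp
// Sketch only: shows how a sequence shorter than frames-sequence-length is
// padded with default (zeroed) frames before being handed to the classifier.
#include <vector>

constexpr int kNumJoints = 34;        // assumed joint count, check your model
constexpr int kCoordsPerJoint = 3;    // x, y, z

using PoseFrame = std::vector<float>; // flattened keypoints for one frame

std::vector<PoseFrame> build_sequence(const std::vector<PoseFrame>& history,
                                      int frames_sequence_length) {
    std::vector<PoseFrame> seq;
    seq.reserve(frames_sequence_length);

    // Until enough real frames have arrived, the leading slots hold default
    // data, so predictions made during this warm-up window are unreliable.
    int missing = frames_sequence_length - static_cast<int>(history.size());
    for (int i = 0; i < missing; ++i)
        seq.emplace_back(kNumJoints * kCoordsPerJoint, 0.0f);

    // Real frames fill the remaining slots of the window.
    for (const auto& frame : history)
        seq.push_back(frame);

    return seq;
}
```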

Hi,

How did you get the accuracy value?

I trained the TAO PoseClassificationNet on my data and got an accuracy of more than 90% for each class on the test dataset.

The problem arises when I integrate it with DeepStream Pose Classification. After integration, when I test on a training video that was used to train the TAO PoseClassificationNet, the results are completely wrong. I have even tested with different frames-sequence-length values.

The same video gives correct predictions when tested solely with the TAO PoseClassificationNet, but not when tested through the DeepStream Pose Classification pipeline.

You may need to customize your own preprocessing according to your model.

Could you please elaborate in more detail on what kind of preprocessing is required in order to improve the accuracy of the pipeline, and where it needs to be added?

Frames received before this count is reached are padded with default data, so the early predictions may not be accurate.

Could you please let me know where the code for this is in the DeepStream Pose Classification scripts and how we can disable it?

Thanks

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Can you describe in detail how you tested solely with TAO, and send the source video to us?

The main source code is in nvdspreprocess_lib.
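If you need to adapt the preprocessing, that library's tensor-preparation code is the natural place to do it. Purely as an illustration of the kind of change that may be required (the normalization scheme, joint count, and root-joint index below are assumptions; whatever you implement has to match what your model saw during TAO training):

```cpp
// Hypothetical sketch of a per-frame keypoint normalization step; this is NOT
// the shipped nvdspreprocess_lib code, only an example of the kind of
// customization that may be needed to match the TAO training preprocessing.
#include <vector>

constexpr int kNumJoints = 34;  // assumed joint count

// Center every joint on joint 0 (assumed to be the root/pelvis joint) so the
// coordinates are pose-relative rather than image-absolute.
void normalize_keypoints(std::vector<float>& xyz /* size kNumJoints * 3 */) {
    const float cx = xyz[0], cy = xyz[1], cz = xyz[2];
    for (int j = 0; j < kNumJoints; ++j) {
        xyz[3 * j + 0] -= cx;
        xyz[3 * j + 1] -= cy;
        xyz[3 * j + 2] -= cz;
    }
}
```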
