Mispredictions with custom trained PoseClassificationNet using deepstream-tao-apps

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Jetson
• DeepStream Version - 6.2
• JetPack Version (valid for Jetson only) - 5.1.1
• TensorRT Version - 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) - questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am training a custom Pose Classification model with the TAO Toolkit for 4 classes (A, B, C, D). We achieved an overall accuracy of about 85% on the test data, and the per-class accuracy is above 80% for every class. (We generated the JSON data with the DeepStream bodypose-3d app, converted it into numpy files with tao pose_classification dataset_convert using 25D, and finally merged all the numpy files into a single numpy file.)
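The final merge step described above can be sketched in numpy. This is a minimal illustration, not the TAO tooling itself; the function name and file paths are hypothetical, and it assumes every per-video array from dataset_convert shares the same skeleton dimensions so they can be stacked along the sample axis:

```python
import numpy as np

def merge_pose_arrays(paths):
    """Concatenate per-video skeleton arrays along the sample axis.

    `paths` is a list of .npy files produced per video; all arrays
    must agree on every axis except the first (number of samples).
    """
    arrays = [np.load(p) for p in paths]
    skeleton_dims = arrays[0].shape[1:]
    for a in arrays:
        # Guard against mixing 3D and 25D exports in one merge.
        assert a.shape[1:] == skeleton_dims, "inconsistent skeleton dims"
    return np.concatenate(arrays, axis=0)
```

If one of the input files was converted with a different mode (3D vs 25D), the shape check above would catch it before training.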

After that we converted the .tlt model to ONNX and deployed it in the DeepStream Pose Classification pipeline available in deepstream_tao_apps. After deploying it, keeping all parameters the same as in the default config files, we get mispredictions for all test cases (the videos were the same ones used for the test dataset in TAO).
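A common cause of "accurate in TAO, wrong in DeepStream" is a mismatch between the data layout the classifier was trained on and what the pipeline feeds it. As a quick sanity check, the training arrays can be verified against the (N, C, T, V, M) convention used by ST-GCN-style pose classifiers. This sketch is an assumption, not the official preprocessing: the expected channel count of 3 (x, y, z per joint) and the 5-axis layout may differ for your export mode, so adjust accordingly:

```python
import numpy as np

# Assumed layout: (samples, channels, frames, joints, persons).
# EXPECTED_CHANNELS = 3 assumes x, y, z per joint; verify against
# your dataset_convert mode (3D vs 25D) before relying on it.
EXPECTED_CHANNELS = 3

def check_layout(arr):
    n, c, t, v, m = arr.shape  # raises ValueError if rank != 5
    assert c == EXPECTED_CHANNELS, f"expected {EXPECTED_CHANNELS} channels, got {c}"
    return {"samples": n, "channels": c, "frames": t, "joints": v, "persons": m}
```

Running this over both the training numpy file and a dump of the tensor the pipeline actually passes to the classifier would quickly show whether the two disagree.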

Q - What is going wrong in the overall process that makes the model mispredict completely?

Q - In tao pose_classification dataset_convert, do we need to convert to 3D or 25D when generating the numpy files?

Q - In the deepstream_tao_apps Pose Classification pipeline, the nvinferserver config config_infer_secondary_bodypose3dnet.yml declares the outputs:

outputs: [
  {name: "pose2d"},
  {name: "pose2d_org_img"},
  {name: "pose25d"},
  {name: "pose3d"}
]

How do we know which one is used by the next model for pose classification?

Do you use bodypose-3d output to train the Pose Classification model?

Please consult TAO toolkit forum. Latest Intelligent Video Analytics/TAO Toolkit topics - NVIDIA Developer Forums

It depends on how you trained the model.

Yes, we used the bodypose-3d output to train the Pose Classification model. Our model achieves an accuracy greater than 90% on the test dataset. However, when the same test/train videos are used to validate the model integrated into the DeepStream Pose Classification pipeline on an edge device, the model gives completely false results.

Q1. Could you tell us if we are missing anything while integrating the model?
Q2. Could you tell us which parameters need to be configured to achieve the same performance we get when testing the PoseClassificationNet model standalone?

Looking forward to your reply.