Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - Jetson
• DeepStream Version - 6.2
• JetPack Version (valid for Jetson only) - 5.1.1
• TensorRT Version - 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) - questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I am training a custom pose classification model for 4 classes (A, B, C, D) using the TAO Toolkit pose classification network. We achieved an overall accuracy of about 85% on test data, and per-class accuracy is above 80% for each class. (We generated the JSON data using the DeepStream bodypose-3d app, converted it into NumPy files with `tao pose_classification dataset_convert` using 25D, and finally merged all the NumPy files into a single NumPy file.)
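For reference, the merge step above is just a concatenation of the per-video arrays. A minimal sketch of what we do, assuming each `dataset_convert` output stacks samples along axis 0 (the file names and the exact array layout are our assumptions, not documented behavior):

```python
import numpy as np

def merge_npy(paths, out_path):
    """Concatenate per-video sequence arrays along the sample axis
    and save the merged result. Assumes every input array shares the
    same shape except for axis 0 (the sample count)."""
    arrays = [np.load(p) for p in paths]
    merged = np.concatenate(arrays, axis=0)
    np.save(out_path, merged)
    return merged.shape
```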
After that, we converted the .tlt model to ONNX and deployed it in the DeepStream pose classification pipeline available in deepstream_tao_apps. After deploying it, keeping all parameters the same as in the default config files, we end up getting mispredictions for all test cases (the videos were the same ones used for the test dataset in TAO).
Q - What is going wrong in the overall process that makes the model mispredict completely?
Q - In `tao pose_classification dataset_convert`, do we need to convert to 3D or 25D when generating the NumPy files?
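For context, this is how we sanity-check the converted arrays. The (N, C, T, V, M) layout assumed here (samples, channels, frames, joints, persons, as in ST-GCN-style models) is our assumption about the `dataset_convert` output, not something we have confirmed against the docs:

```python
import numpy as np

def describe_sequences(path):
    """Report the dimensions of a converted skeleton-sequence array.

    Assumes an ST-GCN-style (N, C, T, V, M) layout: samples, channels,
    frames, joints, persons. This layout is an assumption on our part.
    """
    arr = np.load(path)
    n, c, t, v, m = arr.shape
    return {"samples": n, "channels": c, "frames": t,
            "joints": v, "persons": m}
```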
Q - In the deepstream_tao_apps pose classification pipeline, the nvinferserver config `config_infer_secondary_bodypose3dnet.yml` produces the output tensors listed under `outputs: [ ]`. How do we know which one is consumed by the next model for pose classification?
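For illustration only, this is the kind of config fragment we expected to control the selection. This is a hypothetical sketch, not the shipped config: the tensor name `pose25d` is our guess at one of bodypose3dnet's output names, and the exact key layout may differ from the actual `.yml`:

```yaml
# Hypothetical fragment -- "pose25d" is an assumed tensor name,
# not confirmed from the real config_infer_secondary_bodypose3dnet.yml.
property:
  output-blob-names: pose25d
```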