Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.1
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Hi, I am running the DeepStream pose classification pipeline on a Jetson device. I have tested it on a large number of videos, and for every video the prediction is always "Walking", irrespective of the person performing any other activity (for example, sitting or standing).
I have also tried changing the value of frames-sequence-length in config_preprocess_bodypose_classification.txt, from the minimum value of 3 up to the maximum value of 300 in steps of 10. Even after testing these different sequence lengths, the prediction is always walking.

```
[user-configs]
# actual sequence length of frames
frames-sequence-length=300
```
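For reference, this is roughly how I automated the sweep (a minimal sketch; the app command and video path are placeholders for my actual setup, not the real sample-app invocation):

```python
import re
import subprocess
from pathlib import Path

config = Path("config_preprocess_bodypose_classification.txt")
# Placeholder command line; substitute the actual pose-classification app run.
app_cmd = ["./deepstream-pose-classification-app", "test_video.mp4"]

for seq_len in range(3, 301, 10):
    text = config.read_text()
    # Rewrite the frames-sequence-length value in place for each run.
    text = re.sub(r"frames-sequence-length=\d+",
                  f"frames-sequence-length={seq_len}", text)
    config.write_text(text)
    subprocess.run(app_cmd, check=False)  # then inspect the predictions
```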
I am attaching a snapshot of a video in which the person is sitting for the entire duration. Even for this video, the prediction is walking, with around 70% confidence.
Q2: When we have clearly set frames-sequence-length to N frames (N can be 30, 60, 90, etc., up to a maximum of 300), why are predictions happening for every frame instead of once per set of frames, as specified by frames-sequence-length? The sketch after this question shows the behavior I expected.
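This is a minimal sketch of that expected behavior, with hypothetical stand-ins (pose_stream, classify) rather than the actual plugin code: one prediction per window of frames-sequence-length frames, not one per frame.

```python
import random

SEQ_LEN = 300  # frames-sequence-length from the preprocess config

def pose_stream(n_frames=900, n_joints=34):
    """Stand-in for per-frame 3D key points from the pose model."""
    for _ in range(n_frames):
        yield [(random.random(), random.random(), random.random())
               for _ in range(n_joints)]

def classify(window):
    """Stand-in for the pose-classification model; returns a dummy label."""
    return f"prediction over {len(window)} frames"

buffer = []
for keypoints in pose_stream():
    buffer.append(keypoints)
    if len(buffer) == SEQ_LEN:   # one prediction per full window...
        print(classify(buffer))  # ...rather than one per frame
        buffer.clear()
```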
The pre-trained BodyPose3DNet | NVIDIA NGC model is not precise enough; sometimes it does not output correct pose key points. The Pose Classification | NVIDIA NGC model needs successive pose key points to do the inferencing, so unstable key points may cause wrong inferencing results from the pose classifier model. You may re-train the BodyPose3DNet | NVIDIA NGC model or use another 3D body-pose model to get more precise and stable output.
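As an illustration only (this is not part of the shipped pipeline), one generic way to reduce key-point jitter before a sequence classifier is temporal smoothing, for example an exponential moving average over the 3D key points:

```python
import numpy as np

def smooth_keypoints(frames, alpha=0.3):
    """Exponential moving average over per-frame 3D key points.

    frames: (T, J, 3) array of T frames with J joints each.
    Lower alpha means heavier smoothing but more lag.
    """
    smoothed = np.empty_like(frames, dtype=float)
    smoothed[0] = frames[0]
    for t in range(1, len(frames)):
        smoothed[t] = alpha * frames[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

# Example with dummy data: 300 frames, 34 joints (as in BodyPose3DNet output)
noisy = np.random.rand(300, 34, 3)
stable = smooth_keypoints(noisy, alpha=0.2)
```

Smoothing like this trades responsiveness for stability; it is meant only to show why unstable key points degrade a classifier that depends on successive frames.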
Hi @Fiona.Chen, thanks for the reply. Yes, I agree that the BodyPose3DNet output is quite unstable, so I have gone through all the TAO Toolkit resources for BodyPose3DNet, but I did not find any training scripts for it.
Could you please share more information on how we can train it?
There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one.
Thanks
The suggestion is to find another 3D body-pose model. The training scripts for BodyPose3DNet | NVIDIA NGC are not open source at this time.