Pose classification training for animals

Hi, the Pose Classification network (in the TAO Toolkit) seems to be intended for humans only. However, if I use the 17-point COCO graph_layout, can I use it for animals? If not, what animal action recognition model would you recommend that I can use as an SGIE within a DeepStream pipeline?

Also, does a DeepLabCut-generated model work with DeepStream?

Thanks.

• Hardware - Jetson
• Network Type - Pose Classification (action)

Please check whether https://github.com/NVIDIA/tao_pytorch_backend/blob/e5010af08121404dfb696152248467eee85ab3a7/nvidia_tao_pytorch/cv/pose_classification/model/st_gcn.py#L246-L254 matches your animal dataset. You can also modify the code to add a custom layout to make it compatible with the animal dataset.
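For reference, here is a minimal, self-contained sketch of what a custom graph layout involves, in the style of the ST-GCN `Graph` class linked above. The 17-joint skeleton edges below are illustrative placeholders, not an actual animal keypoint definition; you would replace `neighbor_link` with the edges of your own animal skeleton (e.g. an AP-10K-style layout) and splice the result into the layout branch of st_gcn.py.

```python
import numpy as np

def build_custom_layout(num_node=17):
    """Build an adjacency matrix for a hypothetical 17-joint animal skeleton.

    The edge list below is an assumption for illustration only; it must be
    replaced with the real joint connectivity of your dataset.
    """
    # Self-connections: every joint is linked to itself.
    self_link = [(i, i) for i in range(num_node)]
    # Placeholder skeleton edges (head, front legs, hind legs).
    neighbor_link = [
        (0, 1), (0, 2), (1, 3), (2, 4),
        (5, 7), (7, 9), (6, 8), (8, 10),
        (11, 13), (13, 15), (12, 14), (14, 16),
    ]
    # Binary, symmetric adjacency matrix consumed by the spatial
    # graph convolution.
    adjacency = np.zeros((num_node, num_node), dtype=np.float32)
    for i, j in self_link + neighbor_link:
        adjacency[i, j] = 1.0
        adjacency[j, i] = 1.0
    return adjacency

A = build_custom_layout()
print(A.shape)  # (17, 17)
```

The key point is that the layout only defines the number of joints and which pairs are connected; the rest of the ST-GCN architecture is unchanged.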

Hi Morganh, thank you for your reply and confirmation.

Can you please help me locate this st_gcn.py file within the TAO launcher starter kit? I can see that it is under the “models” folder in v4. However, in TAO Toolkit Getting Started v5, it is not there.

Can you please tell me its location in v5?

It is located at /usr/local/lib/python3.8/dist-packages/nvidia_tao_pytorch/cv/pose_classification/model/st_gcn.py in nvcr.io/nvidia/tao/tao-toolkit:5.0.0-pyt.

Thank you Morganh,

We are using an Amazon EC2 instance and followed the article below to install the TAO Toolkit.

https://docs.nvidia.com/tao/tao-toolkit/text/running_in_cloud/running_tao_toolkit_on_aws.html

Within the folder /usr/local/lib/python3.8/dist-packages, I cannot locate "nvidia_tao_pytorch".
Is there anything we are missing during installation?

We also tested the following to make sure we are using the same Docker image:

$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.0.0-pyt /bin/bash
Then,
root@1fd116d14803:/opt/nvidia/tools# find /usr/ |grep st_gcn.py
/usr/local/lib/python3.8/dist-packages/nvidia_tao_pytorch/cv/pose_classification/model/st_gcn.py

Thank you Morgan,

We managed to load the Docker image and can see the file.

However, when we open the file, it looks like this:

Is it encrypted? Or do we have to access it differently?

Yes, it is encrypted. It is actually the same as https://github.com/NVIDIA/tao_pytorch_backend/blob/e5010af08121404dfb696152248467eee85ab3a7/nvidia_tao_pytorch/cv/pose_classification/model/st_gcn.py .
You can back up the encrypted file:
$ mv /usr/local/lib/python3.8/dist-packages/nvidia_tao_pytorch/cv/pose_classification/model/st_gcn.py /usr/local/lib/python3.8/dist-packages/nvidia_tao_pytorch/cv/pose_classification/model/st_gcn.py.bak
Then copy the source code into a new file:
$ vim /usr/local/lib/python3.8/dist-packages/nvidia_tao_pytorch/cv/pose_classification/model/st_gcn.py

Thank you so much.

Just one last question: once we modify the file and run a sample Jupyter notebook located within getting_started_v5.0.0, will it pick up the update applied to this .py file?

Please start the Docker container as follows.

$ docker run --runtime=nvidia -it -p 8888:8888 -v /your/local/folder:/docker/folder --rm nvcr.io/nvidia/tao/tao-toolkit:5.0.0-pyt /bin/bash

After modification, launch the notebook (assuming the notebook is located in your/local/folder).
root@851cd3d21645:/opt/nvidia# jupyter notebook --ip 0.0.0.0 --allow-root

We want to train an animal pose classification model to identify a cow walking.

Our dataset in CVAT contains 10-second videos with around 300 frames for each category.

When we export it, we get a COCO JSON file. How do we pass the exported COCO format to this model? Will the categories become the label_map for our dataset? Are there any other changes that need to be applied to the JSON file before training?

Also, the PoseClassification model was originally trained on humans, and we are doing transfer learning on animals. For inference, I suppose we will need two models: the first for animal pose detection and the second for classification. For the initial pose detection, which model is recommended?

Also, is there a readily available pipeline for object detection (PGIE) > pose estimation (SGIE1) > pose classification (SGIE2)? I hope my understanding of the pipeline is correct. Does this pipeline make sense?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

The JSON file is expected to contain the skeleton's keypoints.
In the current pose classification workflow, the DeepStream pose estimation app (https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/deepstream-bodypose-3d) is first used to generate pose data; the network then loads the pose data from the deepstream-bodypose-3d JSON file, extracts the pose sequences, applies normalization and preprocessing, and saves the resulting skeleton arrays as NumPy files.

If your JSON file already contains the skeleton's keypoints, that will ease your work. If not, take a look at https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/deepstream-bodypose-3d, which leverages PeopleNet and BodyPose3DNet to generate the JSON file.
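To make the preprocessing step concrete, here is a minimal sketch of packing per-frame keypoints into the (C, T, V, M) skeleton array layout that ST-GCN-style pose classifiers consume (C channels x/y/confidence, T frames, V joints, M bodies). The input structure and the centering step here are assumptions for illustration; the real deepstream-bodypose-3d JSON schema and the TAO preprocessing differ and must be mapped accordingly.

```python
import numpy as np

def frames_to_array(frames, num_joints=17, seq_len=300):
    """Pack per-frame keypoints into a (3, seq_len, num_joints, 1) array.

    frames: list of frames, each a list of (x, y, conf) keypoint tuples.
    The shape and normalization are illustrative, not the exact TAO code.
    """
    data = np.zeros((3, seq_len, num_joints, 1), dtype=np.float32)
    for t, keypoints in enumerate(frames[:seq_len]):
        for v, (x, y, conf) in enumerate(keypoints[:num_joints]):
            data[0, t, v, 0] = x      # x coordinate channel
            data[1, t, v, 0] = y      # y coordinate channel
            data[2, t, v, 0] = conf   # confidence channel
    # Illustrative normalization: center x/y on their overall mean.
    data[0] -= data[0].mean()
    data[1] -= data[1].mean()
    return data

# Example: 300 frames of 17 dummy keypoints -> one training sample,
# saved as a NumPy file as the preprocessing step above describes.
dummy = [[(1.0, 2.0, 0.9)] * 17 for _ in range(300)]
arr = frames_to_array(dummy)
np.save("sample_skeleton.npy", arr)
print(arr.shape)  # (3, 300, 17, 1)
```

If your CVAT/COCO export already has per-frame keypoints, the conversion is mostly a matter of reading them out in frame order and filling an array like this one.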


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.