Hi, the Pose Classification model (in the TAO toolkit) seems to be intended for humans only. However, if I use the 17-keypoint COCO graph_layout, can I use it for animals? If not, what is your recommendation for an animal action recognition model that I can use as an SGIE within a DeepStream pipeline?
Also, does a DeepLabCut-generated model work with DeepStream?
Hi Morganh, thank you for your reply and confirmation.
Can you please help me locate the st_gcn.py file within the TAO launcher starter kit? I can see that it is under the “models” folder in v4; however, in TAO Toolkit Getting Started v5, it is not there.
Within the folder /usr/local/lib/python3.8/dist-packages, I cannot locate "nvidia_tao_pytorch".
Is there anything we are missing during installation?
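One quick way to check whether the nvidia_tao_pytorch package is visible to the interpreter at all (a minimal sketch; the package name is taken from the path above):

```python
import importlib.util

# Look up the module spec without actually importing it;
# find_spec returns None if the package is not installed
# in the current environment.
spec = importlib.util.find_spec("nvidia_tao_pytorch")
if spec is None:
    print("nvidia_tao_pytorch not found -- check the container / wheel install")
else:
    print("found at:", spec.origin)
```

If this prints "not found" inside the container, the package was never installed there, which would explain the missing folder.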
We also tested the following to make sure we are using the same Docker image…
One last question… once we modify a file and run a sample Jupyter notebook located within getting_started_v5.0.0, will it pick up the update applied to this .py file?
After modification, launch the notebook (assuming the notebook is located at your/local/folder).
root@851cd3d21645:/opt/nvidia# jupyter notebook --ip 0.0.0.0 --allow-root
We want to prepare an animal pose classification model to identify a cow walking.
Our dataset in CVAT contains 10-second videos with around 300 frames for each category.
When we export it, we get a COCO JSON file. How do we pass the exported COCO format to this model? Does the category become the label_map: for our dataset? Are there any other changes that need to be applied to the JSON file before training?
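To illustrate the label_map question: a standard COCO export stores the category names under the top-level "categories" key, and those names could be mapped to class indices. This is only a sketch of that mapping; the exact keys expected by the TAO PoseClassification spec file may differ:

```python
import json

# Hypothetical fragment of an exported CVAT/COCO annotation file.
coco = {
    "categories": [
        {"id": 1, "name": "walking"},
        {"id": 2, "name": "grazing"},
    ],
    "annotations": [],
}

# Build a label_map: action name -> zero-based class index.
label_map = {c["name"]: i for i, c in enumerate(coco["categories"])}
print(label_map)  # {'walking': 0, 'grazing': 1}
```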
Also, the PoseClassification model was originally trained on humans, and we are doing transfer learning on animals. For inference, I suppose we will need two models: the first for animal pose estimation and the second for classification. For the initial pose estimation step, which model is recommended?
Also, is there a readily available pipeline for object detection (PGIE) > pose estimation (SGIE1) > pose classification (SGIE2)? I hope my understanding of the pipeline is correct. Does this pipeline make sense?
There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks
The JSON file is expected to contain the skeleton's keypoints.
In the current pose classification network, the workflow first uses the DeepStream pose estimation app (https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/deepstream-bodypose-3d) to generate pose data. It then loads the pose data from the deepstream-bodypose-3d JSON file, extracts the pose sequences, applies normalization and preprocessing, and saves the resulting skeleton arrays as NumPy files.
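The loading step described above can be sketched roughly as follows. The JSON field names here are illustrative, not the exact deepstream-bodypose-3d schema; the sketch only mirrors the overall flow of load, extract, normalize, and save:

```python
import numpy as np

# Illustrative pose data: 3 frames, 34 joints, (x, y, z) per joint.
# The real deepstream-bodypose-3d JSON has its own schema; "pose25d"
# is used here only as a placeholder field name.
frames = [{"pose25d": np.random.rand(34, 3).tolist()} for _ in range(3)]

# Stack per-frame keypoints into a (T, V, C) array:
# T frames, V joints, C coordinates.
seq = np.array([f["pose25d"] for f in frames], dtype=np.float32)

# Simple normalization: translate every frame so that joint 0
# (treated as the root joint) sits at the origin.
seq -= seq[:, 0:1, :]

# Save the skeleton sequence for the classifier's dataloader.
np.save("skeleton_sequence.npy", seq)
print(seq.shape)  # (3, 34, 3)
```

The actual preprocessing in the TAO dataset converter is more involved (sequence segmentation, padding, and its own normalization scheme), so treat this only as the shape of the idea.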