Please provide complete information as applicable to your setup.
• Hardware Platform (GPU) --------> GPU
• DeepStream Version --------> 6.1.1
• TensorRT Version --------> 8.4
• NVIDIA GPU Driver Version --------> 525
• Issue Type (questions, new requirements, bugs) --------> Question (Python as the programming language)
My question: I have a face detector, and after it I want to run a head_pose model that has three output layers (pitch, yaw, roll). I want to access the three values from those three output layers.
What does the config file for this head_pose model look like?
Will head_pose work as a detector after the face detector, or will it work as a classifier?
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
What do you mean by the config file? Do you mean the gst-nvinfer configuration file? If so, the parameters in the configuration file are decided by the model. Please make sure you know all the necessary parameters listed in DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums.
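For reference, a minimal sketch of what such a secondary gst-nvinfer configuration might look like is below. All file names and IDs are placeholders, and the exact set of parameters depends on your model:

```ini
[property]
gpu-id=0
# Placeholder model files; substitute your own head_pose model.
onnx-file=head_pose.onnx
model-engine-file=head_pose.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
# Run as a secondary GIE on objects from the face detector
# (assuming the PGIE uses gie-unique-id=1).
process-mode=2
operate-on-gie-id=1
gie-unique-id=2
# "other" network type: gst-nvinfer attaches the raw output tensors
# instead of parsing them itself.
network-type=100
output-tensor-meta=1
```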
“Detectors” are models that output bboxes.
It seems head_pose will not output bboxes. The head_pose model can be used as a secondary GIE (SGIE) after the PGIE (e.g. a face detection model). By DeepStream’s definition, head_pose is an “other” model: the “network-type” of gst-nvinfer should be 100 (which means “other”), “output-tensor-meta” should be enabled, and the model output parsing should be customized in a probe function on the nvinfer src pad. We have a “faciallandmarks” model sample, which is a similar case to your head_pose model. You can refer to the sample deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
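To illustrate the parsing step: with output-tensor-meta enabled, the src pad probe receives the raw output tensors via NvDsInferTensorMeta (through pyds in Python), and you read one value from each of the three layers. The sketch below models each layer as a flat list of floats keyed by layer name so it is runnable on its own; the layer names "pitch", "yaw", and "roll" are assumptions, so use the names your model actually reports:

```python
# Hedged sketch: extracting pitch/yaw/roll from three output layers.
# In a real DeepStream app these buffers come from NvDsInferTensorMeta
# attached by gst-nvinfer when output-tensor-meta=1; here each layer is
# modeled as a flat list of floats keyed by a (hypothetical) layer name.

def parse_head_pose(output_layers):
    """Return (pitch, yaw, roll) from a dict of layer name -> flat floats."""
    # Each head_pose output layer is assumed to hold a single angle value.
    pitch = output_layers["pitch"][0]
    yaw = output_layers["yaw"][0]
    roll = output_layers["roll"][0]
    return pitch, yaw, roll

# Example with dummy one-element tensors:
angles = parse_head_pose({"pitch": [5.2], "yaw": [-12.0], "roll": [1.3]})
print(angles)  # (5.2, -12.0, 1.3)
```

In the actual probe you would iterate the frame's object metadata, find the tensor meta produced by the head_pose SGIE (matching its gie-unique-id), and feed its layer buffers into a function like this.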
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.