How to generate the inference_sample.json file and the bbox annotations for FPENet?

Hello. There is an inference_sample.json in the directory tao-getting-started_v4.0.1/notebooks/tao_launcher_starter_kit/fpenet/specs.

However, I could not find out how to generate inference_sample.json for FPENet.

Also, how are the bbox annotations in inference_sample.json generated?

        "filename": "/workspace/tao-experiments/fpenet/afw/xxx.png",
        "class": "image",
        "annotations": [
                "face_tight_bboxx": 672.10368330073106,
                "face_tight_bboxy": 225.97163120567382,
                "tool-version": "1.0",
                "face_tight_bboxwidth": 311.35730960707053,
                "face_tight_bboxheight": 270.25550579091134,
                "Occlusionx": 0.0,
                "class": "FaceBbox"

Thank you for your help in advance.

Please refer to Facial Landmarks Estimation - NVIDIA Docs
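To illustrate the format, here is a minimal sketch of a script that writes an inference_sample.json from face bounding boxes. The detections list, the image paths, and the coordinate values below are hypothetical placeholders; in practice the boxes would come from whatever face detector you run upstream of FPENet.

```python
import json

# Hypothetical face detections: (image_path, x, y, width, height),
# e.g. produced by any upstream face detector.
detections = [
    ("/workspace/tao-experiments/fpenet/afw/image1.png",
     672.10368330073106, 225.97163120567382,
     311.35730960707053, 270.25550579091134),
]

samples = []
for path, x, y, w, h in detections:
    # One entry per image, with the face box stored as a
    # "FaceBbox" annotation, matching the sample spec layout.
    samples.append({
        "filename": path,
        "class": "image",
        "annotations": [
            {
                "tool-version": "1.0",
                "face_tight_bboxx": x,
                "face_tight_bboxy": y,
                "face_tight_bboxwidth": w,
                "face_tight_bboxheight": h,
                "class": "FaceBbox",
            }
        ],
    })

with open("inference_sample.json", "w") as f:
    json.dump(samples, f, indent=4)
```

This only reproduces the fields visible in the sample file above; check the Facial Landmarks Estimation page in the NVIDIA docs for the authoritative schema.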

Thank you, @Morganh. Could FPENet be used for facial landmark auto-labeling like the picture below?

As shown in the picture above, the mouth, eyes, and nose are grouped by different colors and labeled with unique numbers.

What should I do to output this result?

Currently, TAO does not show this kind of layout.

Thank you for your reply, @Morganh.

In the description of Facial Landmarks Estimation, it states that FPENet (Fiducial Points Estimator Network) is generally used in conjunction with a face detector and the output is commonly used for face alignment, head pose estimation, emotion detection, eye blink detection, gaze estimation, among others.

However, it seems that the inference result of FPENet doesn't show a person's eye gaze point?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

FPENet doesn't output a person's eye gaze point; GazeNet will do that.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.