Any custom parser example for Facial Landmark Estimator?

With the recently announced Facial Landmark Estimator (FPENet) model card, I would like to experiment with the model in DeepStream apps. Although the landmark estimator has a reference sample using the TLT-CV inference pipeline, I don't want to use the TLT-CV API inside Docker; instead, I want to use the model directly, cascaded behind a PGIE face detector in a DeepStream pipeline.

Since the deepstream_lpr_app sample ships a custom parser named nvinfer_custom_lpr_parser.cpp, is there any similar sample for parsing the Facial Landmark Estimator output so it can run directly in DeepStream apps? Suggestions are much appreciated.
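To make the cascade concrete, this is the kind of SGIE setup I have in mind. This is only a rough nvinfer config sketch; the model file name, encode key, gie ids, and input dims are my assumptions from the model card, not from any working sample:

[property]
gpu-id=0
# assumed file name and encode key for the exported model
tlt-encoded-model=fpenet.etlt
tlt-model-key=nvidia_tlt
# network-type 100 ("other") skips nvinfer's built-in parsing;
# output-tensor-meta=1 attaches the raw output tensors as user meta
network-type=100
output-tensor-meta=1
# secondary mode, operating on objects from the face-detector PGIE
# (gie-id 1 is an assumption; match your PGIE's gie-unique-id)
process-mode=2
operate-on-gie-id=1
batch-size=16
# assumed per the model card: FPENet takes a 1x80x80 grayscale face crop
model-color-format=2
infer-dims=1;80;80
net-scale-factor=1.0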


For fpenet, please use
https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/facial_landmarks_estimation.html#inference-of-the-model
and
https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/facial_landmarks_estimation.html#deploying-to-the-tlt-cv-inference-pipeline

There is no sample application in DeepStream for FPENet inference.
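If you want to experiment on your own, the usual route is to enable output-tensor-meta on the SGIE and read the raw tensors in a pad probe downstream. A minimal sketch follows, assuming an 80-landmark output and a layer name like "softargmax"; both are assumptions, so check the actual layer names of your exported model in the nvinfer startup logs:

/* Rough sketch: read FPENet raw tensor output from a pad probe placed
 * after the SGIE. Assumes output-tensor-meta=1 in the SGIE config. */
#include <cstring>
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

#define FPENET_NUM_LANDMARKS 80  /* assumed: 80-point variant of FPENet */

static GstPadProbeReturn
sgie_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  NvDsBatchMeta *batch_meta =
      gst_buffer_get_nvds_batch_meta (GST_BUFFER (info->data));

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* The SGIE attaches its raw output tensors as user meta per object. */
      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user;
           l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;

        NvDsInferTensorMeta *tmeta =
            (NvDsInferTensorMeta *) user_meta->user_meta_data;

        for (unsigned int i = 0; i < tmeta->num_output_layers; i++) {
          NvDsInferLayerInfo *layer = &tmeta->output_layers_info[i];
          /* Assumed keypoint layer: 80 (x, y) pairs in network-input
           * (80x80 face crop) coordinates. */
          if (strcmp (layer->layerName, "softargmax") != 0)
            continue;

          float *coords = (float *) tmeta->out_buf_ptrs_host[i];
          for (int k = 0; k < FPENET_NUM_LANDMARKS; k++) {
            float x = coords[2 * k];
            float y = coords[2 * k + 1];
            /* Scale (x, y) from the network input back to the face bbox
             * in obj_meta->rect_params, then draw via display meta or
             * attach your own user meta. */
            (void) x; (void) y;
          }
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}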

I see in the examples the use of the jrt API:

#include "jrt/api/TLTCVAPI.hpp"
#include "jrt/vision/Payloads.hpp"
#include "jrt/vision/FaceDetectPayload.hpp"
#include "jrt/vision/FacialLandmarksPayload.hpp"
#include "jrt/vision/Requests.hpp"

Where can I find the .hpp files so I can build my own parser function to run with DeepStream apps?

Okay, I found the files inside my Docker folder. But can I use jrt outside of Docker as a standalone SDK on Jetson?

Officially, TLT provides the API only inside the client docker.
https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/tlt_cv_inf_pipeline/quick_start_scripts.html#tlt-cv-quick-start-scripts

@neuroSparK Did you find a way to deploy the Facial Landmark Estimator with DeepStream apps? I would like to use facial landmarks together with pose estimation.