Facial Landmarks Estimation with DeepStream (NVIDIA)

Hello,
I am working on an AGX and trying to get the Facial Landmarks Estimation model from NGC working with DeepStream.
My primary GIE uses the FaceNet model provided by NGC, and I am trying to set up the secondary GIE with the following configuration:

[property]
gpu-id=0
net-scale-factor=0.0039215686274
#force-implicit-batch-dim=1
tlt-model-key=nvidia_tlt
tlt-encoded-model=/model.etlt
uff-input-blob-name=input_1
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
#infer-dims=3;40;160
#input-object-min-width=30
#input-object-min-height=30
input-dims=1;80;80;0
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
classifier-async-mode=0
classifier-threshold=0.51
process-mode=2
output-tensor-meta=1
#scaling-filter=1
#scaling-compute-hw=0

However, it fails to build the engine. Are there any other parameters I should focus on?
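For reference, the primary GIE it chains from uses the FaceNet (FaceDetect) detector from NGC. Its config looks roughly like this; the model path is a placeholder and the dims/blob names are my reading of the FaceDetect model card, so treat it as a sketch:

[property]
gpu-id=0
net-scale-factor=0.0039215686274
tlt-model-key=nvidia_tlt
# placeholder path to the FaceDetect (facenet) .etlt from NGC
tlt-encoded-model=/facenet.etlt
uff-input-blob-name=input_1
# FaceDetect is detectnet_v2-based: 736x416 RGB input, one class (face)
infer-dims=3;416;736
network-type=0
num-detected-classes=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
batch-size=1
network-mode=2
# primary inference; the landmarks secondary refers to this via operate-on-gie-id=1
process-mode=1
gie-unique-id=1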

Hi,

Could you share your environment with us first?

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce it.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or sample application, and a description of the function.)

Also, could you share what kind of TensorRT error you are seeing?
Thanks.

Hardware Platform: Jetson AGX Xavier
DeepStream Version: 5.0
JetPack Version: 4.4
TensorRT Version: 7.1.3

I would like to know the correct configuration for the facial landmarks secondary model, since building the engine is failing.

Hi,

Could you also share the error message with us?
The cause of the failure can vary with the use case.

Thanks.

ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.818591012 17494 0x8385150 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 2]: build engine file failed
Segmentation fault (core dumped)

Hi,

Could you share the complete log with us?

It seems that TensorRT hit an issue when opening the given UFF file.
Did you update the path accordingly?
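
If the path and key look right, it may also help to compare against the secondary config used in NVIDIA's deepstream_tao_apps faciallandmark sample. A rough sketch of its shape is below; the input/output blob names and the path are assumptions to verify against the model card for your release:

[property]
gpu-id=0
# the landmarks model expects an 80x80 grayscale face crop
model-color-format=2
# normalization: take net-scale-factor/offsets from the model card
net-scale-factor=1.0
tlt-model-key=nvidia_tlt
# placeholder path to the facial landmarks .etlt
tlt-encoded-model=/models/faciallandmarks.etlt
batch-size=1
network-mode=2
# blob names as used in the deepstream_tao_apps sample (verify for your version)
infer-dims=1;80;80
uff-input-blob-name=input_face_images
output-blob-names=softargmax;softargmax:1
# run as a secondary on the faces detected by the primary GIE
process-mode=2
operate-on-gie-id=1
gie-unique-id=2
# landmarks are neither detector nor classifier output: expose raw tensors
network-type=100
output-tensor-meta=1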

Thanks.

It works now. I followed the steps mentioned in
https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/tlt_cv_inf_pipeline/quick_start_scripts.html#tlt-cv-quick-start-scripts
and created the engine.
Thank you!
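
In case it helps anyone else: once the quick start scripts have generated the engine, the secondary config can load it directly through model-engine-file, so DeepStream does not need to parse the .etlt at runtime. A minimal sketch (the engine path is illustrative; the rest of the secondary config keys stay the same):

[property]
gpu-id=0
# reuse the engine produced by the TLT CV quick start scripts
model-engine-file=/path/to/faciallandmarks.engine
batch-size=1
network-mode=2
process-mode=2
operate-on-gie-id=1
gie-unique-id=2
output-tensor-meta=1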

Hi @melissa1,
I am facing a similar issue while working on this. If possible, could you please share the config file you used to set up the facial landmarks model?

Thanks