buildEngineFile() failed when running the NVIDIA FaceNet model on DeepStream

• Hardware Platform Jetson AGX Orin Developer Kit
• DeepStream Version 7.1
• JetPack Version 6.1
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type bug

I have downloaded this model (https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/facenet) and tried to run it with deepstream-app.

My app configuration:
dsapp_config.txt (355 Bytes)
config_infer_primary.txt (3.3 KB)
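For readers without the attachments: the inference config follows the usual TAO .etlt pattern. The sketch below is illustrative only; the paths, label file, and model key are placeholders, and the dimensions and blob names are taken from the FaceNet model card rather than from my exact file.

```
[property]
gpu-id=0
# FaceNet is a DetectNet_v2-style detector shipped as an encrypted .etlt;
# these keys send nvinfer down the TLT/UFF engine-build path.
# Paths and the model key below are placeholders.
tlt-encoded-model=model.etlt
tlt-model-key=nvidia_tlt
labelfile-path=labels.txt
# 3;416;736 = channels;height;width, per the FaceNet model card
infer-dims=3;416;736
uff-input-blob-name=input_1
uff-input-order=0
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
batch-size=1
network-mode=2
num-detected-classes=1
network-type=0
```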

The error log is:

** WARN: <parse_source:675>: Unknown key 'rtsp-reconnect-interval-seconds' for group [source0]
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.168985916  7821 0xaaaacc5b3260 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel: UFF model support has been deprecated.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.216634188  7821 0xaaaacc5b3260 ERROR                nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
[1]    7821 abort (core dumped)  deepstream-app -c dsapp_config.txt

How can I run .tlt models? Everything I see online uses ONNX.

TensorRT 10 no longer supports .tlt models: the UFF parser they depend on has been removed, which is why your log prints "UFF model support has been deprecated" right before the engine build fails.
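You can confirm which TensorRT build your JetPack ships with a quick package query. A minimal sketch, assuming the standard JetPack Debian packages:

```
# List the TensorRT packages installed by JetPack
dpkg -l | grep -i tensorrt
# JetPack 6.1 installs TensorRT 10.x; JetPack 6.0 installs TensorRT 8.6.x
```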

So if you want to use the FaceNet model, please re-flash with JetPack 6.0 and install DeepStream 7.0, which falls back to TensorRT 8.6, and then refer to this example.
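A rough sketch of that fallback workflow; the NGC version tag below is illustrative, so take the current one from the model card:

```
# On a device re-flashed with JetPack 6.0 + DeepStream 7.0 (TensorRT 8.6),
# nvinfer's TLT/UFF path still works, so the .etlt is built into an
# engine on the first run of the app.

# Fetch the model with the NGC CLI (version tag is illustrative)
ngc registry model download-version "nvidia/tao/facenet:pruned_quantized_v2.0.1"

# Run the same app config; the serialized engine is written next to the model
deepstream-app -c dsapp_config.txt
```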
