• Hardware Platform Jetson AGX Orin Developer Kit
• DeepStream Version 7.1
• JetPack Version 6.1
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type: bug
I downloaded the FaceNet model from https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/facenet and tried running it with deepstream-app.
My app configuration:
dsapp_config.txt (355 Bytes)
config_infer_primary.txt (3.3 KB)
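The inference config follows the usual TAO/TLT pattern for this model. Roughly, the relevant keys look like this (paths and the model key are placeholders, and the attached file may differ in detail):

```
[property]
# Encrypted TLT model and the key it was exported with (key value is a placeholder)
tlt-encoded-model=/path/to/model.etlt
tlt-model-key=tlt_encode
# FaceNet is DetectNet_v2-based, so input/output blob names follow that convention
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
network-type=0
num-detected-classes=1
```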
The error log is:
```
** WARN: <parse_source:675>: Unknown key 'rtsp-reconnect-interval-seconds' for group [source0]
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.168985916  7821 0xaaaacc5b3260 INFO  nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
NvDsInferCudaEngineGetFromTltModel: UFF model support has been deprecated.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:02.216634188  7821 0xaaaacc5b3260 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
free(): double free detected in tcache 2
[1]  7821 abort (core dumped)  deepstream-app -c dsapp_config.txt
```
How can I run .tlt models? Everything I see online uses ONNX.