DS 6.0 TLT cv inference pipeline models

Hello,

I was working on my Jetson Xavier with DS 5.0 and JetPack 4.4, where I created the facial landmark model using the deepstream-test1-usbcam sample. I followed the steps in TLT CV Inference Pipeline Quick Start Scripts — Transfer Learning Toolkit 3.0 documentation and generated the faciallandmarks model.plan. I then used that model.plan as the engine in the DS 5.0 configuration file.
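For reference, this is roughly how the engine was wired into the nvinfer (sgie) config on DS 5.0. This is a minimal sketch, not my full config: the UID, batch size, and precision are placeholders; the relevant part is model-engine-file pointing at the generated model.plan.

[property]
gpu-id=0
# pre-built TensorRT engine produced by the TLT CV quick start scripts
model-engine-file=/home/user/Downloads/tlt_cv_inference_pipeline_models/triton_model_repository/repository/faciallandmarks_tlt/1/model.plan
# run as a secondary classifier on the detected faces
process-mode=2
batch-size=1
# 2 = FP16 precision
network-mode=2
gie-unique-id=2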

I am now switching to a Jetson Nano 2GB and want to deploy the same model, but with DS 6.0, JetPack 4.6, and TensorRT 8.
I am getting the following error:

[UID = 2]: deserialize backend context from engine from file :/home/user/Downloads/tlt_cv_inference_pipeline_models/triton_model_repository/repository/faciallandmarks_tlt/1/model.plan failed, try rebuild
0:00:02.539593293 9660 0x166e5b60 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 2]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:03.018511359 9660 0x166e5b60 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 2]: build engine file failed
0:00:03.019620514 9660 0x166e5b60 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 2]: build backend context failed
0:00:03.019668693 9660 0x166e5b60 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 2]: generate backend failed, check config file settings
0:00:03.019715674 9660 0x166e5b60 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.019741040 9660 0x166e5b60 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: dstest1_sgie_drowsiness.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: dstest1_sgie_drowsiness.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

How can I solve this issue?

Please refer to deepstream_tao_apps/apps/tao_others/deepstream-faciallandmark-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub instead.
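A TensorRT engine (model.plan) serialized on a Xavier with JetPack 4.4 / TensorRT 7 cannot be deserialized on a Nano with JetPack 4.6 / TensorRT 8; the engine must be rebuilt on the target device, which is what the referenced app does by giving nvinfer the encoded model instead of a pre-built engine. As a rough sketch, and only a sketch, the sgie config could look like the snippet below; the .etlt path, key, and engine name are placeholders, and the model-specific keys (input dims, output blob names, etc.) should be taken from the sample configs in the deepstream_tao_apps repository.

[property]
gpu-id=0
# encoded TAO/TLT model; nvinfer builds the TensorRT engine locally on first run
tlt-encoded-model=./faciallandmarks.etlt
tlt-model-key=nvidia_tlt
# engine file that nvinfer writes out / reuses on subsequent runs
model-engine-file=./faciallandmarks.etlt_b1_gpu0_fp16.engine
# 2 = FP16 precision
network-mode=2
# run as a secondary inference on face detections
process-mode=2
gie-unique-id=2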
