Your log is incomplete. You do not have an engine file on the first run, so that part of the log is harmless; nvinfer will try to rebuild the engine file. Please provide the full log.
user@user-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors$ ./back-to-back-detectors /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Using winsys: x11
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine open error
0:00:01.844387863 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 2]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed
0:00:01.844497175 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 2]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed, try rebuild
0:00:01.844545847 9857 0x55afe98a10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 2]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:01.937472031 9857 0x55afe98a10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 2]: build engine file failed
0:00:01.938755234 9857 0x55afe98a10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 2]: build backend context failed
0:00:01.938813154 9857 0x55afe98a10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 2]: generate backend failed, check config file settings
0:00:01.938863426 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.938891682 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine2: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:primary-nvinference-engine2:
Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Please provide the following setup information:
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
I did not make any changes to the models. I just downloaded the “Back to Back Detectors” application from GitHub. By default it targets DS 4.0, so during compilation I had to change the DS version to 5.0. The compilation is successful, but it reports the error above.
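For reference, that version change is typically just the version variable near the top of the sample’s Makefile (a one-line sketch; that the stock Makefile uses an NVDS_VERSION variable is an assumption on my part):

NVDS_VERSION:=5.0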
I ran it before and it worked well, but that was some time ago; I will try to run it on the 5.0 version when I am free.
Meanwhile, the trtexec command you used is not correct. For the outputs you should use output_bbox;output_cov, i.e. --output=output_bbox --output=output_cov. Can you run it again?
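For example, something along these lines (only a sketch: the fd_lpd.prototxt filename, the output engine name, and the exact paths are assumptions based on the sample layout, so adjust them to your install):

trtexec --deploy=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.prototxt \
        --model=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel \
        --output=output_bbox --output=output_cov \
        --batch=1 \
        --saveEngine=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/caffe-model.trt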
Using your command, I am able to run trtexec, and it generated a caffe-model.trt file which I use as the model-engine-file in secondary_detector_config.txt.
It still reports some errors. I have enclosed my two config files here.
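For reference, the relevant [property] entries in secondary_detector_config.txt look roughly like this (a sketch using standard nvinfer keys; the fd_lpd.prototxt filename and the exact paths are assumptions, adjust to your install):

[property]
# Caffe model and prototxt that nvinfer falls back to when no usable engine exists
model-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.prototxt
# prebuilt engine from trtexec (optional; comment out to let nvinfer rebuild the engine itself)
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/caffe-model.trt
output-blob-names=output_bbox;output_cov
batch-size=1
network-mode=0
gie-unique-id=2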
user@user-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors$ ./back-to-back-detectors /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Using winsys: x11
Opening in BLOCKING MODE
0:00:03.383603117 10717 0x5594e66e10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/caffe-model.trt
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x270x480
1 OUTPUT kFLOAT output_bbox 16x17x30
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov
0:00:03.383883085 10717 0x5594e66e10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1545> [UID = 2]: Could not find output layer ‘output_cov’ in engine
0:00:03.383920045 10717 0x5594e66e10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/caffe-model.trt
0:00:03.389698224 10717 0x5594e66e10 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 2]: Load new model:secondary_detector_config.txt sucessfully
0:00:03.389890320 10717 0x5594e66e10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:03.492198275 10717 0x5594e66e10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:03.493459044 10717 0x5594e66e10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:03.493505828 10717 0x5594e66e10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings
0:00:03.493559588 10717 0x5594e66e10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.493583428 10717 0x5594e66e10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: primary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine1: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:primary-nvinference-engine1:
Config file path: primary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline
I tried your config and commented out #model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/caffe-model.trt
It can run well:
Frame Number = 101 Vehicle Count = 7 Person Count = 4 Face Count = 0 License Plate Count = 0
Please try it again.
Thank you for your reply. I commented out the model-engine-file line, but it reports the error below:
user@user-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors$ ./back-to-back-detectors /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Using winsys: x11
Opening in BLOCKING MODE
0:00:00.345620573 11077 0x55903ece10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 2]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:02.234766759 11077 0x55903ece10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 2]: build engine file failed
0:00:02.236716356 11077 0x55903ece10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 2]: build backend context failed
0:00:02.236789343 11077 0x55903ece10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 2]: generate backend failed, check config file settings
0:00:02.236888537 11077 0x55903ece10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:02.236959796 11077 0x55903ece10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine2: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:primary-nvinference-engine2:
Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback