Run BACK-TO-BACK-DETECTORS REFERENCE APP under DeepStream SDK 5.0

Hello,

I am following this link to run the back-to-back application on DeepStream 5.0:

I downloaded these files as per this link:

  $ wget https://github.com/NVIDIA-AI-IOT/redaction_with_deepstream/raw/master/fd_lpd_model/fd_lpd.caffemodel
  $ wget https://raw.githubusercontent.com/NVIDIA-AI-IOT/redaction_with_deepstream/master/fd_lpd_model/fd_lpd.prototxt
  $ wget https://raw.githubusercontent.com/NVIDIA-AI-IOT/redaction_with_deepstream/master/fd_lpd_model/labels.txt
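For context, the config files expect these downloads under the DeepStream samples tree, so I placed them roughly like this (the path is inferred from the error below and may need adjusting to match your `*_config.txt` files):

```shell
# Illustrative only: copy the downloaded model files into the directory the
# back-to-back configs reference (verify against the paths in your configs).
MODEL_DIR=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect
sudo mkdir -p "$MODEL_DIR"
sudo cp fd_lpd.caffemodel fd_lpd.prototxt labels.txt "$MODEL_DIR/"
```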

But when I run it, it complains that a model engine file is missing:

ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine open error
0:00:01.777347447 19394 0x55b6608e10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 2]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed
0:00:01.777487416 19394 0x55b6608e10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 2]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed, try rebuild

Would you please advise how to resolve this issue?

Thank you.

Your log is incomplete. You do not have an engine file on the first run, so that warning is harmless; the app will try to rebuild the engine file. Please provide the full log.

Dear Sir,

Please see below:

user@user-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors$ ./back-to-back-detectors /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

Using winsys: x11
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine open error
0:00:01.844387863 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 2]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed
0:00:01.844497175 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 2]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/…/…/…/…/samples/models/Secondary_FaceDetect/fd_lpd_model/fd_lpd.caffemodel_b1_fp32.engine failed, try rebuild
0:00:01.844545847 9857 0x55afe98a10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 2]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:01.937472031 9857 0x55afe98a10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 2]: build engine file failed
0:00:01.938755234 9857 0x55afe98a10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 2]: build backend context failed
0:00:01.938813154 9857 0x55afe98a10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 2]: generate backend failed, check config file settings
0:00:01.938863426 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.938891682 9857 0x55afe98a10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine2: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:primary-nvinference-engine2:
Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback

Thank you.
Mei Guodong

Could you please provide the information below:

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
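On a Jetson, these details can usually be collected with the commands below (a sketch; `deepstream-app` and the package names assume a standard JetPack/DeepStream install):

```shell
# Query platform and version info on a JetPack-based Jetson.
cat /etc/nv_tegra_release                               # L4T release underlying JetPack
dpkg -l | grep -E 'nvidia-jetpack|nvinfer|deepstream'   # JetPack, TensorRT, DeepStream packages
deepstream-app --version-all                            # DeepStream and dependency versions
```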

Hello,

Here is the info:

• Hardware Platform (Jetson / GPU) : Jetson Xavier NX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only) : nvidia-jetpack, 4.4-b186
• TensorRT Version : 7.0
• NVIDIA GPU Driver Version (valid for GPU only) :

Thank you.

Did you make changes to the models?

Hello,

I did not make any changes to the models. I just downloaded the “Back to Back detector application” from GitHub. By default it was for DS 4.0; during compilation I had to change the DS version to 5.0. Compilation was successful, but the app reported the error above.

Thank you. Regards
Mei

Can you first run trtexec on the model you downloaded?
/usr/src/tensorrt/bin/trtexec

Hello Amycao,

I am not familiar with trtexec, but I tried to run the command with the options below; the output follows.

Would it be possible for you to download this Back-to-Back-Detectors reference app, run it on your DS 5.0 setup, and see if there are any issues?

Thank you.
Mei Guodong

./trtexec --model=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.caffemodel --deploy=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.prototxt --output=fd_lpd.caffemodel.output --batch=16 --saveEngine=caffe-model.trt
&&&& RUNNING TensorRT.trtexec # ./trtexec --model=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.caffemodel --deploy=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.prototxt --output=fd_lpd.caffemodel.output --batch=16 --saveEngine=caffe-model.trt
[07/15/2020-17:36:57] [I] === Model Options ===
[07/15/2020-17:36:57] [I] Format: Caffe
[07/15/2020-17:36:57] [I] Model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.caffemodel
[07/15/2020-17:36:57] [I] Prototxt: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.prototxt
[07/15/2020-17:36:57] [I] Output: fd_lpd.caffemodel.output
[07/15/2020-17:36:57] [I] === Build Options ===
[07/15/2020-17:36:57] [I] Max batch: 16
[07/15/2020-17:36:57] [I] Workspace: 16 MB
[07/15/2020-17:36:57] [I] minTiming: 1
[07/15/2020-17:36:57] [I] avgTiming: 8
[07/15/2020-17:36:57] [I] Precision: FP32
[07/15/2020-17:36:57] [I] Calibration:
[07/15/2020-17:36:57] [I] Safe mode: Disabled
[07/15/2020-17:36:57] [I] Save engine: caffe-model.trt
[07/15/2020-17:36:57] [I] Load engine:
[07/15/2020-17:36:57] [I] Builder Cache: Enabled
[07/15/2020-17:36:57] [I] NVTX verbosity: 0
[07/15/2020-17:36:57] [I] Inputs format: fp32:CHW
[07/15/2020-17:36:57] [I] Outputs format: fp32:CHW
[07/15/2020-17:36:57] [I] Input build shapes: model
[07/15/2020-17:36:57] [I] Input calibration shapes: model
[07/15/2020-17:36:57] [I] === System Options ===
[07/15/2020-17:36:57] [I] Device: 0
[07/15/2020-17:36:57] [I] DLACore:
[07/15/2020-17:36:57] [I] Plugins:
[07/15/2020-17:36:57] [I] === Inference Options ===
[07/15/2020-17:36:57] [I] Batch: 16
[07/15/2020-17:36:57] [I] Input inference shapes: model
[07/15/2020-17:36:57] [I] Iterations: 10
[07/15/2020-17:36:57] [I] Duration: 3s (+ 200ms warm up)
[07/15/2020-17:36:57] [I] Sleep time: 0ms
[07/15/2020-17:36:57] [I] Streams: 1
[07/15/2020-17:36:57] [I] ExposeDMA: Disabled
[07/15/2020-17:36:57] [I] Spin-wait: Disabled
[07/15/2020-17:36:57] [I] Multithreading: Disabled
[07/15/2020-17:36:57] [I] CUDA Graph: Disabled
[07/15/2020-17:36:57] [I] Skip inference: Disabled
[07/15/2020-17:36:57] [I] Inputs:
[07/15/2020-17:36:57] [I] === Reporting Options ===
[07/15/2020-17:36:57] [I] Verbose: Disabled
[07/15/2020-17:36:57] [I] Averages: 10 inferences
[07/15/2020-17:36:57] [I] Percentile: 99
[07/15/2020-17:36:57] [I] Dump output: Disabled
[07/15/2020-17:36:57] [I] Profile: Disabled
[07/15/2020-17:36:57] [I] Export timing to JSON file:
[07/15/2020-17:36:57] [I] Export output to JSON file:
[07/15/2020-17:36:57] [I] Export profile to JSON file:
[07/15/2020-17:36:57] [I]
[07/15/2020-17:36:58] [E] Could not find output blob fd_lpd.caffemodel.output
[07/15/2020-17:36:58] [E] Parsing model failed
[07/15/2020-17:36:58] [E] Engine creation failed
[07/15/2020-17:36:58] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --model=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.caffemodel --deploy=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.prototxt --output=fd_lpd.caffemodel.output --batch=16 --saveEngine=caffe-model.trt

I ran it before and it worked well, but that was some time ago; I will try to run it on the 5.0 version when I am free.
Meanwhile, the trtexec command you used is not correct. For the outputs, you should use output_bbox and output_cov, i.e. --output=output_bbox --output=output_cov. Can you run it again?
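Spelled out, the corrected command would look like this (paths copied from the earlier attempt; the output blob names are the ones suggested above):

```shell
# Rebuild the engine using the model's actual output blobs instead of the
# non-existent "fd_lpd.caffemodel.output" blob from the earlier attempt.
/usr/src/tensorrt/bin/trtexec \
  --model=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.caffemodel \
  --deploy=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/fd_lpd.prototxt \
  --output=output_bbox --output=output_cov \
  --batch=16 \
  --saveEngine=caffe-model.trt
```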

Hi Amycao,

Using your command, I was able to run trtexec, and it generated a caffe-model.trt file, which I use as the model-engine-file in secondary_detector_config.txt.

It still reports some errors. I have enclosed my two config files here.

user@user-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors$ ./back-to-back-detectors /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

Using winsys: x11
Opening in BLOCKING MODE
0:00:03.383603117 10717 0x5594e66e10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/caffe-model.trt
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x270x480
1 OUTPUT kFLOAT output_bbox 16x17x30

ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov
0:00:03.383883085 10717 0x5594e66e10 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1545> [UID = 2]: Could not find output layer ‘output_cov’ in engine
0:00:03.383920045 10717 0x5594e66e10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Secondary_FaceDetect/caffe-model.trt
0:00:03.389698224 10717 0x5594e66e10 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 2]: Load new model:secondary_detector_config.txt sucessfully
0:00:03.389890320 10717 0x5594e66e10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:03.492198275 10717 0x5594e66e10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:03.493459044 10717 0x5594e66e10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:03.493505828 10717 0x5594e66e10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings
0:00:03.493559588 10717 0x5594e66e10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.493583428 10717 0x5594e66e10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: primary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine1: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:primary-nvinference-engine1:
Config file path: primary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

primary_detector_config.txt (3.4 KB) secondary_detector_config.txt (3.9 KB)

I tried your config and commented out the model-engine-file line:
#model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/caffe-model.trt
It runs well:
Frame Number = 101 Vehicle Count = 7 Person Count = 4 Face Count = 0 License Plate Count = 0
Please try again.
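For anyone following along, the relevant part of secondary_detector_config.txt would then look roughly like this (an illustrative fragment, not the full attached file; the key names are standard Gst-nvinfer config properties, and the paths should be checked against your install):

```
[property]
model-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/fd_lpd.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/fd_lpd.prototxt
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/labels.txt
# Commented out so nvinfer rebuilds the engine instead of loading a stale one:
#model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_FaceDetect/caffe-model.trt
output-blob-names=output_bbox;output_cov
```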

Hi Amy Cao,

Thank you for your reply. I commented out the model-engine-file line, and it reports the error below:

user@user-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors$ ./back-to-back-detectors /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

Using winsys: x11
Opening in BLOCKING MODE
0:00:00.345620573 11077 0x55903ece10 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 2]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:02.234766759 11077 0x55903ece10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 2]: build engine file failed
0:00:02.236716356 11077 0x55903ece10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 2]: build backend context failed
0:00:02.236789343 11077 0x55903ece10 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 2]: generate backend failed, check config file settings
0:00:02.236888537 11077 0x55903ece10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:02.236959796 11077 0x55903ece10 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine2: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:primary-nvinference-engine2:
Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback

Thank you.
Mei

Did you use TRT 7.1 or 7.0?

Hi Amy Cao,

• Hardware Platform (Jetson / GPU) : Jetson Xavier NX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only) : nvidia-jetpack, 4.4-b186
• TensorRT Version : 7.0
• NVIDIA GPU Driver Version (valid for GPU only) :

Please note:
JetPack 4.4 supports the upcoming DeepStream 5.0 release.

  • DeepStream 5.0 Developer Preview is only supported with JetPack 4.4 Developer Preview.