How to debug gstnvinfer with custom model?

Can you provide the following information?
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• TensorRT Version 7.0
• NVIDIA GPU Driver Version (valid for GPU only) 455.45.01

glog.tgz (2.1 MB)
We are using Google logging (glog) to collect the logs; just run “cmake … && make install” in glog’s build folder.

I can build libzdxfPlugins.so now, and I set it in the nvinfer config file to add the customized IPlugin as described in Using a Custom Model with DeepStream — DeepStream 5.1 Release documentation (nvidia.com), but it does not work. You need to write a correct customized plugin.
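Before digging into the model-parsing error, it can help to confirm that the library pointed to by custom-lib-path loads cleanly on its own, since gst-nvinfer dlopen()s it and an unresolved symbol or missing dependency fails before any plugin code runs. A minimal standalone check (the helper name `check_plugin_lib` and the path are my own, not part of DeepStream):

```c
#include <dlfcn.h>
#include <stdio.h>

/* Try to load a plugin .so the same way gst-nvinfer would.
 * RTLD_NOW forces all symbols to resolve immediately, so a missing
 * TensorRT/CUDA dependency shows up here instead of at inference time.
 * Returns 0 on success, 1 on failure (dlerror() explains why). */
int check_plugin_lib(const char *path) {
    void *handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
    if (!handle) {
        fprintf(stderr, "dlopen(%s) failed: %s\n", path, dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}
```

Compile it with e.g. `gcc check_plugin.c -o check_plugin -ldl` and call `check_plugin_lib("../libzdxfPlugins.so")`; a non-zero return plus the dlerror() message tells you the problem is in the library itself rather than in the nvinfer config.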

[property]
gpu-id=0
net-scale-factor=1.0
offsets=102.9801;115.9465;122.7717
model-file=../vgg16_ssh.caffemodel
proto-file=../vgg16_ssh.prototxt
labelfile-path=../labels.txt
model-engine-file=../ssh_vgg.engine
force-implicit-batch-dim=1
infer-dims=3;540;960
maintain-aspect-ratio=1
batch-size=1
process-mode=1
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=1
custom-lib-path=../libzdxfPlugins.so
interval=0
gie-unique-id=1
output-blob-names=ssh_boxes;ssh_cls_prob
## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other

And the error is:
0:00:00.618596824 14519 0x560d0c095150 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1819> [UID = 1]: Trying to create engine from model files
ERROR: Failed while parsing caffe network: /home/nvidia/deepstream-test/vgg16_ssh.prototxt
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
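Since the failure is inside the Caffe parser, one way to take DeepStream out of the picture is to feed the same prototxt/caffemodel pair to trtexec directly, preloading the plugin library so the custom layers can resolve. A sketch using the file names from the config above (adjust paths as needed):

```shell
# Build the engine with TensorRT alone; Caffe parser errors here will
# usually name the offending layer instead of failing inside gst-nvinfer.
trtexec --deploy=vgg16_ssh.prototxt \
        --model=vgg16_ssh.caffemodel \
        --output=ssh_boxes --output=ssh_cls_prob \
        --plugins=./libzdxfPlugins.so
```

If trtexec reports the same parsing failure, the issue is in the prototxt or the plugin registration, not in the nvinfer config file.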

Please make sure your TensorRT and CUDA versions are compatible with DeepStream. See the Quickstart Guide — DeepStream 5.1 Release documentation (nvidia.com).