Loading a Caffe model in DeepStream fails

I trained a weight file using the bvlc_reference_caffenet network structure. I loaded the new model by modifying the config_infer_primary.txt configuration file in DeepStream, but there is an error when running DeepStream. The error information is as follows:

gstnvtracker: Batch processing is OFF
0:00:26.838421023 11759 0x55cdf25c60 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:26.838740723 11759 0x55cdf25c60 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:37.882990507 11759 0x55cdf25c60 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Could not find output layer 'conv2d_bbox'
0:00:38.204676539 11759 0x55cdf25c60 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:38.205247810 11759 0x55cdf25c60 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:38.205435056 11759 0x55cdf25c60 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/xiukd/sutpc_app_nano/config/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

can't set pipeline to playing state.
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /home/xiukd/sutpc_app_nano/config/config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Hi,

The error indicates that DeepStream cannot find the output layer in your model:

generateTRTModel(): Could not find output layer 'conv2d_bbox'

Is this layer available in your model?
If not, please update the output-blob-names setting to match your model's actual output layers:

For example, in config_infer_primary.txt:

[property]
...
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
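If you are not sure which output blobs your network actually exposes, they are simply the tops that no later layer consumes as a bottom. A minimal way to list them from a deploy prototxt by plain text parsing (no Caffe installation assumed; the file and function names below are only illustrative):

import re
import sys

def output_blobs(prototxt_path):
    # Network outputs are blobs that appear as a "top" but never as a "bottom".
    text = open(prototxt_path).read()
    tops = set(re.findall(r'top:\s*"([^"]+)"', text))
    bottoms = set(re.findall(r'bottom:\s*"([^"]+)"', text))
    # In-place layers (e.g. ReLU with top == bottom) cancel out automatically.
    return sorted(tops - bottoms)

if __name__ == "__main__":
    # Usage: python list_output_blobs.py deploy.prototxt
    print(";".join(output_blobs(sys.argv[1])))

Any name listed in output-blob-names that this script does not report will trigger the "Could not find output layer" error above.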

Thanks.

I changed config_infer_primary.txt as you suggested, and the engine file can now be generated:

[property]
output-blob-names=fc7;prob

But now there are other errors. The error log is:
Creating LL OSD context new
0:08:34.343970341 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:parseBoundingBox(): Could not find output coverage layer for parsing objects
0:08:34.344352122 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:fillDetectionOutput(): Failed to parse bboxes
0:08:34.344829217 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:parseBoundingBox(): Could not find output coverage layer for parsing objects
0:08:34.345090005 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:fillDetectionOutput(): Failed to parse bboxes
0:08:34.345267354 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:parseBoundingBox(): Could not find output coverage layer for parsing objects
0:08:34.345446681 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:fillDetectionOutput(): Failed to parse bboxes
0:08:34.345616738 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:parseBoundingBox(): Could not find output coverage layer for parsing objects
0:08:34.345743304 31851 0x55a5015b20 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:fillDetectionOutput(): Failed to parse bboxes

My model's network definition (prototxt) is:
name: "CaffeNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 1000
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}

Hi,

It looks like your model is a classifier rather than a detector.
You will need to set the is-classifier option for a classifier model:

[property]
...
is-classifier=1
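
A classifier also needs classifier-threshold, and usually a labelfile-path so the predicted class name can be shown; both appear in the comments of the sample nvinfer config. For example (the threshold and file name here are placeholders):

[property]
...
is-classifier=1
classifier-threshold=0.9
labelfile-path=labels.txt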

Thanks.

My model is indeed a classifier. I also want to ask about the resnet10.caffemodel weight file used in the DeepStream sample: how was it trained, and how can I optimize it?

Thanks.

Hi,

Not sure if I understand your question correctly.

It looks like you want to run the model shared in #3 with DeepStream.
The model is a classifier, so DeepStream will feed the whole frame into inference.

You can use the following two configuration files for your use case.

DeepStream pipeline configuration

# Copyright (c) 2019 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_1080p_h264.mp4
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0

[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_classifier.txt

[tests]
file-loop=0

nvinfer model configuration (config_infer_classifier.txt)

# Copyright (c) 2019 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
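# net-scale-factor below is 1/255, which scales 8-bit pixel values to the [0,1] range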
net-scale-factor=0.0039215697906911373
model-file=[path/to/your/caffemodel]
proto-file=[path/to/your/prototxt]
#model-engine-file=[path/to/your/trt]
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
interval=0
gie-unique-id=1
output-blob-names=prob
is-classifier=1
classifier-threshold=0.9  # update your threshold here
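
To try this out, save the second file as config_infer_classifier.txt (the name referenced by config-file above), point model-file and proto-file at your own model, save the pipeline file under any name (for example deepstream_app_classifier.txt), and run it with deepstream-app -c deepstream_app_classifier.txt.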

Thanks.