Having trouble using custom classifier in DeepStream SDK

Hello, everyone. I’m trying to run classification on people detected in deepstream-test2-rtsp_out-1SGIE. Here’s what I did:

  1. I downloaded the gender detection model from this website https://talhassner.github.io/home/publication/2015_CVPR (thanks for their contributions), including the .prototxt and .caffemodel files.
  2. I changed the config file. Specifically, I modified the paths to the .prototxt and .caffemodel files, created labels.txt according to the classifier, and changed “output-blob-names”.

Here is my config file:

[property]
gpu-id=0
net-scale-factor=1
#model-engine-file=../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_fp16.engine
model-file=../../../../samples/models/gender/net.caffemodel
proto-file=../../../../samples/models/gender/gender.prototxt
#mean-file=../../../../samples/models/Secondary_CarMake/mean.ppm
labelfile-path=../../../../samples/models/gender/labels.txt
#int8-calib-file=../../../../samples/models/Secondary_CarMake/cal_trt.bin

batch-size=16

# 0=FP32 and 1=INT8 mode

network-mode=1
input-object-min-width=64
input-object-min-height=64
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=1
is-classifier=1
output-blob-names=prob
classifier-async-mode=1
classifier-threshold=0.51

  3. I ran the app and got the following errors:

Opening in BLOCKING MODE
Opening in BLOCKING MODE
Creating LL OSD context new
0:00:04.237389381 8414 0x556c1eb530 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
0:00:04.237660153 8414 0x556c1eb530 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger: NvDsInferContext[UID 2]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 173:3: Unknown enumeration value of "Softmax" for field "type".
0:00:04.759799460 8414 0x556c1eb530 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 2]:log(): CaffeParser: Could not parse deploy file
0:00:04.759851698 8414 0x556c1eb530 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 2]:generateTRTModel(): Failed while parsing network
0:00:04.760376838 8414 0x556c1eb530 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 2]:initialize(): Failed to create engine from model files
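
By the way, I also noticed the “INT8 not supported by platform. Trying FP16 mode.” warning, which means my network-mode=1 is silently falling back. If I understand the nvinfer config options correctly, FP16 could be requested directly to avoid the fallback:

network-mode=2

where 0=FP32, 1=INT8 and 2=FP16 — though that is not the cause of the failure above.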

Can anybody help me? Thanks a lot!

Hi,

This can be fixed by updating the layer type definition in the prototxt.
Please update the layer from this:

layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}

Into this:

layers {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}
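
For background: Caffe has two prototxt formats. The old one uses "layers" blocks whose "type" field is an enum (e.g. type: SOFTMAX), while the newer one uses "layer" blocks whose "type" field is a quoted string. In the new format the last layer would be:

layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}

Note that if the rest of the file still uses the old "layers" keyword, mixing in a string type may not parse, so the whole file may need to be converted consistently (Caffe ships an upgrade_net_proto_text tool for this, if you have a Caffe build available).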

Thanks.

Thanks for your advice. However, I tried this before and there are still some errors.

0:00:01.020048423 10817 0x55c8b6f530 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger: NvDsInferContext[UID 2]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 172:9: Expected integer or identifier, got: "Softmax"
0:00:01.103066025 10817 0x55c8b6f530 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 2]:log(): CaffeParser: Could not parse deploy file
0:00:01.103104359 10817 0x55c8b6f530 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 2]:generateTRTModel(): Failed while parsing network
0:00:01.103613220 10817 0x55c8b6f530 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 2]:initialize(): Failed to create engine from model files

The prototxt file is below:

name: "CaffeNet"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 227
input_dim: 227
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 7
    stride: 4
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "conv1"
  top: "conv1"
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm1"
  type: LRN
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
  }
}
layers {
  name: "relu2"
  type: RELU
  bottom: "conv2"
  top: "conv2"
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm2"
  type: LRN
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv3"
  type: CONVOLUTION
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layers {
  name: "relu3"
  type: RELU
  bottom: "conv3"
  top: "conv3"
}
layers {
  name: "pool5"
  type: POOLING
  bottom: "conv3"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "fc6"
  type: INNER_PRODUCT
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 512
  }
}
layers {
  name: "relu6"
  type: RELU
  bottom: "fc6"
  top: "fc6"
}
layers {
  name: "drop6"
  type: DROPOUT
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc7"
  type: INNER_PRODUCT
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 512
  }
}
layers {
  name: "relu7"
  type: RELU
  bottom: "fc7"
  top: "fc7"
}
layers {
  name: "drop7"
  type: DROPOUT
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc8"
  type: INNER_PRODUCT
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 2
  }
}
layers {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}

I finally solved the problem by referring to the thread “generateTRTModel(): Could not find output layer 'SOFTMAX'”.
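
In case it helps anyone hitting the same errors: my understanding is that, instead of converting the whole file to the new format, the final layer can be kept in the same old-style "layers" format as the rest of the file, with the enum type:

layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}

and “output-blob-names” in the nvinfer config should then refer to the top blob name (prob), not the layer type.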

Hi, I have been trying to run the same model for gender prediction, but it fails to create the engine file. Did your model run well with the DeepStream pipeline?