How can I replace the DeepStream secondary classifiers (CarMake, CarColor, ...) with a custom Caffe model?

I trained a GoogLeNet classifier on my own data using NVIDIA DIGITS.
As a result, I obtained *.caffemodel, *.prototxt, labels.txt, and mean.ppm files.
I modified the config_infer_secondary_car config file and applied it to the source4_1080p_dec_infer ~ sample app.
No error occurs during execution, but classification does not work properly:
the secondary classifier misbehaves, and occasionally the label on the first line of labels.txt appears for every object.
It looked like a threshold problem, but changing the threshold did not change anything.
Please tell me how to apply my custom Caffe model to DeepStream.

My environment: DeepStream 4.0 on Xavier.

Change the config

[secondary-gie0]
enable=1
model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes.txt

and change config_infer_secondary_vehicletypes.txt to point to your own model.

To debug, look at parseAttributesFromSoftmaxLayers() in nvdsinfer_context_impl_output_parsing.cpp,
or at
sources/libs/nvdsinfer_customparser/nvdsinfer_customclassifierparser.cpp

(image attachment, not available)

[secondary-gie0]
enable=1
model-engine-file=../../models/Secondary_hk/hk.caffemodel_b16_int8.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_hk.txt

config_infer_secondary_hk.txt

[property]
gpu-id=0
net-scale-factor=1
model-file=../../models/Secondary_hk/hk.caffemodel
proto-file=../../models/Secondary_hk/hk.prototxt
#model-engine-file=../../models/Secondary_hk/hk.caffemodel_b16_int8.engine
int8-calib-file=../../models/Secondary_hk/cal_trt.bin
mean-file=../../models/Secondary_hk/mean.ppm
labelfile-path=../../models/Secondary_hk/labels.txt
batch-size=16
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
is-classifier=1
process-mode=2
output-blob-names=softmax
classifier-async-mode=1
classifier-threshold=0
input-object-min-width=128
input-object-min-height=128
operate-on-gie-id=1
operate-on-class-ids=0
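
A few entries in this file may interact badly (a guess based on the symptom, not a confirmed diagnosis): network-mode=0 selects FP32 while the commented-out engine file is an INT8 one, classifier-async-mode=1 works reliably only when a tracker is attached, and classifier-threshold=0 accepts whatever class the argmax picks. A variant worth trying, with an illustrative threshold value:

[property]
# FP32 mode; make sure any model-engine-file you re-enable was built
# for the same precision (not the *_int8.engine above)
network-mode=0
# synchronous classification, so labels attach in the same frame
classifier-async-mode=0
# accept a label only when the softmax winner is reasonably confident
classifier-threshold=0.2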

These are my config snippets.
I expected this configuration to run properly,
but it doesn’t: I still see the problem described above!

What’s the problem ?

I’ve noticed this issue when using the engine file for the classifier without the mean file: basically every object gets classified as a single label. It needs more digging.

@Chris

Is the functions reference document available to the public?

@Ravi, do you mean this: https://docs.nvidia.com/metropolis/index.html ?

Hi Chris,

The image below:

(image attachment, not available)

Sorry. I don’t get what you mean.

@RaviKiranK,

The only thing you need to do to get a custom classifier running inside DeepStream is to take a look at the deepstream-test2 app, then:

  1. Modify the code to work with a single classifier, i.e., remove all the other SGIEs.
  2. Train your custom classifier with or without the mean file config.
  3. The mean file generated by Caffe and the mean file used in DeepStream are entirely different, so don't confuse them (which happened to me). There are offset params to specify mean subtraction.
  4. Modify the SGIE config file so that the paths point to the newly trained models.
  5. Take a look at your model-color-format param, as this could lead to issues during inference.
  6. Also change the output blob name param in the classifier's config file.
  7. If the above steps are followed, you should get the classifier results on screen.
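
On point 3 above, DeepStream applies mean subtraction either from a mean-file (a .ppm, not Caffe's binaryproto) or from per-channel offsets. A minimal sketch of the offsets route; the channel values below are illustrative placeholders, not taken from any specific training run:

[property]
# per-channel means subtracted from the input, in the order implied
# by model-color-format; replace with your own training means
offsets=104.0;117.0;123.0
# applied after mean subtraction: y = net-scale-factor * (x - mean)
net-scale-factor=1.0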

I think this should cover it. However, the only thing I’m struggling with right now is fetching the classifier results in a probe. The LabelInfo struct seems to print junk; maybe I’m initializing it with the wrong data.

I could not find the diagram you posted in Metropolis or other sections, so I wanted to know the source for the diagram.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html