How to run a custom classifier neural network inside DeepStream?

Here is what I’ve tried so far:

  1. Modified the deepstream-test2 app to work with a single classifier. The app runs perfectly fine: I set the classifier to run on the person class and can see the results.
  2. Trained a custom classifier using Caffe, which gives me a caffemodel, a prototxt file, and a mean file.
  3. Modified the dstest2_sgie1_config.txt config file to point at the custom classifier. The config is dropped below.
  4. Added a labels.txt file matching my classifier’s classes (a sample layout is sketched right after this list).
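
If I’m reading the sample Secondary_CarColor setup right, a classifier’s labels.txt keeps all the class names for an attribute on one semicolon-separated line, so mine follows that layout; the class names below are just placeholders:

red;green;blue;yellow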

Here is the classifier config file:

[property]
gpu-id=0
net-scale-factor=1
# model-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/resnet18.caffemodel
# proto-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/resnet18.prototxt
# mean-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/mean.ppm
# labelfile-path=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/labels.txt
# int8-calib-file=/opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/cal_trt.bin
model-file=/home/mufasa/Documents/deepstream-test2/resnet18.caffemodel
proto-file=/home/mufasa/Documents/deepstream-test2/resnet18.prototxt
mean-file=/home/mufasa/Documents/deepstream-test2/mean.ppm
labelfile-path=/home/mufasa/Documents/deepstream-test2/labels.txt
batch-size=16
# 0=FP32 and 1=INT8 mode
network-mode=1
input-object-min-width=64
input-object-min-height=64
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=2
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
gie-mode=2

One difference I did observe was the content of the mean.ppm file: I used Caffe’s compute_image_mean tool to generate my mean file.
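
(For context, compute_image_mean emits a .binaryproto blob, while DeepStream’s mean-file option expects a plain PPM image, so a conversion step is involved somewhere. The sketch below is only illustrative of that step, not necessarily the exact tool used here; it assumes Caffe’s headers and libraries are available at build time.)

// Rough sketch: dump a Caffe mean .binaryproto as a binary (P6) PPM.
// Build against Caffe, protobuf and glog. The channel order (BGR vs RGB) is
// written out exactly as stored; swap it if the resulting colors look wrong.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/io.hpp"

int main(int argc, char** argv)
{
    if (argc != 3) {
        std::fprintf(stderr, "usage: %s mean.binaryproto mean.ppm\n", argv[0]);
        return 1;
    }

    caffe::BlobProto blob;
    caffe::ReadProtoFromBinaryFileOrDie(argv[1], &blob);

    const int c = blob.channels(), h = blob.height(), w = blob.width();
    std::ofstream out(argv[2], std::ios::binary);
    out << "P6\n" << w << " " << h << "\n255\n";

    // The binaryproto stores planar CHW floats; PPM wants interleaved HWC bytes.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int ch = 0; ch < c; ++ch) {
                float v = blob.data(ch * h * w + y * w + x);
                out.put(static_cast<char>(std::min(255.0f, std::max(0.0f, v))));
            }
    return 0;
}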

Coming to the actual issue: whenever I run the app after changing the config, I get the following error:

ERROR from element secondary1-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:dstest2-pipeline/GstNvInfer:secondary1-nvinference-engine:
Config file path: dstest2_sgie1_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

Any help would be appreciated. Thanks.

Apart from this, I also have the classifier as a TensorRT engine file. Can I use that instead? If yes, how?
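
A sketch of what I was imagining, assuming nvinfer can load a prebuilt serialized engine directly through its model-engine-file property; the path is a placeholder, and the engine would have to match the GPU, TensorRT version, batch size and precision used here. labelfile-path would still be needed so the class indices map back to names.

# assumption: point nvinfer at an already-built engine instead of model-file/proto-file
model-engine-file=<path-to>/resnet18_b16_int8.engine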

Edit: I’ve also changed output-blob-names to the layer name in my model, but had forgotten to mention it earlier.

gstnvinfer and the low-level nvdsinfer code are open source. Can you try adding logs, rebuilding the libs, and debugging?

For example, in nvdsinfer_context_impl.cpp:

if (m_UniqueID == 0)
{
    printError("Unique ID not set");
    return NVDSINFER_CONFIG_FAILED;
}

if (initParams.numOutputLayers > 0 && initParams.outputLayerNames == nullptr)
{
    printError("NumOutputLayers > 0 but outputLayerNames array not specified");
    return NVDSINFER_CONFIG_FAILED;
}
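
A throwaway print just above the second check (a sketch; plain printf so it does not pull in anything else, add #include <cstdio> at the top of the file if it is not already there) would show what the config parser actually passed in:

// Hypothetical debug print, added right before the outputLayerNames check,
// to confirm what dstest2_sgie1_config.txt was parsed into:
printf("DEBUG nvdsinfer: numOutputLayers=%u outputLayerNames=%p\n",
       initParams.numOutputLayers, (void *) initParams.outputLayerNames);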

I think it should not be hard to fix your issue.

Hey @ChrisDing, got it working. Thanks for the help!