Can't deploy my custom classifier on DeepStream

I am trying to run my Keras .h5 model on DeepStream 5. I took the provided Python sample deepstream-test1 as base code and am trying to change it to fit my model's needs. I did the following:

  • Converted my model to ONNX, then converted the ONNX model to a TensorRT engine.
  • Created a labels.txt file for my classes.
  • Changed the configuration file dstest1_pgie_config.txt to:

[property]
gpu-id=0
process-mode=1 #primary
net-scale-factor=1
model-engine-file=test2.engine
labelfile-path=labels.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=1
network-type=1 #classifier
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=1
classifier-threshold=0.2
output-blob-names=dense_2
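One thing worth double-checking in the config above: as I understand it, nvinfer preprocesses each input pixel roughly as y = net-scale-factor * (pixel - offset), so with net-scale-factor=1 and no offsets the engine receives raw 0–255 values. If the Keras model was trained on inputs scaled to 0–1, net-scale-factor would need to be about 1/255 = 0.0039215697. A sketch of the arithmetic (pure Python; the function name is my own, not a DeepStream API):

```python
def nvinfer_preprocess(pixel, net_scale_factor=1.0, offset=0.0):
    """Approximate per-pixel preprocessing nvinfer applies:
    y = net-scale-factor * (pixel - offset).
    (My understanding of the config semantics; illustrative only.)"""
    return net_scale_factor * (pixel - offset)

# With net-scale-factor=1, a raw pixel passes through unchanged:
print(nvinfer_preprocess(255))
# For a model trained on 0-1 inputs, net-scale-factor=0.0039215697
# maps 255 to roughly 1.0:
print(nvinfer_preprocess(255, net_scale_factor=0.0039215697))
```

If the training-time scaling and the config disagree, the classifier can produce garbage scores that never clear classifier-threshold, which looks like "no output".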

The code runs without errors, but it also produces no output. When I print frame_meta.bInferDone it is zero. Why is that?

I am using a GeForce GTX 1650, TensorRT 7.0.0.11, driver version 450.51.06, and CUDA 11.0.

Thank you.

Hey, is your model a classifier model? It also seems you need to customize the post-process parser; see https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-ssd-parser

Yes, my model is a 2-class classifier. I don't see how the document you are referring to is relevant to my problem, but I do think I should change my parser function. Can you point me to any sample code for a classifier output parser?
Thank you.

The link I referenced is for a customized detection post-process parser, which is similar to a classifier parser; I think it's simple to implement your own parser based on that sample.

You can also refer to the C/C++ code for how to customize a classifier parser: /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer_customparser/nvdsinfer_customclassifierparser.cpp
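The core logic that sample implements is straightforward; here it is re-sketched in plain Python to make it concrete (function and label names are my own, and the real parser operates on NvDsInferLayerInfo buffers rather than Python lists):

```python
import math

def parse_classifier_softmax(logits, labels, threshold):
    """Sketch of what a custom softmax classifier parser does:
    softmax the raw output layer, take the argmax, and keep the
    result only if its probability clears classifier-threshold.
    (Illustrative only; not the DeepStream/pyds API.)"""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None  # parser attaches no attribute in this case
    return labels[best], probs[best]

# Example with a 2-class output layer:
print(parse_classifier_softmax([0.3, 2.1], ["cat", "dog"], 0.2))
```

The C++ version does the same thing per output layer and then fills an NvDsInferAttribute for each surviving result.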

Okay, I will take a look. Thank you so much.

I changed the labels variable in
/opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer_customparser/nvdsinfer_customclassifierparser.cpp

Then I added the .so file and the parser function name to the config file, like so:

[property]
gpu-id=0
process-mode=1 #primary
net-scale-factor=1
model-engine-file=yoloWay.engine
labelfile-path=labels.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=1 #FP32
network-type=1 #classifier
#num-detected-classes=2
interval=0
gie-unique-id=1
classifier-threshold=0.2
#output-blob-names=dense_2
custom-lib-path=nvdsinfer_customparser/libnvds_infercustomparser.so
parse-classifier-func-name=NvDsInferClassiferParseCustomSoftmax

Now I have this error

I debugged the output from nvdsinfer_customclassifierparser.cpp, and it is parsing the output correctly. Now I want to read this output from the Python code. I am using the same Python script as the one in
/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1/deepstream_test_1.py
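The access pattern I am aiming for in the pad probe looks like this, sketched with stand-in objects instead of the real bindings (in an actual probe each list node's .data would go through pyds.NvDsClassifierMeta.cast and pyds.NvDsLabelInfo.cast, and the GList would be walked via .next; the Mock* classes below are placeholders of my own):

```python
class MockLabelInfo:
    """Stand-in for pyds.NvDsLabelInfo (placeholder, not the real binding)."""
    def __init__(self, result_label, result_prob):
        self.result_label = result_label
        self.result_prob = result_prob

class MockClassifierMeta:
    """Stand-in for pyds.NvDsClassifierMeta (placeholder)."""
    def __init__(self, label_info_list):
        self.label_info_list = label_info_list

def read_labels(classifier_meta_list):
    """Walk classifier_meta_list -> label_info_list and collect
    (label, probability) pairs, mirroring the pyds traversal pattern."""
    results = []
    for cmeta in classifier_meta_list:       # real code: cast each node's .data
        for linfo in cmeta.label_info_list:  # real code: cast again, follow .next
            results.append((linfo.result_label, linfo.result_prob))
    return results

meta = [MockClassifierMeta([MockLabelInfo("class_a", 0.91)])]
print(read_labels(meta))  # [('class_a', 0.91)]
```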


Have you tried this with the C/C++ sample to see if the issue persists?

No, I didn't, but I want to work in Python.


Yeah, but we should first make sure the lib itself works well.

Okay will try.

Hi fadwa.fawzy,

Is this still an issue that needs support? Do you have any results you can share?

Thanks

No, I didn't try yet.