Deploy only a classifier in DeepStream

I am trying to deploy a classifier model that takes the whole frame and classifies it as either "a class" or "not that class",
so the input is a frame, and the only output I need is the classification result.
I convert my model to ONNX and then to .engine format using trtexec,
then I use the pgie config from deepstream-test1 and point it at my engine and ONNX files. I also edit the label file and the code inside deepstream_test1.py to use my labels, "violence" and "not violence".
I don't know what to do next! I read that there are additional steps for creating a custom parser and plugins, but I don't have a complete picture of the remaining steps. I feel confused about whether what I have tried is right or is missing something, and also about what the next step is. I hope someone can help.
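For context, a minimal sketch of what an nvinfer classifier config could look like for a 2-class full-frame model (property names follow the Gst-nvinfer documentation; the file names, precision mode, and threshold below are assumptions, not values from this thread):

```
[property]
gpu-id=0
# model files (hypothetical names)
onnx-file=violence_classifier.onnx
model-engine-file=violence_classifier.engine
labelfile-path=labels.txt
batch-size=1
# 1 = classifier (0 = detector)
network-type=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# operate on the full frame, not on detected objects
process-mode=1
classifier-threshold=0.5
```

Note that for classifiers, DeepStream expects the label file to contain all labels on a single line separated by semicolons (e.g. `violence;not violence`), unlike detector label files, which use one label per line.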

More details about the model layers:
Input Tensor:
(“input_1:0”, shape=(?, 224, 224, 3), dtype=float32)
It consists of Conv2D layers followed by max pooling.

Output Tensor:
(“dense_2/Softmax:0”, shape=(?, 2), dtype=float32)
Followed by 2 fully connected layers with softmax.
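Since the model ends in a 2-way softmax, the post-processing a classifier parser has to do is conceptually just an argmax with a confidence threshold over the output vector. A sketch of that logic (this is illustrative Python, not DeepStream code; the threshold mirrors the `classifier-threshold` config key):

```python
def parse_softmax(probs, labels, threshold=0.5):
    """Pick the highest-probability class; return None if below threshold."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None  # no confident label attached to the frame
    return labels[best]

# A (?, 2) softmax output reduces to one of the two labels per frame:
print(parse_softmax([0.1, 0.9], ["violence", "not violence"]))  # → not violence
```

Because this parsing is so generic, DeepStream ships it built in for softmax classifiers, which is why a custom parser should not be required for this model.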

• Hardware Platform: GPU
• DeepStream Version: 5.0
• TensorRT Version: 7
• NVIDIA GPU Driver Version: 450.66
• Issue Type: Question

Hi @aya95,
Maybe you can refer to /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_secondary_vehicletypenet.txt. For a softmax classifier, DeepStream has an integrated post-processing parser, so you should not need a custom one.

Have you tried to run? Any output?

Do you mean that I should put my model engine in that config and then run this command?
deepstream-app -c config_infer_secondary_vehicletypenet.txt
Is that what you mean?

When I run it, with or without adding my model, it gives me this error:
ERROR from src_bin_muxer: Output width not set
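One likely cause, for what it's worth: config_infer_secondary_vehicletypenet.txt is an nvinfer component config, not a top-level deepstream-app config, so deepstream-app cannot run it directly. The "Output width not set" error comes from the stream muxer, whose output resolution is set in the app-level [streammux] group. A sketch of the relevant app-config sections (the resolution, source, and file name below are assumptions):

```
[streammux]
gpu-id=0
batch-size=1
# "Output width not set" indicates these two properties were missing:
width=1280
height=720

[primary-gie]
enable=1
# point this at the nvinfer classifier config, not the other way around
config-file=config_infer_my_classifier.txt
```

The usual layout is one top-level app config (with [source*], [streammux], [sink*], [primary-gie] groups) that references the nvinfer config via config-file.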

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi @aya95,
Sorry for the delay; we were on a long holiday for the past ~10 days!

Is there still anything you need from us? If so, could you clarify your question again?