I’m creating an ALPR DeepStream application on my Jetson Nano, and this is the inference part of the pipeline:
PGIE (Plate Detector) → Tracker → SGIE (OCR Classifier)
I am stuck with a couple of things:
For every tracked plate ID, I want to keep running the secondary classifier (interval=0) until I get a classification confidence higher than a certain threshold; if I never do, I want to simply keep the classification with the highest confidence seen so far.
How do I do this?
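To make the second part concrete, keeping the best classification per tracked ID on the application side looks doable with a buffer probe, something like the sketch below (all names, the threshold idea, and the 128-char label size are placeholders of mine, not working code from my app); it’s the “keep re-inferring until the confidence threshold is reached” part that I don’t know how to control:

```c
#include <gst/gst.h>
#include <glib.h>
#include "gstnvdsmeta.h"

typedef struct {
  gfloat best_prob;
  gchar  best_label[128];
} BestPlate;

/* object_id -> BestPlate*, created once at startup, e.g. in main():
 *   best_by_id = g_hash_table_new_full (g_int64_hash, g_int64_equal, g_free, g_free);
 * and the probe attached with:
 *   gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
 *                      osd_sink_pad_probe, NULL, NULL);
 */
static GHashTable *best_by_id = NULL;

static GstPadProbeReturn
osd_sink_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  NvDsBatchMeta *batch_meta =
      gst_buffer_get_nvds_batch_meta (GST_BUFFER (info->data));
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* Walk the OCR (classifier) results the SGIE attached to this object. */
      for (NvDsMetaList *l_cls = obj_meta->classifier_meta_list; l_cls; l_cls = l_cls->next) {
        NvDsClassifierMeta *cls_meta = (NvDsClassifierMeta *) l_cls->data;

        for (NvDsMetaList *l_lbl = cls_meta->label_info_list; l_lbl; l_lbl = l_lbl->next) {
          NvDsLabelInfo *label = (NvDsLabelInfo *) l_lbl->data;

          BestPlate *best = g_hash_table_lookup (best_by_id, &obj_meta->object_id);
          if (!best) {
            guint64 *key = g_new (guint64, 1);
            *key = obj_meta->object_id;
            best = g_new0 (BestPlate, 1);
            g_hash_table_insert (best_by_id, key, best);
          }
          /* Remember the highest-confidence OCR result seen for this track ID. */
          if (label->result_prob > best->best_prob) {
            best->best_prob = label->result_prob;
            g_strlcpy (best->best_label, label->result_label, sizeof (best->best_label));
          }
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```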
I made two versions of my DeepStream app: a C app and a deepstream_app_config.txt. I want to crop the frame with nvvideoconvert before sending it to the plate detector. Is it possible to do this in deepstream_app_config.txt without diving into C code? If so, is there an example?
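For reference, in the C app I can do the crop by putting an extra nvvideoconvert in front of the PGIE and setting its src-crop property (as far as I can tell it takes “left:top:width:height” in pixels; the coordinates below are only an example). What I’d like is the equivalent in deepstream_app_config.txt:

```c
#include <gst/gst.h>

/* Creates the cropping converter I place between the streammux and the PGIE
 * in the C app. "src-crop" is "left:top:width:height" in pixels; the values
 * here are only an example. */
static GstElement *
make_crop_converter (void)
{
  GstElement *conv = gst_element_factory_make ("nvvideoconvert", "crop-conv");
  g_object_set (G_OBJECT (conv), "src-crop", "200:100:1280:720", NULL);
  return conv;
}
/* ... then gst_bin_add() it and link: streammux -> crop-conv -> pgie -> ... */
```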
After cropping the frame and passing it to the detector, how do I display the inference results on the original full frame?
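Part of what I don’t understand is how the original frame would even reach nvdsosd once I crop; but assuming the full frame is what ends up being displayed (e.g. the crop only happens on the inference path), I imagine the detector’s boxes would need translating back by the crop offsets in a probe, roughly like this sketch (CROP_LEFT/CROP_TOP are hypothetical and would have to match whatever I cropped with):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Hypothetical crop offsets -- must match the src-crop values used above. */
#define CROP_LEFT 200
#define CROP_TOP  100

static GstPadProbeReturn
pgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  NvDsBatchMeta *batch_meta =
      gst_buffer_get_nvds_batch_meta (GST_BUFFER (info->data));
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* Boxes were detected on the cropped frame; shift them back into the
       * coordinate system of the original full frame so the OSD draws them
       * in the right place. */
      obj_meta->rect_params.left += CROP_LEFT;
      obj_meta->rect_params.top  += CROP_TOP;
    }
  }
  return GST_PAD_PROBE_OK;
}
```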
We already have a policy that decides whether the SGIE should run inference on an object when the tracker is enabled.
Why do you need to do that? The SGIE already makes this decision before inferring; you can check gstnvinfer.cpp → should_infer_object.
Yes, I’m aware of the policy of re-inferring when the object size grows by 20%, but I want to change it: I want the classifier to keep re-inferring until the classification probability reaches a certain threshold.
So I dug into gstnvinfer.cpp → should_infer_object; to implement my custom policy I need to access the classifier metadata in order to get the classification probability.
The problem is that in a loop like the one sketched below, the classifier_meta variable is always null. Am I doing something wrong? Is that not the right way to access classifier metadata?
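For clarity, this is roughly what I tried inside my modified should_infer_object (REINFER_CONF_THRESHOLD is just a value I picked, not an existing DeepStream setting); classifier_meta_list never seems to give me anything here:

```c
/* Sketch of my custom policy inside should_infer_object(), using the
 * obj_meta the function already receives. */
#define REINFER_CONF_THRESHOLD 0.9f

gfloat best_prob = 0.0f;

for (NvDsMetaList *l_cls = obj_meta->classifier_meta_list; l_cls != NULL; l_cls = l_cls->next) {
  NvDsClassifierMeta *classifier_meta = (NvDsClassifierMeta *) l_cls->data;

  for (NvDsMetaList *l_label = classifier_meta->label_info_list; l_label != NULL;
       l_label = l_label->next) {
    NvDsLabelInfo *label_info = (NvDsLabelInfo *) l_label->data;
    if (label_info->result_prob > best_prob)
      best_prob = label_info->result_prob;
  }
}

/* Keep re-inferring (interval=0 behaviour) until the OCR result is
 * confident enough; after that, skip this object. */
if (best_prob >= REINFER_CONF_THRESHOLD)
  return FALSE;   /* confident enough -> no more SGIE inference for this object */

return TRUE;      /* not confident yet -> infer again on this frame */
```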