I’m creating an ALPR (automatic license plate recognition) DeepStream application on my Jetson Nano. This is the inference part of the pipeline:
Pgie (Plate Detector) → Tracker → OCR (Classifier)
I am stuck with a couple of things:
For every tracked plate ID, I want to keep running the secondary classifier (interval=0) on that object until I get a classification confidence higher than a certain threshold; if I never do, I want to simply keep the classification with the highest confidence seen so far.
How do I do this?
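To make the question concrete, here is a sketch of the bookkeeping logic I have in mind. In the real app I imagine this would live in a pad probe on the classifier’s src pad, reading `NvDsObjectMeta` / `NvDsClassifierMeta`; here I just use a plain array keyed by the tracker’s object ID to illustrate it. All names (`TrackBest`, `update_best`, `CONF_THRESHOLD`) are my own, not DeepStream API:

```c
#include <stdio.h>
#include <string.h>

#define MAX_TRACKS 256
#define CONF_THRESHOLD 0.85f

typedef struct {
    int                in_use;
    unsigned long long id;              /* tracker object_id */
    float              best_conf;       /* highest confidence seen so far */
    char               best_label[16];  /* plate string at that confidence */
    int                done;            /* stop re-classifying once threshold reached */
} TrackBest;

static TrackBest tracks[MAX_TRACKS];

/* Record (conf, label) for this track; keep only the best result.
 * Returns 1 if the track should still be sent to the classifier,
 * 0 once the confidence threshold has been reached (or table is full). */
static int update_best(unsigned long long id, float conf, const char *label)
{
    TrackBest *t = NULL;
    for (int i = 0; i < MAX_TRACKS; i++) {
        if (tracks[i].in_use && tracks[i].id == id) { t = &tracks[i]; break; }
    }
    if (!t) {  /* first time we see this track ID: grab a free slot */
        for (int i = 0; i < MAX_TRACKS; i++) {
            if (!tracks[i].in_use) {
                t = &tracks[i];
                t->in_use = 1; t->id = id; t->best_conf = -1.0f; t->done = 0;
                break;
            }
        }
    }
    if (!t) return 0;  /* table full; give up on this track */

    if (conf > t->best_conf) {  /* keep the classification with the highest confidence */
        t->best_conf = conf;
        snprintf(t->best_label, sizeof t->best_label, "%s", label);
    }
    if (t->best_conf >= CONF_THRESHOLD) t->done = 1;
    return !t->done;
}
```

What I don’t know is where to hook this so it actually stops the SGIE from re-inferring on a “done” object, which is what I’m asking about.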
I made two versions of my DeepStream app: a C app and a deepstream_app_config.txt for deepstream-app. I want to crop the frame with nvvideoconvert before sending it to the plate detector. Is it possible to do this in deepstream_app_config.txt without diving into C code? If so, is there an example?
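For reference, in the C version I do the crop by setting nvvideoconvert’s `src-crop` property (a `"left:top:width:height"` string). In the config file I was hoping for something along these lines, but I made these keys up myself, I could not find anything like them in the DeepStream 5.1 config reference, which is exactly why I’m asking:

```
# Hypothetical group — these keys are my guess, not documented deepstream-app options
[video-convert]
enable=1
src-crop=200:100:1280:720
```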
After cropping the frame and passing it to the detector, how do I display the inference results on the original full frame?
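My current idea is that each bounding box detected on the cropped frame needs to be mapped back into full-frame coordinates before the OSD draws it. A minimal sketch, assuming the crop is a pure translation with no rescaling (`BBox` and `map_to_full_frame` are my own names, not DeepStream API):

```c
/* Map a bbox detected on a cropped frame back to the original full frame.
 * Assumes the crop only translated the image: the detector saw the region
 * starting at (crop_left, crop_top) of the original frame. If the cropped
 * region were also rescaled, width/height and offsets would additionally
 * need to be multiplied by the scale factors. */
typedef struct { float left, top, width, height; } BBox;

static BBox map_to_full_frame(BBox b, float crop_left, float crop_top)
{
    b.left += crop_left;  /* shift back by the crop origin */
    b.top  += crop_top;
    return b;             /* width/height unchanged: no scaling assumed */
}
```

Is this the right approach with deepstream-app, or is there a built-in way to keep the OSD drawing on the uncropped frame?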
• Hardware Platform: Jetson Nano 4GB
• DeepStream Version: 5.1
• JetPack Version: 4.5.1
• TensorRT Version: 7.1.3
• Issue Type: Question