Can I use the models downloaded from NGC to run inference directly on some images, without training?

I downloaded LPRNet from NGC via the CLI, but when I tried to use it to run inference on some images, it prompted that a spec file must be specified via the -e option. Does that mean I have to train the model on my own dataset before inference? Also, what is the format of the spec file mentioned in the documentation?
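For reference, here is roughly what the TLT 3.0-era invocation looks like. This is a minimal sketch, assuming the tlt launcher syntax of that time; the paths, the spec filename, and the key are placeholders to adjust (the LPRNet model card on NGC gives the actual key and spec details).

    # Hedged sketch: paths and key are assumptions; check
    # "tlt lprnet inference --help" and the LPRNet model card on NGC.
    tlt lprnet inference -m /workspace/models/us_lprnet_baseline18_trainable.tlt \
                         -i /workspace/data/test_images \
                         -e /workspace/specs/lprnet_spec.txt \
                         -k <your_model_key>

The spec file itself is a plain-text protobuf config. A minimal sketch of the sections most relevant here, with values matching the US baseline18 pretrained model given as assumptions:

    random_seed: 42
    lpr_config {
      hidden_units: 512
      max_label_length: 8
      arch: "baseline"
      nlayers: 18
    }
    augmentation_config {
      output_width: 96
      output_height: 48
      output_channel: 3
    }
    dataset_config {
      # Paths below are placeholders; characters_list_file maps network
      # outputs back to plate characters, so it matters even for inference.
      data_sources {
        label_directory_path: "/workspace/data/train/label"
        image_directory_path: "/workspace/data/train/image"
      }
      characters_list_file: "/workspace/specs/us_lp_characters.txt"
    }
    eval_config {
      batch_size: 1
    }

The -e spec is required because the tool reads the network architecture and character list from it, so needing -e does not by itself mean the model must be retrained first.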

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi kayccc, thanks for your response. Here is my driver version:
NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2
I have not installed DeepStream or TensorRT because I am just trying to use the pre-trained model to see if it can recognize my plates. Are they necessary on my OS in order to run inference?

Hi @user16037 ,
Could you take a look at GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream and see whether the way it uses LPRNet can serve as a reference for you?
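For anyone landing here later, the overall flow with that repo is roughly as follows. This is a hedged sketch: the model download/conversion steps and the exact deepstream-lpr-app arguments vary by version, so treat everything below as an assumption to verify against the repo's README.

    # Sketch only: confirm each step against the deepstream_lpr_app README.
    git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
    cd deepstream_lpr_app

    # The README covers downloading TrafficCamNet, LPDNet, and LPRNet from NGC
    # and converting the LPRNet .etlt to a TensorRT engine with tlt-converter.

    make
    cd deepstream-lpr-app
    # Argument layout is an assumption (US plates, h264 file output, ROI off);
    # run the binary without arguments for the authoritative usage string.
    ./deepstream-lpr-app 1 1 0 us_car_test.mp4 output.264

Note that this route does require DeepStream and TensorRT to be installed, which also bears on the earlier question: the pre-trained model can be exercised either through the TLT/TAO container (no DeepStream needed on the host) or through a DeepStream pipeline like this one.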

Thanks!
