Deploy NVIDIA pre-trained YOLOv4 model to TAO Triton

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
Ubuntu20, x86, RTX3090
• Network Type
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (If you have one, please share it here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

There’s an NVIDIA pretrained model, LPDNet, for detecting car license plates. I’m wondering how to directly download the version 2.0 model (based on YOLOv4-Tiny, with FP16 accuracy) and use it to start Triton server inference.
This is what I understand:

  1. Convert the .etlt to .plan
    by adding a conversion step for yolov4_tiny_ccpd_deployable.etlt.
    Could you provide a sample?
  2. Add post-processing
    All object detection models need custom post-processing, is that correct?
    Could you provide a sample?
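For step 1, a minimal sketch of the conversion with tao-converter could look like the following. The encryption key, input dimensions, and output node name below are assumptions; check the LPDNet model card on NGC for the actual values for your model.

```shell
# Sketch: convert the downloaded .etlt into a TensorRT engine (.plan).
# -k : model encryption key ("nvidia_tlt" is an assumption; see the model card)
# -d : input dims C,H,W (verify against the model card)
# -o : output node name (BatchedNMS is an assumption for an NMS-enabled YOLOv4 export)
# -t : precision; fp16 matches the model variant being deployed
# -m : max batch size
# -e : destination inside the Triton model repository
tao-converter yolov4_tiny_ccpd_deployable.etlt \
  -k nvidia_tlt \
  -d 3,1088,1920 \
  -o BatchedNMS \
  -t fp16 \
  -m 1 \
  -e model_repository/lpdnet_tao/1/model.plan
```

The resulting model.plan is what Triton loads as the versioned model file.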

The LPDNet version 2.0 model is based on the YOLOv4-tiny network, which is very similar to the YOLOv4 network. So your task is to implement YOLOv4 support in Triton yourself. To get started, I suggest you run YOLOv3 first to get familiar with it, since YOLOv3 is already available in the Triton apps.
For example,

The YOLOv4 postprocessing can be found in

Thanks, Morgan.
If I prefer the DetectNet_v2-based model (version 1.0) of LPDNet, what do I need to change? Could you provide some hints?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

If using version 1.0, since it is based on the DetectNet_v2 network, not much change is needed. You can refer to DashCamNet or PeopleNet in GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton as examples; both are based on the DetectNet_v2 network.
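For orientation, DetectNet_v2 post-processing differs from YOLOv4: the network outputs a coverage (confidence) grid and a bbox grid at reduced resolution, and the client decodes boxes per grid cell. Below is a hedged sketch of that decoding; the stride and the edge-offset convention follow the common DetectNet_v2 setup, so verify them against the DashCamNet/PeopleNet postprocessor in tao-toolkit-triton-apps. Clustering/NMS of the raw boxes is omitted.

```python
# Sketch of DetectNet_v2 decoding for a single class. For every grid cell
# whose coverage exceeds a threshold, the four bbox channels are taken as
# left/top/right/bottom edge offsets from the cell center.

STRIDE = 16.0  # grid-cell size in input pixels (assumption: standard DetectNet_v2)

def decode_detectnet_v2(coverage, bbox, conf_threshold=0.4):
    """coverage: [rows][cols] confidences; bbox: [4][rows][cols] edge offsets."""
    detections = []
    rows, cols = len(coverage), len(coverage[0])
    for r in range(rows):
        for c in range(cols):
            score = coverage[r][c]
            if score < conf_threshold:
                continue
            # Cell center in input-image pixel coordinates.
            cx = (c + 0.5) * STRIDE
            cy = (r + 0.5) * STRIDE
            # Edge offsets relative to the cell center.
            x1 = cx - bbox[0][r][c]
            y1 = cy - bbox[1][r][c]
            x2 = cx + bbox[2][r][c]
            y2 = cy + bbox[3][r][c]
            detections.append({"score": score, "bbox": (x1, y1, x2, y2)})
    return detections
```

In the real apps the surviving raw boxes are then grouped (e.g. via DBSCAN-style clustering) before being reported as final detections.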

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.