Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc)
Ubuntu20, x86, RTX3090
• Network Type
Yolo_v4
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)
NVIDIA provides a pretrained model, LPDNet, for detecting license plates. I’m wondering how to directly download the version 2.0 model of it (based on YOLOv4-Tiny, FP16 accuracy) and use it to start Triton server inference.
This is what I understand:
Convert the .etlt to .plan
by modifying download_and_convert.sh to add a conversion step for yolov4_tiny_ccpd_deployable.etlt. Could you provide a sample?
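This is roughly the kind of addition I have in mind for download_and_convert.sh. To be clear, the tao-converter flags below, the input dimensions, the output node name, and the model key are all my assumptions based on typical TAO YOLOv4-tiny deployments, not values I have confirmed for LPDNet v2.0:

```shell
# Sketch only -- verify every value against the LPDNet v2.0 model card:
#   -k : encryption key published on the model card (assumed here)
#   -d : input dimensions in CHW order (assumed)
#   -o : output node name; TAO YOLOv4-tiny exports typically end in BatchedNMS
#   -t : target precision
#   -e : path where the TensorRT engine (.plan) is written
mkdir -p ./model_repository/lpdnet_tao/1

tao-converter yolov4_tiny_ccpd_deployable.etlt \
  -k nvidia_tlt \
  -d 3,480,640 \
  -o BatchedNMS \
  -t fp16 \
  -e ./model_repository/lpdnet_tao/1/model.plan
```

The engine would then sit under the Triton model repository in the usual `<model_name>/<version>/model.plan` layout.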
Add post-processing
All object detection models need custom post-processing added, is that correct? Could you provide a sample?
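For the post-processing step, here is a minimal sketch of what I imagine it looks like for an engine that ends in a BatchedNMS plugin. The four output tensors (detection count, boxes, scores, class IDs) and the normalized-coordinate assumption are typical of TAO YOLO exports, but the actual names and shapes would need to be checked against the LPDNet v2.0 engine:

```python
# Sketch of post-processing for a TAO YOLOv4-style engine ending in a
# BatchedNMS plugin. Output tensor semantics assumed (not verified):
#   BatchedNMS   -> num_dets : number of valid detections
#   BatchedNMS_1 -> boxes    : (N, 4) as (x1, y1, x2, y2), normalized [0, 1]
#   BatchedNMS_2 -> scores   : (N,) confidence scores
#   BatchedNMS_3 -> classes  : (N,) class indices
import numpy as np

def postprocess(num_dets, boxes, scores, classes,
                img_w, img_h, conf_thresh=0.3):
    """Turn raw BatchedNMS outputs for one image into a detection list."""
    results = []
    for i in range(int(num_dets)):
        if scores[i] < conf_thresh:
            continue
        x1, y1, x2, y2 = boxes[i]
        results.append({
            "class_id": int(classes[i]),
            "score": float(scores[i]),
            # Scale normalized coordinates back to pixel space
            # (assumes the engine emits coordinates in [0, 1]).
            "bbox": [x1 * img_w, y1 * img_h, x2 * img_w, y2 * img_h],
        })
    return results

# Example with dummy arrays shaped like the assumed engine outputs:
dets = postprocess(
    num_dets=2,
    boxes=np.array([[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.3, 0.3]]),
    scores=np.array([0.9, 0.1]),
    classes=np.array([0, 0]),
    img_w=640, img_h=480,
)
# Only the first detection survives the 0.3 confidence threshold.
```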
The LPDNet version 2.0 model is based on the YOLOv4-tiny network, which is very similar to the YOLOv4 network. So your task is to implement YOLOv4 support in Triton yourself. To get started, I suggest you run the YOLOv3 sample, which is already available in the Triton apps, to get familiar with that network.
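To give a feel for what implementing the model in Triton involves, a config.pbtxt along these lines would sit next to the engine. The model name, input/output tensor names, and dimensions below are assumptions; they should be confirmed by inspecting the actual .plan (for example with trtexec):

```
# Hypothetical model_repository/lpdnet_tao/config.pbtxt -- all names and
# dims are assumptions to be verified against the converted engine.
name: "lpdnet_tao"
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "Input"
    data_type: TYPE_FP32
    dims: [ 3, 480, 640 ]
  }
]
output [
  { name: "BatchedNMS"   data_type: TYPE_INT32 dims: [ 1 ] }
  , { name: "BatchedNMS_1" data_type: TYPE_FP32 dims: [ 200, 4 ] }
  , { name: "BatchedNMS_2" data_type: TYPE_FP32 dims: [ 200 ] }
  , { name: "BatchedNMS_3" data_type: TYPE_FP32 dims: [ 200 ] }
]
```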
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.