TAO LPDNet YOLOv4 model instead of YOLOv4-tiny?


Would it be possible for you to release a YOLOv4-based US model of LPDNet instead of the YOLOv4-tiny model, which would hopefully improve accuracy?


@Morganh I think it is a request for TAO.

Moving to TAO forum.

Actually, the YOLOv4-tiny model already achieves high accuracy (99.53%) when running inference against an internal NVIDIA 3k LPD evaluation dataset. See more in LPDNet | NVIDIA NGC.
You can give it a try.
Also, the pruned YOLOv4-tiny model should deliver higher FPS than a YOLOv4 model would.

I see. The work I’m doing is not real-time, so inference performance isn’t of much importance to me.

It would still be nice to have the YOLOv4 model instead of the tiny model, but either way it looks like I’ll want to do some retraining on my own.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

Yes, end users can use the trainable .tlt model to train against their own dataset.
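For reference, retraining from the trainable model typically means downloading it from NGC and launching training with the TAO launcher. This is a hypothetical sketch: the exact version tag, spec file path, and encryption key are placeholders you must fill in from the LPDNet page on NGC, not values taken from this thread.

```shell
# Download the trainable LPDNet model from NGC
# (the version tag below is an assumption; check the NGC model page).
ngc registry model download-version "nvidia/tao/lpdnet:trainable_v1.0" \
    --dest ./pretrained_lpdnet

# Fine-tune against your own dataset with the TAO launcher.
# The experiment spec file and the encryption key are placeholders.
tao yolo_v4_tiny train \
    -e /workspace/specs/lpdnet_retrain.txt \
    -r /workspace/results/lpdnet_retrain \
    -k <your_encryption_key>
```

The experiment spec file is where you point at your own dataset and adjust training parameters; the LPDNet model card on NGC documents the expected format.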

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.