Changes needed to convert a custom YOLOv3 architecture with TLT 3

For our use case, we use a modified YOLOv3 architecture in which the number of filters differs from the standard v3 architecture. Would it be possible to train such an architecture in the TLT pipeline and use the associated features?

The reason for reducing the architecture is limited resource availability.

Currently, TLT/TAO exposes some configuration options before training. See the YOLOv3 — TAO Toolkit 3.22.05 documentation.
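As an illustrative sketch of what those options cover (field names are drawn from the TAO YOLOv3 spec documentation; the anchor values and backbone choice here are hypothetical), the model section of the training spec looks roughly like:

```
yolov3_config {
  # Anchor boxes for the three detection scales (example values, not defaults)
  big_anchor_shape: "[(114, 60), (190, 66), (277, 62)]"
  mid_anchor_shape: "[(30, 150), (33, 20), (62, 34)]"
  small_anchor_shape: "[(10, 14), (23, 27), (37, 58)]"
  matching_neutral_box_iou: 0.5
  # The backbone is chosen from a supported list, e.g. "resnet" with a given depth
  arch: "resnet"
  nlayers: 18
  freeze_bn: false
}
```

Note that this lets you swap among supported backbones and tune anchors and training hyperparameters, but it does not let you set per-layer filter counts, which is the customization being asked about here.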

Configuring or training a custom YOLOv3 architecture outside that supported range is not allowed.

Yes, I have gone through this documentation. Thanks for clarifying.

So is it planned in future scope?

Not sure. I will sync with the internal team.

Would appreciate any feedback. Thank you

Regarding “where the number of filters used”: can pruning work for you? See YOLOv3 - NVIDIA Docs.
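A minimal pruning sketch, assuming the TAO 3.x CLI (paths and the encryption key are placeholders; check the YOLOv3 pruning docs for the exact flags on your version — `-pth` is the threshold that controls how aggressively filters are removed):

```
# Prune a trained model: a higher -pth removes more filters and shrinks the network
tao yolo_v3 prune -m /workspace/yolov3_trained.tlt \
                  -o /workspace/yolov3_pruned.tlt \
                  -eq intersection \
                  -pth 0.1 \
                  -k $KEY
# After pruning, retrain the pruned model to recover accuracy before export.
```

The relevance to this thread: pruning removes low-magnitude filters layer by layer, so it can yield a reduced-filter YOLOv3 similar in spirit to the hand-modified architecture, without TLT having to accept a custom architecture definition as input.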


We have reduced the filters in most layers to make a reduced YOLOv3 architecture, with inference times similar to tiny-YOLOv3 and higher accuracy. Pruning may help, but I wasn’t clear on how extensively it can work. I am experimenting with the existing TLT to see whether pruning is enough, but what I am really looking for is the freedom to provide the network architecture itself as input, or as a modification.

Freely customizing the architecture is not supported.
For your case, YOLOv4-tiny will be supported in the next release.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.