For our use case, we use a modified YOLOv3 architecture in which the number of filters differs from the standard v3 architecture. Would it be possible to train such an architecture in the TLT pipeline and still use the associated features?
The reason for reducing the architecture is limited resource availability.
We have reduced the filters at most layers to produce a slimmed-down YOLOv3 with inference times similar to Tiny-YOLOv3 but higher accuracy. Pruning may help, but it isn't clear to me how extensively it works. I am experimenting with the existing TLT pipeline to see whether pruning alone is enough, but what I am really looking for is the freedom to supply the network architecture itself as input, or as a modification.
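To make the kind of modification concrete, here is a minimal sketch of reducing per-layer filter counts with a width multiplier. The layer list, the 0.5 multiplier, and the rounding rule are illustrative assumptions of mine, not TLT configuration or our exact architecture:

```python
# Hypothetical sketch: shrinking YOLOv3 conv filter counts with a width
# multiplier. The values below are illustrative, not TLT settings.

# Filter counts for the standard YOLOv3 backbone stages (Darknet-53).
STANDARD_FILTERS = [32, 64, 128, 256, 512, 1024]

def reduce_filters(filters, multiplier=0.5, minimum=16):
    """Scale each layer's filter count, keeping a floor and
    rounding down to a multiple of 8 for efficient GPU kernels."""
    reduced = []
    for f in filters:
        scaled = max(minimum, int(f * multiplier))
        scaled = (scaled // 8) * 8  # round down to a multiple of 8
        reduced.append(scaled)
    return reduced

print(reduce_filters(STANDARD_FILTERS))  # prints [16, 32, 64, 128, 256, 512]
```

This only illustrates the kind of filter reduction I mean; whether TLT can train such a custom graph is exactly my question above.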