What is the architecture used in the pruned PeopleNet ResNet34/ResNet18 models?

I understand it's a hyperparameter that needs to be chosen by the user. Let me put it this way: the accuracy and performance of the model depend on its number of operations, which is directly determined by the architecture of the pruned model.

NVIDIA is able to achieve 10 fps on a Jetson Nano with a particular pruning threshold, and 83% accuracy on NVIDIA's internal dataset.

I am trying to match NVIDIA's performance and accuracy. In one of the earlier questions (Accelerating Peoplnet with tlt for jetson nano - #13 by Morganh), you gave instructions to achieve 10 fps on the Nano with pth=0.005. I tried to reproduce the same architecture, but got only 7.5 fps (with jetson_clocks and performance mode enabled).
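For context, the steps I followed are roughly along these lines. The model/output paths and $KEY are placeholders for my actual values, and the exact tlt-prune flags may differ slightly between TLT versions, so treat this as a sketch of my setup rather than a literal transcript:

```
# Lock the Nano to its maximum performance profile before benchmarking
sudo nvpmodel -m 0        # MAXN (10 W) mode on the Jetson Nano
sudo jetson_clocks        # pin CPU/GPU/EMC clocks to their maximum

# Prune the trained PeopleNet model with the threshold from the linked thread
# (paths and $KEY are placeholders)
tlt-prune -m resnet34_peoplenet_unpruned.tlt \
          -o resnet34_peoplenet_pruned.tlt \
          -pth 0.005 \
          -k $KEY
```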

I have one more query: is it possible to transfer the pruned weights to another framework such as Keras for training?