Transfer Learning toolkit models vs Deepstream models on the Nano

Has anybody been able to get the Transfer Learning Toolkit models to run at the same performance level as the Deepstream models on the Nano?

Hi adventuredaisy,
We don’t have a definitive result on that yet.
I want to mention that it is not an “apples-to-apples” comparison between the TLT pre-trained resnet10 model (at ngc.nvidia.com) and the resnet10.caffemodel inside DeepStream.
The TLT pre-trained model is just pre-trained weights for end users to train on their own data.
End users may use it or not; if not, just leave the corresponding setting unfilled in the training spec.

The resnet10 caffe model was not trained with TLT; TLT was designed much later than DeepStream.
The only thing they (the TLT and DS caffe models) share is the detectnet_v2 network: the DS caffe model was trained with a network similar to detectnet_v2, and the detectnet_v2 network is available in TLT.

I have tried TLT and it just can’t give me the performance I need.
I need the performance of the DS resnet10 model. If you can say that the TLT resnet10 model, with enough tweaking and training, will give me the same performance as the DS resnet10 caffemodel, I will stop bugging you and get back to trying to figure it out.

Can’t you just show us how you trained the DS resnet10 caffe model, so we can do the same with our own data?

I am really pumped up about the performance of the Nano for my projects.

Here are some things I have been up to with the Nano:
GitHub repo: https://github.com/valdivj

YouTube: https://www.youtube.com/channel/UCOOidQm3y9w22O9n8yo30yQ?view_as=subscriber

You guys even have one of my tutorials on the Jetson Nano Developers homepage:
“Jetson Nano Kinect2”
by Joev Valdivia

I need the TLT models to be able to do this, and I haven’t been able to come close.

https://youtu.be/IOLfrbDwDXc

Hi adventuredaisy,
I will check for more info internally and give you feedback if there are any findings.

Hi Morganh,

https://devtalk.nvidia.com/default/topic/1052332/jetson-nano/deepstream-on-jetson-nano-object-detection/

In the above-mentioned URL, they said the DS resnet-10 model was trained with the Transfer Learning Toolkit. There has been no definitive word from the NVIDIA team on how that model was actually trained.

Hi sathiez,
The DS resnet-10 model is a caffe model, which is apparently not trained with TLT. But I’m afraid the link you mentioned is talking about pruning. Indeed, DeepStream’s resnet10 caffe model is pruned; you can see that its size is small. For pruning, TLT provides the tlt-prune function.

It is necessary for end users to prune the trained tlt model and then retrain it.

Hi Morganh,

Since you said the DS resnet-10 caffe model was pruned using TLT, can we prune a caffe model with its prototxt in TLT?

No, I don’t mean the DS resnet-10 caffe model was pruned using TLT. I mean the DS resnet-10 caffe model was generated via a pruning strategy.
TLT provides tlt-prune as one kind of pruning strategy.
TLT can only prune tlt models.
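As a rough sketch of what pruning a trained detectnet_v2 tlt model looks like (the paths, spec names, and the -pth threshold below are illustrative placeholders, not values from this thread), the tlt-prune invocation takes the trained model, an output path, and a pruning threshold:

```shell
# Prune a trained detectnet_v2 .tlt model (all paths and the threshold are illustrative).
# -pth controls pruning aggressiveness: a higher threshold removes more channels,
# giving a smaller but initially less accurate model that must be retrained.
tlt-prune -m experiment_unpruned/weights/resnet10_detector.tlt \
          -o experiment_pruned/resnet10_detector_pruned.tlt \
          -eq union \
          -pth 0.01 \
          -k $NGC_API_KEY
```

The threshold is the main knob to experiment with when trying to match the size of the DS sample model.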

See the topic below, shared by adventuredaisy, about his success.
https://devtalk.nvidia.com/default/topic/1068429/transfer-learning-toolkit/jetson-nano-running-both-the-ds-resnet10-and-the-tlt-resnet10-comparison/

tlt-prune plays an important role in model size. End users can prune the tlt model to be as small as the model given in the DS sample, then retrain and check the mAP, iterating to arrive at the best combination of mAP and size.
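The prune-retrain-evaluate loop described above can be sketched as follows (spec file names and experiment paths are assumptions for illustration; the tlt-train/tlt-evaluate commands follow the detectnet_v2 CLI pattern):

```shell
# After pruning, retrain the pruned model to recover accuracy.
# The retrain spec should point at the pruned .tlt model; -n names the output model.
tlt-train detectnet_v2 -e specs/detectnet_v2_retrain_resnet10.txt \
                       -r experiment_retrain \
                       -k $NGC_API_KEY \
                       -n resnet10_detector_pruned

# Then evaluate mAP on the retrained model; if accuracy dropped too far,
# go back and prune again with a lower -pth threshold.
tlt-evaluate detectnet_v2 -e specs/detectnet_v2_retrain_resnet10.txt \
                          -m experiment_retrain/weights/resnet10_detector_pruned.tlt \
                          -k $NGC_API_KEY
```

Repeating this loop with different thresholds is how you search for the best trade-off between mAP and model size.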