After some tweaking and headbanging I was able to get the DS resnet10.caffemodel to run in DIGITS.
I combined the DetectNet model with the deploy prototxt of the DS resnet10.caffemodel and used the weights of the DS resnet10.caffemodel to train with. I also used the KITTI dataset as my training data. I am putting this out there because of an issue I have come up against that I hope somebody with better knowledge of DIGITS can figure out.
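For anyone reproducing this: DIGITS object-detection training expects labels in the KITTI format, which uses 15 space-separated fields per object. A quick sanity-check sketch for label files before building the dataset (the sample values below are illustrative, not from my actual data):

```python
# Sanity-check KITTI label lines before creating a DIGITS object-detection dataset.
# The KITTI format uses 15 space-separated fields per object:
# type, truncated, occluded, alpha, bbox (4), dimensions (3), location (3), rotation_y

def check_kitti_label_line(line):
    """Return True if the line has 15 fields and a valid bounding box."""
    fields = line.split()
    if len(fields) != 15:
        return False
    left, top, right, bottom = map(float, fields[4:8])
    return right > left and bottom > top

# Example label line for a car (values are illustrative):
sample = "Car 0.00 0 -1.57 100.0 120.0 200.0 220.0 1.5 1.6 3.9 2.0 1.5 30.0 -1.55"
print(check_kitti_label_line(sample))  # True
```

Running something like this over every label file catches truncated or malformed lines, which otherwise fail silently during DIGITS dataset creation.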
Here is a link to my github repo with the DS resnet10.caffemodel that was produced by DIGITS:
Here is a link to a YouTube video with an explanation of how to run the DS resnet10.caffemodel in DIGITS:
Is DIGITS essential for you?
If not, it's recommended to use our Transfer Learning Toolkit (TLT), which targets model retraining.
Please check this introduction page for more details:
I have tried the TLT and the models do not even come close to the performance of the DS models.
I need the performance of the DS resnet10.caffemodel for the Nano project I am working on.
I'm kind of bummed that NVIDIA would show us the amazing performance of the Nano with the DS models,
yet not show us how to achieve the same result with our own datasets.
I can get the Nano to run 6 cameras and stream the results at 26 FPS with the DS resnet10.caffemodel:
The TLT models can't even come close to that performance.
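For context, the multi-camera run above was driven through deepstream-app. A trimmed sketch of the kind of config involved (paths and camera settings are illustrative, not my exact setup):

```ini
# deepstream-app config sketch: two of the six camera sources shown;
# sources 2-5 follow the same pattern.
[source0]
enable=1
# type 1 = V4L2 camera; see the deepstream-app docs for other source types
type=1
camera-width=1280
camera-height=720
camera-fps-n=30

[source1]
enable=1
type=1
camera-width=1280
camera-height=720
camera-fps-n=30

[primary-gie]
enable=1
# points at the nvinfer config that loads resnet10.caffemodel
config-file=config_infer_primary.txt
```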
Which model do you use?
TLT supports a ResNet10 detection model, which is similar to the DeepStream primary detector.
I tried TLT when it first came out.
Has it been updated so that the TLT ResNet10 model performs at the same level as the DS model?
So you are saying that the TLT Resnet10 model can perform at the same speed and accuracy with multiple camera inputs as the DS Resnet10.caffemodel can?
Which model did you try when using TLT?
The ResNet-10 model has the same architecture as the primary detector in DeepStream.
So it should give you similar performance.
Have you run the TLT DetectNet_v2 ResNet10 model against the DS resnet10.caffemodel?
The performance of the TLT DetectNet_v2 ResNet10 model doesn't come close.
Am I doing something wrong?
You will need to apply pruning to get similar performance.
The default unpruned model won't perform as well as the primary detector in DeepStream.
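For anyone else hitting this: pruning in TLT is done with the tlt-prune command followed by a retraining pass. A sketch of the usual flow, assuming TLT 1.0; the paths, threshold, and key are placeholders, and the flag names should be checked against the docs for your TLT version:

```shell
# Prune the trained DetectNet_v2 model, then retrain to recover accuracy.
# -pth is the pruning threshold; higher values prune more aggressively.
tlt-prune -pm detectnet_v2_resnet10.tlt \
          -o experiment_dir_pruned \
          -pth 0.5 \
          -k $NGC_API_KEY
# After pruning, retrain with tlt-train using the pruned model as the
# pretrained weights, then re-evaluate against the DS resnet10.caffemodel.
```

Pruning shrinks the network before inference, which is what gets the model near the deployed DeepStream model's speed; the follow-up retraining step is what recovers the accuracy lost to pruning.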
I will give TLT another try.