Are all Caffe-trained models compatible with the sampleSSD sample in TensorRT?

Hi,

I have used sampleSSD to run inference.

The example uses a VGG-16 based Caffe model and its deploy.prototxt.
I was successful with the inference.

Now I would like to train the model with my own data set.

  1. How can I do this using a VGG-based Caffe model?

  2. Is there any repo or guide for training the model?

  3. Are Caffe models trained with MobileNet, VGG, ResNet, DetectNet, or any other base network all compatible with the sampleSSD example?

  4. Can I train any Caffe model and run inference using sampleSSD?

  5. Or does it support only VGG-based Caffe models?

  6. If so, what changes need to be made to train other models in Caffe and run inference using sampleSSD?

Please let me know thes…

Thank you

Hi,
To train your model, you can follow the links below:
https://docs.nvidia.com/deeplearning/digits/digits-getting-started-caffe/index.html
https://docs.nvidia.com/deeplearning/digits/digits-tutorial/index.html

sampleSSD is tested with VGG-16; you may have to make some modifications based on your model. Please check this link, which may be useful:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-710-ea/developer-guide/index.html#add_custlay_c_no_caffe

Also check the sampleMNIST sample for reference.

You can use the trtexec command to test your Caffe model.
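For example, something along these lines (TensorRT 7.x trtexec; the output blob names below come from sampleSSD's prototxt and are assumptions, so substitute your model's actual outputs):

    # Build and time an engine directly from a Caffe SSD model.
    trtexec --deploy=deploy.prototxt \
            --model=model.caffemodel \
            --output=detection_out --output=keep_count \
            --batch=1 \
            --saveEngine=ssd.engine

If the engine builds and runs, all of the model's layers are at least parseable and executable by TensorRT.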

Thanks!

How can I use the trtexec command to test my model?

I want to train a Caffe model using VGG-16 or Inception Net.
Is it possible to convert it to TensorRT?

I know it works well with the VGG-16 net, but will it work with Inception Net as well?

Could you recommend any other repo for training?

Does DIGITS support Caffe models based on VGG-16?

Hi @god_ra, please find the link below. This might resolve your queries.
https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html

You can try out the samples, which cater to different use cases.

Thanks!

You did not really answer my question.

I already have an idea about those samples.

I just need to know whether I can use a Caffe-trained Inception model to run inference using the sampleSSD repo.

What changes do I need to make?

And could you provide me a link for training my own dataset with the Caffe VGG-16 net? Later I can run inference with sampleSSD.

Hi, you may do the custom model training using DIGITS.
For further queries/issues related to it, you may raise your concern on the DIGITS forum.
Thanks!

Okay.

If I do custom object detection training there using the VGG-16 model (deploy.prototxt and .caffemodel file),

would it be possible to run inference using sampleSSD?

Or will there be a problem with some layers and other things, apart from replacing the Flatten layers with Reshape (which I would do anyway before inference)?

Yes, this is possible. However, your model may contain additional/custom layers that are not supported; to resolve that, you may have to add a custom plugin. Please see the custom-layer link shared earlier for reference.
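For reference, here is a minimal sketch of how sampleSSD-style parsing hooks those plugins up (TensorRT 7.x C++ API; the file names and the "detection_out"/"keep_count" output blob names are assumptions based on sampleSSD and should be adapted to your model):

    // Minimal sketch of sampleSSD-style Caffe parsing (TensorRT 7.x).
    #include "NvCaffeParser.h"
    #include "NvInfer.h"
    #include "NvInferPlugin.h"
    #include <iostream>

    class Logger : public nvinfer1::ILogger
    {
        void log(Severity severity, const char* msg) override
        {
            if (severity <= Severity::kWARNING)
                std::cout << msg << std::endl;
        }
    } gLogger;

    int main()
    {
        // SSD layers such as PriorBox and DetectionOutput are not native
        // TensorRT layers; registering the built-in plugins lets the
        // Caffe parser resolve them.
        initLibNvInferPlugins(&gLogger, "");

        nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
        nvinfer1::INetworkDefinition* network = builder->createNetworkV2(0U);
        nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();

        const nvcaffeparser1::IBlobNameToTensor* blobs = parser->parse(
            "deploy.prototxt", "model.caffemodel", *network,
            nvinfer1::DataType::kFLOAT);
        if (!blobs)
        {
            // A failure here usually means the prototxt contains a layer
            // the parser cannot map; that layer needs a custom plugin.
            std::cerr << "Failed to parse the Caffe model." << std::endl;
            return 1;
        }

        // sampleSSD marks the two outputs of the DetectionOutput layer.
        network->markOutput(*blobs->find("detection_out"));
        network->markOutput(*blobs->find("keep_count"));

        // ... build and run the engine as in sampleSSD ...

        parser->destroy();
        network->destroy();
        builder->destroy();
        nvcaffeparser1::shutdownProtobufLibrary();
        return 0;
    }

If parsing fails on a layer that is not in the supported-operations list, that is the point where a custom plugin has to be supplied.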

You can also check the supported Caffe operations here:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-713/support-matrix/index.html#supported-ops

Thanks!

I used the MobileNet SSD model to check the detection, with the corresponding deploy.prototxt and model.
Everything went smoothly, but I did not get any detections.

Here are the files I used: .caffemodel file

deploy.prototxt

I changed the Flatten layers to Reshape and added the keep_count output.
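Concretely, the edit for each Flatten layer looked roughly like this (layer and blob names here are illustrative; the pattern follows the sampleSSD README):

    # Before: a Flatten layer from the SSD deploy.prototxt
    layer {
      name: "mbox_conf_flatten"
      type: "Flatten"
      bottom: "mbox_conf_perm"
      top: "mbox_conf_flat"
      flatten_param { axis: 1 }
    }

    # After: an equivalent Reshape
    layer {
      name: "mbox_conf_flatten"
      type: "Reshape"
      bottom: "mbox_conf_perm"
      top: "mbox_conf_flat"
      reshape_param {
        shape { dim: 0 dim: -1 dim: 1 dim: 1 }
      }
    }

    # And a second top added to the DetectionOutput layer:
    layer {
      name: "detection_out"
      type: "DetectionOutput"
      # (bottoms and detection_output_param omitted here)
      top: "detection_out"
      top: "keep_count"
    }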

But I am still not able to get any detections.

What could be the problem?