How to parse a trained ZF Caffe model in TensorRT?

I created an object detection application with a ZF Caffe model. I have FRCNN.caffemodel, test.prototxt, and voc_config.json. I want to speed it up on a Jetson Nano by converting it to TensorRT.
I’ve found many sources that convert a Caffe model to TensorRT, but I can’t find any source or doc for converting a ZF Caffe model to TensorRT. Can you help me?

Hi,

A ZF Caffe model should go through the same Caffe-to-TRT optimization path as any other Caffe model, as long as all of its operations are supported by the Caffe parser in TRT.

Could you please try to convert your model using the Caffe parser?
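
If it helps, here is a minimal sketch of that conversion with the TensorRT 7 C++ API. The output blob names “cls_prob” and “bbox_pred” are placeholders; substitute the output blobs defined in your test.prototxt:

```cpp
// Minimal sketch: build a TensorRT engine from a Caffe model with the Caffe parser.
// Output blob names below are placeholders -- use the ones from your prototxt.
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    auto builder = nvinfer1::createInferBuilder(gLogger);
    auto network = builder->createNetworkV2(0U); // implicit batch, required by the Caffe parser
    auto config  = builder->createBuilderConfig();
    auto parser  = nvcaffeparser1::createCaffeParser();

    // Parse the deploy prototxt and trained weights into the TRT network definition.
    const auto* blobNameToTensor = parser->parse(
        "test.prototxt", "FRCNN.caffemodel", *network, nvinfer1::DataType::kFLOAT);
    if (!blobNameToTensor)
    {
        std::cerr << "Caffe parsing failed (check the log for unsupported layers)" << std::endl;
        return 1;
    }

    // Mark the network outputs -- adjust the blob names to your prototxt.
    network->markOutput(*blobNameToTensor->find("cls_prob"));
    network->markOutput(*blobNameToTensor->find("bbox_pred"));

    builder->setMaxBatchSize(1);
    config->setMaxWorkspaceSize(256 << 20);         // 256 MB workspace
    config->setFlag(nvinfer1::BuilderFlag::kFP16);  // FP16 helps on Jetson Nano

    auto engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine)
    {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }
    // ... serialize the engine to disk, then release the objects.
    return 0;
}
```

If parsing fails, the logger output usually points to the first layer type the parser does not support.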

You can also try the “trtexec” command to test the model.
“trtexec” is useful for benchmarking networks and makes it faster and easier to debug the issue.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
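
For a Caffe model, a typical trtexec invocation looks like the following (again, the --output blob names are placeholders for the output blobs in your prototxt):

```
trtexec --deploy=test.prototxt --model=FRCNN.caffemodel --output=cls_prob --output=bbox_pred --fp16
```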

Thanks

Yes, I tried to convert it using the Caffe parser, but it can’t parse the layer of type “Frcnn proposal”, and some other layer types fail as well.

Hi,

Please refer to the link below for the ops supported by the Caffe parser:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-support-matrix/index.html#supported-ops

You need to create a custom plugin for any layer types that are not currently supported.
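
A common way to hook such a plugin into the Caffe parser is nvcaffeparser1::IPluginFactoryV2. The sketch below only shows the wiring; the layer name “proposal” and MyProposalPlugin are hypothetical placeholders, and the actual proposal computation would live in your own IPluginV2/IPluginV2Ext subclass:

```cpp
// Sketch: let the Caffe parser hand unsupported layers to a custom plugin.
// "proposal" and MyProposalPlugin are hypothetical placeholders.
#include <cstring>
#include "NvInfer.h"
#include "NvCaffeParser.h"

class FrcnnPluginFactory : public nvcaffeparser1::IPluginFactoryV2
{
public:
    // The parser asks this for every layer it cannot handle natively.
    bool isPluginV2(const char* layerName) override
    {
        return std::strcmp(layerName, "proposal") == 0; // match your prototxt layer name
    }

    nvinfer1::IPluginV2* createPlugin(const char* layerName,
                                      const nvinfer1::Weights* weights, int nbWeights,
                                      const char* libNamespace = "") override
    {
        // Return an instance of your own plugin implementing the proposal step, e.g.:
        // return new MyProposalPlugin(/* anchor scales, ratios, NMS params ... */);
        return nullptr; // placeholder
    }
};

// Usage: register the factory before calling parser->parse():
// FrcnnPluginFactory factory;
// parser->setPluginFactoryV2(&factory);
```

You can also look at the sampleFasterRCNN sample shipped with TensorRT, which handles the Faster R-CNN proposal / ROI pooling step with a built-in plugin.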

Thanks

Thanks for your reply!