Does Inference Server support Caffe and Caffe custom layers?

Hi,

I am wondering whether the Inference Server supports Caffe and Caffe custom layers?

Are there any examples?

The inference server does not support Caffe directly. You will need to convert your Caffe model to ONNX or TensorRT, and do the same for your custom layers. See the TensorRT documentation for instructions on how to do this.
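One possible route is TensorRT's own Caffe parser, which is exposed through the `trtexec` tool that ships with TensorRT. A minimal sketch, assuming a standard `deploy.prototxt`/`.caffemodel` pair and an output blob named `prob` (the file names and output blob are placeholders, and exact flag names can vary between TensorRT versions):

```shell
# Build a serialized TensorRT engine from a Caffe model using trtexec.
# --deploy / --model point at the Caffe network definition and weights;
# --output names the output blob(s) of the network.
trtexec --deploy=deploy.prototxt \
        --model=net.caffemodel \
        --output=prob \
        --batch=1 \
        --saveEngine=model.plan
```

The resulting `model.plan` can then be placed in the inference server's model repository as a TensorRT model. Note that the built-in Caffe parser only covers standard layers; custom layers still need a TensorRT plugin implementation.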

The inference server does support Caffe2 so perhaps there is a path from Caffe -> Caffe2?

Hi David,

OK. Thank you for your answer.

Still, does the inference server support Caffe2 custom layers?

I referred to the following URL (https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/custom_operation.html), and it doesn't mention Caffe2 in the custom-operations section.

No, currently the inference server doesn't have any support for Caffe2 custom layers. You may be able to use something like LD_PRELOAD, or build the custom layers into your own version of TRTIS.
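The LD_PRELOAD idea above could look something like the following sketch. The shared-library name and model-store path are hypothetical placeholders; the point is that the dynamic loader injects your custom-operation library into the server process before it loads any models:

```shell
# Force the loader to pull in the custom-op shared library before trtserver
# resolves its own symbols, so models referencing those ops can find them.
LD_PRELOAD=/opt/custom/libmy_custom_ops.so \
    trtserver --model-store=/models
```

Whether the preloaded symbols are actually picked up depends on how the framework backend resolves its operators, so building the library into your own TRTIS image is the more reliable of the two options.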