TensorRT support for Caffe model layers

Hi,

Thanks for your question.

TensorRT supports the following layer types:

Convolution: 2D
Activation: ReLU, tanh and sigmoid
Pooling: max and average
ElementWise: sum, product or max of two tensors
LRN: cross-channel only
Fully-connected: with or without bias
SoftMax: cross-channel only
Deconvolution

Read more at: https://devblogs.nvidia.com/parallelforall/production-deep-learning-nvidia-gpu-inference-engine/
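For reference, here is a minimal sketch of importing a Caffe model that uses these layers. It assumes the NvCaffeParser API as shipped with TensorRT at the time (createCaffeParser, buildCudaEngine); the file names deploy.prototxt / net.caffemodel and the output blob name "prob" are placeholders you would replace with your own.

#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <iostream>

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the builder
class Logger : public ILogger
{
    void log(Severity severity, const char *msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

ICudaEngine *buildEngineFromCaffe()
{
    IBuilder *builder = createInferBuilder(gLogger);
    INetworkDefinition *network = builder->createNetwork();

    // Parse the Caffe deploy file and weights into the TensorRT network definition
    ICaffeParser *parser = createCaffeParser();
    const IBlobNameToTensor *blobNameToTensor =
        parser->parse("deploy.prototxt", "net.caffemodel", *network, DataType::kFLOAT);

    // Mark the blob that should be treated as the network output ("prob" is a placeholder)
    network->markOutput(*blobNameToTensor->find("prob"));

    // Build the optimized inference engine
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 20);
    return builder->buildCudaEngine(*network);
}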

TensorRT does not support custom layers. However, you can insert your own layer into the TensorRT workflow by splitting the network into two engines and running your layer between them. For example:

IExecutionContext *contextA = engineA->createExecutionContext();
IExecutionContext *contextB = engineB->createExecutionContext();
<...>
// Run engine A, apply the custom layer to its output, then feed the result to engine B.
// All three calls are issued on the same CUDA stream, so they execute in order.
contextA->enqueue(batchSize, buffersA, stream, nullptr);
myLayer(outputFromA, inputToB, stream);
contextB->enqueue(batchSize, buffersB, stream, nullptr);
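Here, myLayer is not a TensorRT API; it stands for whatever custom computation you provide between the two engines. A hypothetical sketch, assuming the intermediate tensor is a float buffer on the GPU and that a simple element-wise kernel is enough, could look like this (the kernel body, element count, and pointer types are all illustrative):

// Hypothetical custom layer launched between the two engines.
// Running it on the same stream keeps it ordered after contextA and before contextB.
__global__ void myLayerKernel(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] > 0.f ? in[i] : 0.f;   // placeholder computation (a ReLU-like op)
}

void myLayer(const float *outputFromA, float *inputToB, cudaStream_t stream)
{
    const int n = 1 << 16;                    // placeholder element count of the intermediate tensor
    const int block = 256;
    const int grid = (n + block - 1) / block;
    myLayerKernel<<<grid, block, 0, stream>>>(outputFromA, inputToB, n);
}

Because the custom step is launched on the same stream passed to both enqueue calls, no extra synchronization is needed between the three launches.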