TensorRT / Custom Ops

Hi all,

I need to port a TensorFlow graph from a PC platform to the Tegra SoCs. I'm currently evaluating the Jetson TX1 with the deep learning SDKs (particularly TensorRT 3).

Just a quick question: I have several custom operations embedded in the TensorFlow graph that I would need to port across to the Nvidia SDKs. Does TensorRT directly support custom ops compiled as a library?

Thanks!
Paul

Hi,

Is your custom operation a weighted layer?

There are two steps in your use case: 1) parse the model to UFF format, and 2) run the UFF model with TensorRT.

  1. Parse TF->UFF:
    Please check this document for the supported layers:
    https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/3.0/rc1TensorRT3-Release-Notes-RC-pdf

  2. Run the UFF model with TRT:
    TensorRT can handle custom layers: you can implement the custom operation with the plugin API (see the sketch after this list).
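For reference, here is a minimal sketch against the TensorRT 3-era IPlugin interface. The op is a hypothetical pass-through (identity) layer, so enqueue() is just a device-to-device copy; a real custom op would launch its own CUDA kernel there instead. Class and member names are mine, not from any NVIDIA sample:

```cpp
// Minimal sketch of a TensorRT 3 IPlugin for a hypothetical pass-through
// (identity) custom op. A real plugin would launch its own CUDA kernel
// in enqueue(); the rest is boilerplate the interface requires.
#include <cstring>
#include <cuda_runtime_api.h>
#include <NvInfer.h>

class IdentityPlugin : public nvinfer1::IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs,
                                       int nbInputDims) override
    {
        return inputs[0];  // same shape as the single input
    }

    void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                   const nvinfer1::Dims* outputDims, int nbOutputs,
                   int maxBatchSize) override
    {
        // Cache the per-sample element count for enqueue().
        mCount = 1;
        for (int d = 0; d < inputDims[0].nbDims; ++d)
            mCount *= inputDims[0].d[d];
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Stand-in for the real kernel launch (assumes FP32 tensors).
        cudaMemcpyAsync(outputs[0], inputs[0],
                        batchSize * mCount * sizeof(float),
                        cudaMemcpyDeviceToDevice, stream);
        return 0;
    }

    // Serialization hooks so the engine can be saved and reloaded.
    size_t getSerializationSize() override { return sizeof(mCount); }
    void serialize(void* buffer) override
    {
        std::memcpy(buffer, &mCount, sizeof(mCount));
    }

private:
    size_t mCount{0};
};
```

Note that the plugin object has to stay alive for as long as the builder/engine references it, and when deserializing a saved engine you recreate it through the IPluginFactory interface.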

In summary:

If your custom operation is weight-free, it should be workable:

Remove the custom layer → parse the model → add the custom layer back via the Plugin API (see the build sketch after this summary)

If your custom operation is weighted, please check first whether your model can be parsed into UFF format.
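To make that workflow concrete, here is a build-time sketch under the same TensorRT 3-era C++ API. It assumes the graph was exported to model.uff with the custom op stripped out; "input" and "features" are placeholder tensor names for your own graph, and IdentityPlugin is the plugin class sketched above:

```cpp
// Sketch of the "remove -> parse -> add back" workflow: parse the UFF
// model (custom op already removed), then splice the plugin back in.
#include <cstdio>
#include <NvInfer.h>
#include <NvUffParser.h>

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::printf("[TRT] %s\n", msg);
    }
};

nvinfer1::ICudaEngine* buildEngine()
{
    static Logger logger;
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    // 1) Parse the UFF model; "features" is the tensor that used to
    //    feed the custom op in the original TensorFlow graph.
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();
    parser->registerInput("input", nvinfer1::DimsCHW(3, 224, 224));
    parser->registerOutput("features");
    parser->parse("model.uff", *network, nvinfer1::DataType::kFLOAT);

    // 2) Add the custom layer back via the Plugin API and mark its
    //    output. (For simplicity, "features" also stays marked as a
    //    network output in this sketch.)
    static IdentityPlugin plugin;  // must outlive engine construction
    nvinfer1::ITensor* pluginInput = network->getOutput(0);
    nvinfer1::IPluginLayer* custom = network->addPlugin(&pluginInput, 1, plugin);
    network->markOutput(*custom->getOutput(0));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 20);
    return builder->buildCudaEngine(*network);
}
```

The key call is INetworkDefinition::addPlugin(), which wires the plugin into the parsed network at the point where the custom op used to sit.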

Thanks.

Thanks, AastaLLL! “Plugin API” was the magic phrase I was searching for. I love it when a company does things nicely in their SDKs! The plugin API fits my needs perfectly.

Thanks!