Custom Layer Implementation - TensorRT - Tensorflow model


Consider a model in which all layers are supported by TensorRT. The workflow for deploying this model on the Drive PX2 is:

  1. TensorFlow model -> UFF model on the host
  2. Copy the UFF file to the PX2
  3. UFF model -> TensorRT engine on the PX2
  4. Load the TensorRT engine using the C++ API on the PX2 for inference

This is according to
and the sampleUffMNIST TensorRT sample.

I now want to introduce dropout layers into this model. The dropout layers will remain active during inference, and dropout is currently unsupported by TensorRT.

What would be my workflow for the same?


Dear serinvarghese,
Can you check the steps below?

  • Implement the custom layer plugin in C++ as shown in samplePlugin
  • Use createUffParser to parse the UFF file as shown in sampleUffMNIST
  • Use parser->setPluginFactory() for custom layers as shown in samplePlugin
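A rough sketch of how those pieces fit together is below. This is an assumption-heavy outline, not a working implementation: it depends on the TensorRT 4.x headers (NvInfer.h, NvUffParser.h), will not compile standalone, and the exact method signatures (in particular `createPlugin`, which gains a field-collection parameter in some versions) should be checked against your installed NvUffParser.h. The class names and the "dropout" layer-name match are illustrative:

```cpp
#include <NvInfer.h>
#include <NvUffParser.h>
#include <cstring>

// Skeleton of a dropout plugin; the real work would be a CUDA kernel
// in enqueue() that masks and rescales activations.
class DropoutPlugin : public nvinfer1::IPlugin {
public:
    int getNbOutputs() const override { return 1; }
    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs,
                                       int nbInputDims) override {
        return inputs[0];  // dropout preserves the input shape
    }
    void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                   const nvinfer1::Dims* outputDims, int nbOutputs,
                   int maxBatchSize) override {}
    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }
    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override {
        // Launch the mask-and-scale kernel here.
        return 0;
    }
    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}
};

// Factory handed to the UFF parser via parser->setPluginFactory(&factory).
class PluginFactory : public nvuffparser::IPluginFactory {
public:
    bool isPlugin(const char* layerName) override {
        // Assumed node naming; match however your graph names dropout ops.
        return std::strstr(layerName, "dropout") != nullptr;
    }
    nvinfer1::IPlugin* createPlugin(const char* layerName,
                                    const nvinfer1::Weights* weights,
                                    int nbWeights) override {
        return new DropoutPlugin();  // dropout carries no weights
    }
};
```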

There isn't a function to use plugins with the C++ API in TRT 3.0.2 (on the PX2). Has there been an update since then that allows this?

Dear Dhingratul,
I did not get your question. If you are asking about using a plugin layer in TensorRT, can you please check the samplePlugin sample?

I am looking for a “samplePlugin” equivalent for UFF-based models. In one of the posts, it was mentioned that there would be an SSD sample with UFF in TRT 4 GA. I am asking if that is available yet?

Yes. The SSD sample with UFF is available in TRT 4 GA. Please check

Yes, I did, and I have a few bugs with it and TRT 4 GA that are still unresolved; maybe you can help expedite the resolution of those.

Hi, Dhingratul

Okay. Let’s track that in the original topic.

@AastaLLL, any progress on either of the bugs?

Please check the original topic: