For an unsupported layer in TensorRT, what should I do?

If TensorRT 3 does not support some layers, for example the dropout layer, what should I do?

Hi 742824147, have you come across the TensorRT plugin API? It is for implementing custom layers (e.g. in a CUDA kernel).

See AastaNV’s TensorRT plugin sample here:

@dusty_nv thanks for your reply. My network is implemented with TensorFlow; can the TensorRT plugin API still work?


Yes. You can implement a plugin layer for a TensorFlow model.
The flow is to convert the model to UFF format and implement the unsupported layer via the plugin API.

But generally, the dropout layer is turned off during inference, so you should be able to simply remove it.
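For background, standard "inverted" dropout is a no-op at inference time, which is why removing the layer is safe. A minimal pure-Python sketch (a hypothetical toy implementation, not TensorFlow's):

```python
import random

def dropout(x, rate, training):
    """Inverted dropout: zero elements with probability `rate` during
    training and rescale the survivors; at inference it is the identity,
    which is why the layer can simply be removed from an inference graph."""
    if not training:
        return list(x)  # no-op at inference
    keep = 1.0 - rate
    return [xi / keep if random.random() < keep else 0.0 for xi in x]

x = [1.0, 2.0, 3.0]
assert dropout(x, rate=0.75, training=False) == x  # identity at inference
```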


Thanks, but how exactly do I remove the dropout layer for inference?


Just delete it. For example, change this:


fc1 = tf.layers.dense(fc1, 1024)
fc1 = tf.layers.dropout(fc1, rate=0.75)
out = tf.layers.dense(fc1, n_classes)


to this:


fc1 = tf.layers.dense(fc1, 1024)
out = tf.layers.dense(fc1, n_classes)



What exactly is the workflow for this one? Could you please describe it?
I am really new in this area; I just want to try it out with a really basic CNN network with dropout.

I don’t understand how I can train my net with dropout, then delete the dropout layers (which I can’t do without rebuilding the graph structure), and then freeze the changed graph with the trained weights, etc.

So how can I train with dropout, delete the layers, and freeze my graph?

Thanks for your help!
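One way to picture "deleting" a dropout node from an already-trained graph: rewire every consumer of the dropout node to read the dropout's input instead, so the trained weights are untouched. TensorRT's graphsurgeon performs the equivalent surgery on a real TensorFlow GraphDef; the sketch below only illustrates the idea on a hypothetical dict-based toy graph, and all names in it are made up:

```python
# Toy illustration of bypassing a dropout node in a frozen graph:
# every consumer of "dropout" is rewired to read its input directly.
# (graphsurgeon does this on a real TF GraphDef; names are hypothetical.)

def remove_node(graph, node):
    """graph maps node name -> list of input node names."""
    inputs = graph.pop(node)
    assert len(inputs) == 1, "only single-input nodes can be bypassed"
    replacement = inputs[0]
    for name in graph:
        graph[name] = [replacement if i == node else i for i in graph[name]]
    return graph

# fc1 -> dropout -> out  becomes  fc1 -> out
g = {"fc1": ["input"], "dropout": ["fc1"], "out": ["dropout"]}
remove_node(g, "dropout")
print(g)  # {'fc1': ['input'], 'out': ['fc1']}
```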


I have a question about the workflow for adding an unsupported operation to TensorRT.

  1. Convert the TensorFlow model to a UFF file, but a message shows: No conversion function registered for layer: abc yet. Converting abc as custom op: abc
    (this means there is already an abc node in the UFF file, but I have to implement a TensorRT plugin named “abc”, right?)

  2. Implement an “abc” plugin for TensorRT in a C++ program and use it to build a network from the UFF file.

Is anything wrong above?

Thanks for any help



The warning appears because the UFF parser cannot know whether a plugin exists for that op.

AastaLLL, thanks for your answer.

I am still not so clear about the whole process of creating and importing a custom plugin. I have read the “uff_custom_plugin” TensorRT sample; the flow of creating/importing a custom plugin into a UFF file when converting a graph from TensorFlow to TensorRT seems to be as below (please correct me if I am wrong):

  1. Write a C++ program for the custom layer (or operation?) and compile it to a shared library.

    Question: must the arguments of the custom layer plugin’s constructor be the same as those passed to graphsurgeon.create_plugin_node?

  2. This must be prepared when we convert the TensorFlow .pb file to a TensorRT .uff file; after that we can use this .uff file only in a C/C++ program for inference.

    Can we just convert the .pb to .uff without importing what is not supported by TensorRT, and then use this .uff in a C/C++ program that implements the custom layer plugin?



The whole pipeline should look like this: .pb -> .uff -> TensorRT engine
We also have a sample in Python to demonstrate this:

This is a two-step process:

1. Convert the .pb file into UFF
The plugin should be specified like this:

concat_box_conf = gs.create_plugin_node(
    "concat_box_conf",
    op="FlattenConcat_TRT")
namespace_plugin_map = {
    "concat_1": concat_box_conf
}

concat_1: TensorFlow operation name
concat_box_conf: name used for the mapping
FlattenConcat_TRT: plugin layer name
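In other words, the map replaces the TensorFlow node concat_1 with a placeholder node named concat_box_conf whose op is FlattenConcat_TRT; the UFF parser later matches that op name to the registered plugin. A toy sketch of the substitution (hypothetical dict structures, not real graphsurgeon internals):

```python
# Toy illustration of what namespace_plugin_map does during conversion:
# the unsupported TF node is swapped for a placeholder whose op name
# is later resolved to the plugin when the engine is built.
# (graphsurgeon does this on the real GraphDef; these dicts are toys.)

nodes = {"concat_1": {"op": "ConcatV2"}, "conv_1": {"op": "Conv2D"}}
plugin_nodes = {"concat_1": {"name": "concat_box_conf", "op": "FlattenConcat_TRT"}}

for tf_name, plugin in plugin_nodes.items():
    nodes.pop(tf_name)                            # remove the unsupported op
    nodes[plugin["name"]] = {"op": plugin["op"]}  # insert the plugin placeholder

# "concat_1" is gone; "concat_box_conf" now carries op "FlattenConcat_TRT"
print(nodes)
```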

2. The plugin library is only used when creating the TensorRT engine.
The UFF file can be generated without the .so beforehand.