A simple example of the custom layer API (TensorRT 2.1)?

Hi,

Therefore, if my net contains a custom layer, will TensorRT enqueue all my layers in FP32 automatically even though I set FP16 or INT8?

Hi,

Yes, a conversion is automatically inserted before and after the plugin layer (fp16 -> fp32 going into the plugin, and fp32 -> fp16 coming out), since plugin layers always run in FP32.
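For example, building with FP16 enabled looks roughly like this (a minimal sketch using the TensorRT 2.x-era builder API; the logger is trimmed down and the parsing step is omitted):

#include "NvInfer.h"
#include <iostream>

// Minimal logger required by the builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

void buildFp16Engine()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    // ... parse the caffemodel/prototxt here, registering custom layers
    //     through a PluginFactory ...

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    builder->setHalf2Mode(true);    // ask for FP16 kernels where supported

    // Plugin layers still run in FP32; the fp16 <-> fp32 reformatting
    // around them is inserted by TensorRT automatically.
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

    // ... serialize or run inference, then destroy engine, network, builder ...
}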

By the way, INT8 is not available on TX2.
Thanks.

I’m implementing a pooling layer that also outputs the pooling mask using IPlugin. I understand that custom layers cannot be converted to INT8, but since max pooling is a very simple operation that works in INT8 without calibration, is there a way to use this custom layer with INT8 precision?
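For reference, the plugin is shaped roughly like this (a simplified sketch of the FP32 version; the class name is mine, and the CUDA kernel, serialization and error handling are left out):

#include "NvInfer.h"
#include <cuda_runtime.h>

class MaxPoolWithMaskPlugin : public nvinfer1::IPlugin
{
public:
    // Two outputs: the pooled values and the argmax mask.
    int getNbOutputs() const override { return 2; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs,
                                       int nbInputDims) override
    {
        // Assumes CHW input and 2x2 pooling with stride 2.
        nvinfer1::Dims out = inputs[0];
        out.d[1] /= 2;
        out.d[2] /= 2;
        return out;
    }

    void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                   const nvinfer1::Dims* outputDims, int nbOutputs,
                   int maxBatchSize) override
    {
        mInputDims = inputDims[0];   // remember the input shape for enqueue()
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Buffers arrive in FP32 because plugin layers run in FP32.
        const float* bottom = static_cast<const float*>(inputs[0]);
        float* top  = static_cast<float*>(outputs[0]);
        float* mask = static_cast<float*>(outputs[1]);
        // launchMaxPoolWithMask(bottom, top, mask, batchSize, mInputDims, stream);
        (void)bottom; (void)top; (void)mask; (void)workspace;
        return 0;
    }

    // Serialization is omitted in this sketch.
    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}

private:
    nvinfer1::Dims mInputDims;
};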

Hi, nvts

TX2 doesn’t support the INT8 feature, and the plugin API does not support INT8 either.
Thanks.

Hi AastaLLL! I’m not using a TX2, I’m working with a Drive PX 2. By “using a custom layer with INT8 precision” I mean using the plugin after the calibration of the standard layers is done. The idea is to implement the custom layers with float32 precision and then, after calibrating the entire network, substitute the float32 plugins with int8 plugins, since max pooling is a very simple operation. Is there a way to do this?
Thank you

Hi,

It’s recommended to file a new topic on the appropriate platform board so you can get better support.
Thanks.

Hi,

Following the Face-Recognition sample, I added the layer via the plugin method in my code. But I noticed that some layers of type “IPlugin” have been added to the deploy.prototxt file.
In fact, the TensorRT Developer Guide has no description of this.
How do I add an IPlugin layer to the deploy.prototxt file?

Hi,

It’s recommended to follow the Face-Recognition sample for the plugin layer implementation.

The parser will call the corresponding function in the PluginFactory if a layer’s type is “IPlugin”.
You can find more information in our document:
[url]https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#plugin_sample[/url]
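For illustration, the pattern looks roughly like this (the layer name "my_custom_layer" and the class MyCustomPlugin below are placeholders for your own layer and IPlugin subclass):

// In deploy.prototxt, the custom layer is declared with type "IPlugin", e.g.:
//
//   layer {
//     name: "my_custom_layer"
//     type: "IPlugin"
//     bottom: "conv5"
//     top: "my_custom_layer"
//   }
//
// The Caffe parser then asks the PluginFactory about every such layer:

#include "NvInfer.h"
#include "NvCaffeParser.h"
#include "MyCustomPlugin.h"   // placeholder header for your own IPlugin subclass
#include <cassert>
#include <cstring>

class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    // Tell the parser which layer names are handled by plugins.
    bool isPlugin(const char* layerName) override
    {
        return std::strcmp(layerName, "my_custom_layer") == 0;
    }

    // Called by the parser for every layer where isPlugin() returned true.
    nvinfer1::IPlugin* createPlugin(const char* layerName,
                                    const nvinfer1::Weights* weights,
                                    int nbWeights) override
    {
        assert(isPlugin(layerName));
        mPlugin = new MyCustomPlugin();   // your IPlugin subclass (placeholder name)
        return mPlugin;
    }

private:
    nvinfer1::IPlugin* mPlugin{nullptr};   // remember to destroy it when done
};

// Register the factory with the parser before calling parse():
//   parser->setPluginFactory(&pluginFactory);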

Thanks.