About using an INT8 model

I have some custom layers in my network, and TensorRT crashes when I calibrate. So my question is: is it the case that custom layers cannot be calibrated?
minist_pyramid.zip (3.24 MB)
error.zip (61.4 KB)

To help us debug, can you provide a small repro package containing your source and the network with the custom layers that exhibits the crash when you calibrate?

Do you have examples that use custom layers and int8 mode?

The attachment contains our model. Please read readme.txt first, then test it with TensorRT-\TensorRT-\samples\sampleINT8; sampleINT8 will crash.


I’m not seeing any errors when running your repro. What type of GPU are you running? And when you say “sampleINT8 will crash”, can you describe the error, seg fault, or traceback you are seeing?

NVIDIA Enterprise Support


My environment:
Windows version: Win10
GPU type: 1080
NVIDIA driver version: 411.31
CUDA version: 10.0
cuDNN version:
TensorRT version:

When I use “mnist_pyramid.prototxt” and “mnist_pyramid.prototxt”, sampleINT8 crashes. The error.jpg and crash.jpg in the new attachment are screenshots taken when sampleINT8 crashed.

Another question: can you provide an example that uses custom layers in INT8 mode?



Can you give me an answer to my question?


We are reviewing this issue and will keep you updated.


Regarding custom layers: we currently do not support custom layers in INT8. It is still possible to run a network containing custom layers in INT8 mode; however, the custom layers themselves will not run in INT8.
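To illustrate what "the network runs in INT8 but the custom layer does not" means, here is a conceptual NumPy sketch (not TensorRT's actual implementation): supported layers operate on symmetrically quantized int8 data, while a hypothetical plugin layer receives and produces FP32 tensors, with dequantize/quantize conversions happening around it.

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric per-tensor quantization to int8 in [-127, 127]."""
    q = np.clip(np.round(x / scale), -127, 127)
    return q.astype(np.int8)

def dequantize(q, scale):
    """Map int8 values back to approximate float values."""
    return q.astype(np.float32) * scale

def int8_linear(x, w, x_scale, w_scale):
    """A supported layer: int8 inputs, int32 accumulation, float rescale."""
    qx = quantize_int8(x, x_scale)
    qw = quantize_int8(w, w_scale)
    acc = qx.astype(np.int32) @ qw.astype(np.int32)
    return acc.astype(np.float32) * (x_scale * w_scale)

def custom_layer_fp32(x):
    """A stand-in for a plugin layer with no INT8 kernel: it simply
    runs in FP32, so the engine must dequantize before it and
    re-quantize after it."""
    return np.tanh(x)
```

The key design point is that the custom layer only ever sees FP32 data; the precision fallback is local to that layer, and the rest of the engine can still benefit from INT8.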


Regarding the error you are seeing: per engineering, if the tensor values are all zero, calibration can fail. This is something we hope to address in the future.

We would not expect the tensor distribution to be all zeros unless an activation in the network forces the output tensor to zero. For example, a ReLU whose input tensor is entirely negative will produce an all-zero output tensor.
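As a rough illustration of why an all-zero distribution is a problem, here is a minimal NumPy sketch (not TensorRT's actual calibrator) that derives a symmetric INT8 scale from a tensor's dynamic range; with an all-zero tensor there is no usable range, so no valid scale can be computed.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def int8_scale(activations):
    """Toy symmetric INT8 scale: map the dynamic range onto [-127, 127]."""
    dynamic_range = float(np.abs(activations).max())
    if dynamic_range == 0.0:
        # An all-zero distribution has no dynamic range, so there is
        # no meaningful scale -- loosely analogous to why calibration
        # can fail on all-zero tensors.
        raise ValueError("cannot calibrate an all-zero tensor")
    return dynamic_range / 127.0

# ReLU over an all-negative input yields an all-zero output tensor,
# which then trips the check above.
out = relu(-np.abs(np.random.randn(4, 8)) - 0.1)
assert not out.any()
```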


Thanks for your reply!