Cannot parse custom layer with onnx-tensorrt


I cannot parse a custom layer with onnx-tensorrt.


TensorRT Version: 7.1.3
Nvidia Driver Version: 440.100
CUDA Version: 10.2
CUDNN Version: 8.0.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Steps To Reproduce

  • add “REGISTER_TENSORRT_PLUGIN(MishPluginCreator);” in the repository above.

  • build this package and create the plugin library

  • Create an ONNX model like below
    import onnx_graphsurgeon as gs
    import onnx
    import numpy as np
    shape = (1, 3, 224, 224)
    x0 = gs.Variable(name="x0", dtype=np.float32, shape=shape)
    x1 = gs.Variable(name="x1", dtype=np.float32, shape=shape)
    attrs = {}
    attrs["plugin_namespace"] = ""
    attrs["plugin_version"] = "1"
    nodes = [gs.Node(op="Mish_TRT", inputs=[x0], outputs=[x1], attrs=attrs)]
    graph = gs.Graph(nodes=nodes, inputs=[x0], outputs=[x1])
    onnx.save(gs.export_onnx(graph), "model.onnx")

  • ./trtexec --onnx=model.onnx

The output was “No importer registered for op: Mish_TRT”.

Hi @daisuke.nishimatsu,
The issue might be with the registration of the plugin.
However, could you please share your ONNX model so that we can help you better?

Hi @AakankshaS,
Here is my ONNX model. Please check.

Thanks in advance.

I’m able to implement “Mish” in TensorRT with “Softplus” + “Tanh” + “Mul” ops. You could check out my blog post and source code if interested.
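For illustration, here is a minimal NumPy sketch (my own, not from the thread) of the Softplus → Tanh → Mul decomposition described above:

```python
import numpy as np

def softplus(x):
    # Numerically stable Softplus: log(1 + e^x)
    return np.logaddexp(0.0, x)

def mish(x):
    # Mish decomposed into the three standard ONNX ops:
    # Softplus -> Tanh -> Mul
    return x * np.tanh(softplus(x))

x = np.linspace(-5.0, 5.0, 11)
y = mish(x)
```

The same three ops can be expressed as standard ONNX nodes, which is why no custom plugin is needed with this approach.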


@jkjung13 Thanks for your advice. But I think the original implementation of “Mish” is not completely equivalent to “Softplus” + “Tanh” + “Mul”.

@AakankshaS Any updates on this?
Do I still have to modify onnx-tensorrt?

@daisuke.nishimatsu This is good information. Thanks for pointing it out.

@jkjung13 Thanks for your advice. But I think the original implementation of “Mish” is not completely equivalent to “Softplus” + “Tanh” + “Mul”.

I have verified the mean average precision (mAP) of my TensorRT YOLOv4 implementations with “Softplus” + “Tanh” + “Mul”. The mAP numbers looked good.

Besides, the wang-xinyu/tensorrtx repository also implements “Mish” as “x * tanh_activate_kernel(softplus_kernel(x))”.

Anyway, I will look into it.

@daisuke.nishimatsu I’ve verified that the “mish_yashas2()” function is exactly the same as the original Mish function. See below.

float e = __expf(x);
float n = e * e + 2 * e;

That is, with e = exp(x):

n / (n + 2) = (e² + 2e) / (e² + 2e + 2)
            = ((1 + e)² − 1) / ((1 + e)² + 1)
            = tanh(ln(1 + e))
            = tanh(softplus(x))

so x · n / (n + 2) = x · tanh(softplus(x)) = mish(x).
So the “mish_yashas2()” function is calculating the exact same value. It has the benefit of using fewer exp() and log() computations, which I guess might also help numerical stability.
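As a quick numerical sanity check (my own Python sketch, based on the derivation above), the closed form x · n / (n + 2) can be compared against the textbook x · tanh(softplus(x)):

```python
import math

def mish_reference(x):
    # Textbook Mish: x * tanh(softplus(x))
    return x * math.tanh(math.log1p(math.exp(x)))

def mish_closed_form(x):
    # Closed form from the derivation above: x * n / (n + 2),
    # where e = exp(x) and n = e^2 + 2e
    e = math.exp(x)
    n = e * e + 2 * e
    return x * n / (n + 2)

# The two agree to floating-point precision over a range of inputs
for v in [-5.0, -1.0, -0.5, 0.0, 0.5, 1.0, 5.0]:
    assert math.isclose(mish_reference(v), mish_closed_form(v),
                        rel_tol=1e-9, abs_tol=1e-12)
```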