Cannot parse custom layer with ONNX-TensorRT

Description

I cannot parse a custom layer with the ONNX-TensorRT parser.

Environment

TensorRT Version: 7.1.3
GPU Type: TITAN RTX
Nvidia Driver Version: 440.100
CUDA Version: 10.2
CUDNN Version: 8.0.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Steps To Reproduce

  • Add "REGISTER_TENSORRT_PLUGIN(MishPluginCreator);" to mish.cu in the repository above.

  • Build the package and create libmyplugins.so.

  • Create an ONNX model like below:

    import onnx_graphsurgeon as gs
    import onnx
    import numpy as np

    # Single-node graph whose only op is the custom "Mish_TRT" plugin layer
    shape = (1, 3, 224, 224)
    x0 = gs.Variable(name="x0", dtype=np.float32, shape=shape)
    x1 = gs.Variable(name="x1", dtype=np.float32, shape=shape)
    attrs = {}
    attrs["plugin_namespace"] = ""
    attrs["plugin_version"] = "1"
    nodes = [gs.Node(op="Mish_TRT", inputs=[x0], outputs=[x1], attrs=attrs)]
    graph = gs.Graph(nodes=nodes, inputs=[x0], outputs=[x1])
    onnx.save(gs.export_onnx(graph), "model.onnx")

  • ./trtexec --onnx=model.onnx --plugins=libmyplugins.so

The output was “No importer registered for op: Mish_TRT”.
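
A small Python sketch (assuming the tensorrt Python bindings are installed; the .so path is just an example) to check whether the "Mish_TRT" creator is actually visible in TensorRT's plugin registry once libmyplugins.so is loaded:

    import ctypes
    import tensorrt as trt

    # Loading the library runs the static REGISTER_TENSORRT_PLUGIN registration.
    ctypes.CDLL("./libmyplugins.so")

    logger = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(logger, "")

    # List every creator currently known to the plugin registry.
    for creator in trt.get_plugin_registry().plugin_creator_list:
        print(creator.name, creator.plugin_version)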

Hi @daisuke.nishimatsu,
The issue might be with the registration of the plugin.
However, could you please share your ONNX model so that we can help you better?
Thanks!

Hi @AakankshaS,
Here is my ONNX model. Please check it.

Thanks in advance.

I'm able to implement "Mish" in TensorRT with "Softplus" + "Tanh" + "Mul" ops. You could check out my blog post and source code if interested.
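
Roughly, the graph looks like this (a sketch using the same onnx-graphsurgeon API as the script above, not the exact code from my repo; the variable names and output file are placeholders):

    import onnx_graphsurgeon as gs
    import onnx
    import numpy as np

    shape = (1, 3, 224, 224)
    x = gs.Variable(name="x", dtype=np.float32, shape=shape)
    sp = gs.Variable(name="softplus_out", dtype=np.float32, shape=shape)
    th = gs.Variable(name="tanh_out", dtype=np.float32, shape=shape)
    y = gs.Variable(name="y", dtype=np.float32, shape=shape)

    nodes = [
        gs.Node(op="Softplus", inputs=[x], outputs=[sp]),  # ln(1 + exp(x))
        gs.Node(op="Tanh", inputs=[sp], outputs=[th]),     # tanh(softplus(x))
        gs.Node(op="Mul", inputs=[x, th], outputs=[y]),    # x * tanh(softplus(x))
    ]
    graph = gs.Graph(nodes=nodes, inputs=[x], outputs=[y])
    onnx.save(gs.export_onnx(graph), "mish_standard_ops.onnx")

All three are standard ONNX ops, so the TensorRT ONNX parser should import them without any plugin.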

@jkjung13 Thanks for your advice. But I think the original implementation of "Mish" is not completely equivalent to "Softplus" + "Tanh" + "Mul".

@AakankshaS Any updates on this?
Do I still have to modify onnx-tensorrt?
https://github.com/NVIDIA/TensorRT/issues/6#issuecomment-650687459

@daisuke.nishimatsu This is good information. Thanks for pointing it out.

@jkjung13 Thanks for your advice. But I think the original implementation of "Mish" is not completely equivalent to "Softplus" + "Tanh" + "Mul".

I have verified the mean average precision (mAP) of my TensorRT YOLOv4 implementations with "Softplus" + "Tanh" + "Mul". The mAP numbers looked good, though.

Besides, the wang-xinyu/tensorrtx repository also implemented "Mish" as "x * tanh_activate_kernel(softplus_kernel(x))": https://github.com/wang-xinyu/tensorrtx/blob/master/yolov4/mish.cu#L135

Anyway, I will look into it.

@daisuke.nishimatsu I’ve verified that the “mish_yashas2()” function is exactly the same as the original Mish function. See below.

float e = __expf(x);      // e = exp(x)
float n = e * e + 2 * e;  // n = exp(2x) + 2*exp(x) = (1 + exp(x))^2 - 1

That is, with e = exp(x), the code's n satisfies

    n = e * e + 2 * e = (1 + exp(x))^2 - 1,   so   n + 2 = (1 + exp(x))^2 + 1

And,

    tanh(softplus(x)) = tanh(ln(1 + exp(x)))
                      = ((1 + exp(x))^2 - 1) / ((1 + exp(x))^2 + 1)
                      = n / (n + 2)

which gives

    mish(x) = x * tanh(softplus(x)) = x * n / (n + 2)

So the "mish_yashas2()" function is calculating the exact same value. The "mish_yashas2()" function has the benefit of using fewer exp() and log() computations, which might also help numerical stability, I guess.
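
As a quick numerical sanity check (a standalone NumPy sketch, separate from the CUDA code), the rewritten form can be compared against the original definition:

    import numpy as np

    def mish_reference(x):
        # Original definition: mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
        return x * np.tanh(np.log1p(np.exp(x)))

    def mish_rewritten(x):
        # Algebra used by mish_yashas2(): n = exp(2x) + 2*exp(x), mish(x) = x * n / (n + 2)
        e = np.exp(x)
        n = e * e + 2 * e
        return x * n / (n + 2)

    x = np.linspace(-30.0, 30.0, 100001)  # keep exp(x) finite in float64
    # Expect only rounding-level differences between the two forms
    print(np.max(np.abs(mish_reference(x) - mish_rewritten(x))))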

I have updated my TensorRT YOLOv4 implementation with a "yolo_layer" plugin. As a result, the FPS numbers improved quite a bit. Refer to my jkjung-avt/tensorrt_demos repository for details.