I am hitting the same error. In my project, if I use the plugin for the deformable convolution layer on its own, I get the correct result, but when I run CenterNet I get the same error.
I can share my ONNX file and plugin code with you.
There is a Zhihu article, "TensorRT部署深度学习模型" ("Deploying deep learning models with TensorRT"), which, if you read Chinese, says the problem is caused by the Slice layer in TensorRT. I have not confirmed this yet.
It is very strange: when I build a dummy model containing only a few conv layers with a modulated deform conv inserted in the middle, I can build the engine and run inference. But when I follow the same procedure with CenterNet, it fails with the same assertion mentioned above.
BTW, have you tried a simple model with a modulated deform conv before?
Currently, because of some strange TensorRT behavior with CenterNet, I split the model into three engines to speed up inference and got 30 fps. But I would prefer a single engine.
Hi,
When I use a small model to debug my modulated deform conv plugin, it works fine. But when I apply the layer to CenterNet, it shows the same error message as stated above. How can I send you my code and the CenterNet ONNX model so that you can reproduce the error?
Thanks.
P.S. @cheivan I would like to share it with you as well for discussion.
@1051323399 Could you please share your changes to the CMakeLists.txt files as a reference, so that we can compile and run your plugin with its PyTorch dependencies?
@xmpeng A macro named CHECK_PLUGIN_STATUS is referenced multiple times, but its definition is missing from the plugin source code.
Please provide its definition so that we can continue.
Thank you.
Sorry for the late response. We have verified your plugin against both TensorRT 6.0 and TensorRT 7.0, and our conclusions are:
1. This problem does not seem to be related to your plugin (ModulatedDeformConv).
2. It is a graph-building error: TensorRT 6.0 cannot properly handle structures such as parallel conv layers followed by split operations.
3. TensorRT 7.0 handles this kind of structure without errors.
So we recommend upgrading to TensorRT 7.0 if possible.
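To make the failing pattern concrete, here is a hedged sketch of "parallel convs followed by a split". This is my own reconstruction for discussion, not an excerpt from the CenterNet graph, and the channel counts are arbitrary; `torch.split` is typically exported to ONNX as a Split/Slice node, which is the kind of node reported above to trip up TensorRT 6.0's graph builder:

```python
import torch
import torch.nn as nn

class ParallelConvSplit(nn.Module):
    """Two parallel conv branches concatenated, then split again."""
    def __init__(self, ch=8):
        super().__init__()
        self.branch_a = nn.Conv2d(3, ch, 3, padding=1)
        self.branch_b = nn.Conv2d(3, ch, 3, padding=1)

    def forward(self, x):
        # parallel convolutions on the same input
        y = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        # split back into halves -> exported as an ONNX Split/Slice op
        h1, h2 = torch.split(y, y.shape[1] // 2, dim=1)
        return h1 + h2

m = ParallelConvSplit().eval()
with torch.no_grad():
    out = m(torch.randn(1, 3, 32, 32))
print(out.shape)
# Exporting with torch.onnx.export(m, ...) would give a graph
# containing the parallel-conv + Split structure described above.
```

If upgrading is not an option, isolating or rewriting this part of the graph (e.g. replacing the split with separate branch outputs) may be worth trying, though we have only verified the TensorRT 7.0 route.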
Hi @ersheng, I am running into the same problem. Could you please elaborate on why the current plugin does not work with TRT 6? Is it due to the Split layer itself, or due to the parallel convolutions running before it? Is there any other workaround (for example, writing a custom Split plugin)?
Due to our deployment environment, only TRT <= 6 can be used. Your help is much appreciated.