Jetpack 4.4 / TensorRT 7.1 error: ‘createConcatPlugin’ is not a member of ‘nvinfer1::plugin’

Hi, in a previous project we used a TX2 (Jetpack 4.2.3, TensorRT 5.1). Because the network model has a custom layer, we used the TensorRT API ‘createConcatPlugin’ for conversion, which achieved model acceleration and inference normally.

Now, testing on an AGX (Jetpack 4.4, TensorRT 7.1), an error appears when compiling the project: ‘createConcatPlugin’ is not a member of ‘nvinfer1::plugin’. Comparing the API interfaces of TensorRT 5.1 and TensorRT 7.1, I find that the TensorRT 7.1 interface has changed a lot.

Is there another TensorRT 7.1 interface that can replace ‘createConcatPlugin’?
Or is there some other way to solve this? Thank you very much.

Hi,

Based on the comment in NvInferPlugin.h:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-515/tensorrt-api/c_api/_nv_infer_plugin_8h.html

The Concat plugin layer basically performs concatenation for 4D tensors. Unlike the Concatenation layer in earlier versions of TensorRT, it allows the user to specify the axis along which to concatenate. The axis can be 1 (across channel), 2 (across H), or 3 (across W). In particular, this Concat plugin layer also implements an “ignore the batch dimension” switch: if turned on, all the input tensors are treated as if their batch sizes were 1.

You can use the standard IConcatenationLayer to replace this operation directly.
The desired concat axis can be assigned via setAxis():
https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_concatenation_layer.html
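A minimal sketch of the replacement, assuming you are building the network with the C++ API; `network`, `inputA`, and `inputB` are placeholders for your own builder objects and input tensors:

```cpp
#include "NvInfer.h"

// Collect the tensors previously passed to createConcatPlugin.
nvinfer1::ITensor* inputs[] = {inputA, inputB};

// addConcatenation concatenates along the channel axis by default.
nvinfer1::IConcatenationLayer* concat =
    network->addConcatenation(inputs, 2);

// Choose the concat axis explicitly. Note: with an implicit-batch
// network the axis is counted without the batch dimension
// (0 = C, 1 = H, 2 = W), so the plugin's 1/2/3 values shift down by one.
concat->setAxis(0);
```

Note that the builtin layer has no “ignore the batch dimension” switch; if your model relied on that behavior, you would need to reshape the inputs (e.g. with IShuffleLayer) before concatenating.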

Thanks.

/home/zhangzhiyong/TensorRT-7.1.3.4/samples/python/yolov3_onnx/TensorRT-YOLOv4-master/onnx-tensorrt/builtin_plugins.cpp:70:57: error: cannot allocate an object of abstract type ‘nvinfer1::IConcatenationLayer’

How can I solve this?

Hi 649431508,

Please open a new topic for your issue. Thanks.