TensorRT 3.0 Python API addConcatenation function needs an ITensor *const * parameter which cannot be ...

Hi,

Please look into the Python API function in class tensorrt.infer.NetworkDefinition:

addConcatenation(ITensor *const *inputs, int nbInputs)=0 -> IConcatenationLayer *

The ITensor *const *inputs argument is a C pointer-to-pointer, but in Python I cannot construct anything equivalent using either a tuple or a list. Was this left unwrapped in SWIG by mistake?

Thank you!

Hi,

You have created a similar topic in DeepStream for Tesla:
https://devtalk.nvidia.com/default/topic/1028648/deepstream-for-tesla/bug-tensorrt-3-0-python-api-did-not-change-variable-ownership-to-c-in-add_convolution-add_scale-etc-/

Is this still an issue for you?

Thanks.

Hi,

Thank you for your reply. These are two different issues. Here, I am wondering how to pass the first argument to addConcatenation in Python, which has the type ITensor *const *inputs.

For example, I want to concatenate the outputs of Layer_1 and Layer_2, but Layer_1.get_output(0) and Layer_2.get_output(0) each return only a single ITensor *.

Thanks.
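For background on why a plain Python list cannot be handed to a raw `ITensor *const *` parameter: the binding layer has to pack the individual pointers into a contiguous C array first. The ctypes sketch below illustrates that marshalling in isolation; it is purely an illustration (using `c_int` objects as stand-ins for ITensor instances) and is not part of the TensorRT API.

```python
import ctypes

# Stand-ins for ITensor objects; any ctypes instance has a stable address.
tensors = [ctypes.c_int(7), ctypes.c_int(9)]

# Build the equivalent of `ITensor *const *inputs` plus `int nbInputs`:
# a contiguous array holding one pointer per element of the Python list.
PtrArray = ctypes.c_void_p * len(tensors)
inputs = PtrArray(*(ctypes.cast(ctypes.pointer(t), ctypes.c_void_p)
                    for t in tensors))
nb_inputs = len(tensors)

# Dereferencing the packed pointers recovers the original objects.
first = ctypes.cast(inputs[0], ctypes.POINTER(ctypes.c_int)).contents
print(first.value)  # 7
```

A SWIG typemap can generate this conversion automatically, which is why a fixed wrapper can accept a Python list directly.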

Hi,

Please try this:

concat_layer = network.add_concatenation([conv0.get_output(0), conv1.get_output(0)])

Thanks.

Hi,

It did not work, and I found the same issue reported here, where it has been confirmed as a bug:
https://devtalk.nvidia.com/default/topic/1025574/gpu-accelerated-libraries/using-concatenation-layer-in-tensorrt-3-rc-python-api-/

Thank you!

Hi,

Sorry for missing this.

This issue has been fixed, and the update will be available in a future release.
Please wait for our announcement.

Thanks.

Any progress on this issue?