TensorRT-4.0.1.6: How to add a Constant Layer

I am working with Python 2.7 and TensorRT 4.0.1.6, trying to implement the following operation:

x^T w

where x is a concatenation of several input vectors and w is a weight vector that I am loading from a saved TensorFlow .pb file. I am trying to load the weight vector into a constant layer and then use the add_matrix_multiply method (with the transpose flag set on x) to compute the product:

numeric_data = network.add_input("deep_numeric_input", trt.infer.DataType.FLOAT, (100, 1, 1))
embed_data = network.add_input("embed", trt.infer.DataType.FLOAT, (100, 1, 1))
full_concat_emb = network.add_concatenation([numeric_data, embed_data])

w = tensor_util.MakeNdarray(graph_name2node["weights/Variable"].attr['value'].tensor)
weights_constant = network.add_constant([1] * 200, w)
weighted_embed = network.add_matrix_multiply(full_concat_emb.get_output(0), True, weights_constant.get_output(0), False)

I am getting this error: TypeError: in method 'NetworkDefinition_add_constant', argument 2 of type 'nvinfer1::Dims'.

It seems I am not using add_constant correctly, but I am not sure how to construct an 'nvinfer1::Dims' to specify the dimensions of the constant layer. How should I do that?

Hi,

It seems the type of the Dims argument passed to network.add_constant() is incorrect.
Please refer to the sample below:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#ewlsetup
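As a rough illustration only, here is a minimal sketch of the constant layer using the legacy TRT 4 Python API from your snippet. The key point is that nvinfer1::Dims holds at most 8 dimensions, so a 200-element list like [1] * 200 cannot be converted; describe the shape with a small tuple instead, the same way your add_input calls already do. The (200, 1, 1) shape and passing the NumPy array directly as the weights are assumptions on my side, not verified against your model:

# Sketch only -- legacy TensorRT 4 Python API, assumptions noted in comments.
import numpy as np

# Flat float32 weight vector read from the .pb file (200 values assumed).
w = w.astype(np.float32).ravel()

weights_constant = network.add_constant(
    (200, 1, 1),   # assumed shape; mirrors the concatenated (200, 1, 1) input, not [1] * 200
    w)             # assumes the binding accepts a NumPy array for the nvinfer1::Weights argument

weighted_embed = network.add_matrix_multiply(
    full_concat_emb.get_output(0), True,     # transpose x
    weights_constant.get_output(0), False)   # w as-is

The exact shape you pass should match how you want the matrix multiply to see the two tensors, so adjust it to your model; the point above is simply that the Dims argument must be a short tuple of dimension sizes rather than a long per-element list.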

Also, TRT 4 has been deprecated for some time now. It is recommended to migrate to the latest TRT version.

Thanks