PyTorch tensor.size() to TensorRT 6.0 failure

I am using the symbolic method to register my op (a subclass of torch.autograd.Function), following the torch.onnx — PyTorch 1.12 documentation. It is not an ATen operator. Instead of manipulating the ONNX op or the ONNX graph to bypass the problem, is there anything that can be done with the Gather operation itself?
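For context, PyTorch typically exports `tensor.size(i)` to ONNX as a Shape node followed by a Gather on the resulting shape tensor, which is how Gather ends up with a shape-tensor input in the first place. Below is a minimal NumPy sketch of what those two ONNX ops compute (an illustration of the op semantics only, not the TensorRT or onnx-tensorrt implementation):

```python
import numpy as np

def onnx_shape(x):
    # ONNX Shape: returns the input's shape as a 1-D int64 tensor
    return np.asarray(x.shape, dtype=np.int64)

def onnx_gather(data, indices, axis=0):
    # ONNX Gather: index `data` along `axis` with `indices`
    return np.take(data, indices, axis=axis)

x = np.zeros((2, 3, 5))
shape = onnx_shape(x)         # the shape tensor [2, 3, 5]
dim2 = onnx_gather(shape, 2)  # Gather(axis=0, indices=2) -> x.size(2) == 5
print(shape.tolist(), int(dim2))
```

So the failing Gather is operating on a 1-D int64 shape tensor, which is exactly the case the TRT 7 importer below accepts without any special check.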
In TensorRT 7, onnx-tensorrt defines Gather like this:

DEFINE_BUILTIN_OP_IMPORTER(Gather)
{
    nvinfer1::ITensor& data = convertToTensor(inputs.at(0), ctx);
    // TRT does not support BOOL input types for this node
    ASSERT(data.getType() != nvinfer1::DataType::kBOOL, ErrorCode::kUNSUPPORTED_NODE);
    nvinfer1::ITensor& indices = convertToTensor(inputs.at(1), ctx);
    OnnxAttrs attrs(node, ctx);
    int axis = attrs.get<int>("axis", 0);
    int nbDims = inputs.at(0).shape().nbDims;
    TRT_CHECK(convertAxis(axis, nbDims));
    LOG_VERBOSE("Using Gather axis: " << axis);
    RETURN_FIRST_OUTPUT(ctx->network()->addGather(data, indices, axis));
}

There is no assertion here rejecting a shape-tensor input, and there is no documentation explaining how Gather differs between TensorRT 6.0 and TensorRT 7.0.
Thanks for your attention.