Unsupported operation _Tile

When I use the UFFParser, something goes wrong: ERROR: UFFParser: Validator error: anchors7/Tile: Unsupported operation _Tile

Device: Jetson TX2
Version: TensorRT 4
Deep Learning Framework: Keras SSD7

How can I fix this problem?

Warning: No conversion function registered for layer: Tile yet.
Converting as custom op Tile anchors7/Tile
name: "anchors7/Tile"
op: "Tile"
input: "anchors7/Const"
input: "anchors7/Tile/multiples"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "Tmultiples"
  value {
    type: DT_INT32
  }
}

Warning: No conversion function registered for layer: Elu yet.
Converting as custom op Elu elu7/Elu
name: "elu7/Elu"
op: "Elu"
input: "bn7/FusedBatchNorm_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Elu yet.
Converting as custom op Elu elu6/Elu
name: "elu6/Elu"
op: "Elu"
input: "bn6/FusedBatchNorm_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Elu yet.
Converting as custom op Elu elu5/Elu
name: "elu5/Elu"
op: "Elu"
input: "bn5/FusedBatchNorm_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Elu yet.
Converting as custom op Elu elu4/Elu
name: "elu4/Elu"
op: "Elu"
input: "bn4/FusedBatchNorm_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Elu yet.
Converting as custom op Elu elu3/Elu
name: "elu3/Elu"
op: "Elu"
input: "bn3/FusedBatchNorm_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Elu yet.
Converting as custom op Elu elu2/Elu
name: "elu2/Elu"
op: "Elu"
input: "bn2/FusedBatchNorm_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Elu yet.
Converting as custom op Elu elu1/Elu
name: "elu1/Elu"
op: "Elu"
input: "bn1/FusedBatchNorm_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Tile yet.
Converting as custom op Tile anchors6/Tile
name: "anchors6/Tile"
op: "Tile"
input: "anchors6/Const"
input: "anchors6/Tile/multiples"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "Tmultiples"
  value {
    type: DT_INT32
  }
}

Warning: No conversion function registered for layer: Tile yet.
Converting as custom op Tile anchors5/Tile
name: "anchors5/Tile"
op: "Tile"
input: "anchors5/Const"
input: "anchors5/Tile/multiples"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "Tmultiples"
  value {
    type: DT_INT32
  }
}

Warning: No conversion function registered for layer: Tile yet.
Converting as custom op Tile anchors4/Tile
name: "anchors4/Tile"
op: "Tile"
input: "anchors4/Const"
input: "anchors4/Tile/multiples"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "Tmultiples"
  value {
    type: DT_INT32
  }
}

Hello,
It looks like the model you are converting to TensorRT contains an unsupported operation, "Tile".
For a list of supported operations, please refer to: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#support_op

For unsupported layers, you can extend TensorRT by implementing custom layers using the IPluginV2 class in the C++ and Python APIs. Custom layers, often referred to as plugins, are implemented and instantiated by an application, and their lifetime must span their use within a TensorRT engine. See https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending
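For context, TensorFlow's Tile op repeats a tensor a given number of times along each dimension (the `multiples` input in the log above), similar to numpy.tile. A custom plugin would have to reproduce exactly this behavior in its kernel. The snippet below is only a sketch of the op's semantics in pure Python for a 2-D case, not TensorRT plugin code; the `tile2d` helper and the example `multiples` values are illustrative, not taken from the model:

```python
# Sketch of what TensorFlow's "Tile" op computes (NOT a TensorRT plugin):
# repeat the input multiples[d] times along each dimension d. A custom
# IPluginV2 kernel for Tile would need to produce the same result.

def tile2d(data, multiples):
    """Tile a 2-D list-of-lists; multiples = (row_repeats, col_repeats)."""
    row_rep, col_rep = multiples
    tiled_cols = [row * col_rep for row in data]  # repeat along columns
    return tiled_cols * row_rep                   # repeat along rows

anchors = [[0.1, 0.2],
           [0.3, 0.4]]
# e.g. a node like anchors7/Tile with hypothetical multiples (2, 3)
out = tile2d(anchors, (2, 3))
print(len(out), len(out[0]))  # prints: 4 6
```

The same plugin approach applies to the Elu warnings in the log, although those are only warnings at this stage; the validator error that stops the parse is the Tile node.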

Regards,
NVIDIA Enterprise Support

Thank you so much