Sorry to ask a question again, but this has bothered me for a long time.
According to the official docs (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#support_op), TensorRT supports the FusedBatchNorm op, but the docs do not say which TensorFlow batch-norm API it corresponds to — tf.layers.batch_normalization or tf.slim.batch_norm?
In my model I use tf.slim.batch_norm, but when converting to a UFF file I get the error that many TensorRT users have encountered:
[TensorRT] INFO: UFFParser: parsing model_0/resnet_v1_50/conv1/batch_normalization/FusedBatchNorm
[TensorRT] ERROR: Parameter check failed at: Utils.cpp::reshapeWeights::71, condition: input.values != nullptr
[TensorRT] ERROR: UFFParser: Parser error: model_0/resnet_v1_50/conv1/batch_normalization/FusedBatchNorm: reshape weights failed!
It seems that TensorRT does not support tf.slim.batch_norm.
When I use tf.layers.batch_normalization instead of tf.slim.batch_norm and leave the training parameter at its default value (False), the conversion works. But when I train a model with training set to True, then convert the trained ckpt model to a UFF file and parse that UFF file, I still get the error above.
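For reference, the workaround I have been trying is to freeze the graph before UFF conversion, so that the batch-norm variables become constants and the parser sees real weight values rather than the null inputs it complains about. A minimal sketch of that step (input shape, layer sizes, and the "input"/"output" names are illustrative, not from my actual model; tf.compat.v1 is used so it also runs under TF 2.x):

```python
import tensorflow as tf

tf1 = tf.compat.v1  # TF 1.13+ / 2.x; on older TF 1.x use tf directly
if hasattr(tf1, "disable_eager_execution"):
    tf1.disable_eager_execution()  # needed under TF 2.x, no-op otherwise

graph = tf1.Graph()
with graph.as_default():
    x = tf1.placeholder(tf.float32, [1, 224, 224, 3], name="input")
    net = tf1.layers.conv2d(x, 8, 3, name="conv1")
    # training=False: inference path, uses the frozen moving statistics
    net = tf1.layers.batch_normalization(net, training=False, fused=True)
    out = tf1.identity(net, name="output")

    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        # saver.restore(sess, ckpt_path)  # restore trained weights here
        # Freezing turns Variables into Consts, so FusedBatchNorm's
        # scale/offset/mean/variance inputs carry actual values.
        frozen = tf1.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ["output"])

# uff.from_tensorflow(frozen, ["output"])  # then convert the frozen graph
```

This is only a sketch of the freeze step; the uff.from_tensorflow call at the end is commented out since it needs TensorRT's Python bindings installed.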
So which type of batch-norm layer does TensorRT support?
Linux distro and version:
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.4.1708 (Core)
Release: 7.4.1708
Codename: Core
GPU type: Tesla V100
NVIDIA driver version: 396.44 (from NVIDIA-SMI)
CUDA version: 9.0
cuDNN version: 7.3.0
Python version [if using python]: python2.7
TensorRT version: 22.214.171.124
gcc > 5.3 / lib64
Looking forward to your reply.