Failed to convert .pb model to .trt model by following the steps provided in samples/python/uff_ssd

Description

My .pb model has a FusedBatchNorm layer. I have successfully converted the .pb model to a .uff model, but I failed to convert it to a .trt engine. I got the following error messages when I called build_cuda_engine():

[TensorRT] ERROR: cr/bn0/Const: constant weights has count 0 but 1 was expected
[TensorRT] ERROR: UffParser: Parser error: cr/bn0/FusedBatchNorm: Invalid Batchnorm inputs for layer cr/bn0/FusedBatchNorm
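
For reference, this is roughly how I parse the .uff file and build the engine, following the uff_ssd sample (the file name, the input/output node names, and the input shape below are placeholders for my model):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 30
    # Placeholder input/output node names and CHW input shape for my model
    parser.register_input("Input", (3, 300, 300))
    parser.register_output("Output")
    parser.parse("model.uff", network)           # the errors above are printed here
    engine = builder.build_cuda_engine(network)  # returns None because parsing failed
```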

So I took a look at the batchnorm layer in both the .pb and the .uff model. I noticed that the mean and the variance both have a size of 0, as shown in the attached images.
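
This is roughly how I inspected the constants feeding the FusedBatchNorm node in the frozen graph (the file name is a placeholder; the node-name prefix comes from the error above):

```python
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:   # placeholder path to my frozen graph
    graph_def.ParseFromString(f.read())

# Print the shapes of the constants under the batchnorm scope named in the error
for node in graph_def.node:
    if node.op == "Const" and node.name.startswith("cr/bn0"):
        dims = [d.size for d in node.attr["value"].tensor.tensor_shape.dim]
        print(node.name, dims)   # the mean/variance constants come out with no dimensions
```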


It seems the UffParser needs the mean and variance of the batchnorm layer to have a size of 1 (for example, mean = [0] and variance = [1]). Could anyone help me resolve this? Thanks in advance.

Environment

TensorRT Version: 7.0.0.11
GPU Type: 2080Ti
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.14

Hi @semiswiet,
The UFF parser has been deprecated from TRT 7 onwards,
hence request you to use either of the following two ways to generate your TRT engine (rough sketches of both are below):
pb → ONNX → TRT
TF-TRT
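
For the ONNX route, a rough sketch: first convert the frozen graph with the tf2onnx command-line tool, e.g. `python -m tf2onnx.convert --graphdef model.pb --inputs input:0 --outputs output:0 --opset 11 --output model.onnx` (the file and node names are placeholders for your model), then build the engine with the TensorRT Python ONNX parser:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 30
    with open("model.onnx", "rb") as f:          # placeholder ONNX file name
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))       # report any ONNX parsing issues
    engine = builder.build_cuda_engine(network)
```

And a rough sketch of the TF-TRT route for TF 1.14 (the output node name is a placeholder for your model):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_graph_def=graph_def,       # your loaded frozen GraphDef
    nodes_blacklist=["output"],      # placeholder output node name(s)
    precision_mode="FP16")
trt_graph = converter.convert()      # optimized GraphDef with TRTEngineOp nodes
```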
Thanks!

Thank you @AakankshaS. I will try these.