TensorRT engine - Impossible to parse UFF file

Hi all,

I am trying to follow the tutorial https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification on my Jetson Nano.

I am particularly focused on the Inception network.

When I try to convert my frozen .pb file into UFF, I get this kind of warning:

Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_7c/Branch_0/Conv2d_0a_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3

The UFF file is still created in the end, but when I try to build the TensorRT engine I get a parser error:

UffParser: Validator error: InceptionV3/InceptionV3/Mixed_7c/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
Failed to parse UFF

How can I get past this error?

I am using a Jetson Nano with JetPack 4.2.

TensorFlow version: 1.13.1
UFF version: 0.6.3
TensorRT version: 5.1.6.1

Hi,
Please check the following post:
https://devtalk.nvidia.com/default/topic/1066445/tensorrt/tensorrt-6-0-1-tensorflow-1-14-no-conversion-function-registered-for-layer-fusedbatchnormv3-yet/post/5403567/#5403567
The FusedBatchNormV3 layer is not supported by the UFF parser; please refer to the post above and see if you can convert the model successfully.
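For reference, a common workaround discussed for this error (before switching to ONNX) is to rewrite every FusedBatchNormV3 node in the frozen GraphDef as the older FusedBatchNorm op that the UFF converter does support. Below is a minimal, hedged sketch of that rename. It uses a tiny stand-in Node class instead of a real TensorFlow GraphDef node so the logic is runnable on its own; with TensorFlow you would apply the same loop to `graph_def.node`. The handling of the V3-only "U" attribute is an assumption and may need adjusting for your graph.

```python
# Sketch of the FusedBatchNormV3 -> FusedBatchNorm rename workaround.
# A real GraphDef node has an `op` field and an `attr` map; this stand-in
# mimics just those two fields so the logic runs without TensorFlow.

class Node:
    def __init__(self, op, attr=None):
        self.op = op
        self.attr = attr if attr is not None else {}

def downgrade_fused_batch_norm(nodes):
    """Rename FusedBatchNormV3 ops to FusedBatchNorm and drop the
    V3-only 'U' attribute, which the older op does not declare.
    Returns the number of nodes rewritten."""
    renamed = 0
    for node in nodes:
        if node.op == "FusedBatchNormV3":
            node.op = "FusedBatchNorm"
            node.attr.pop("U", None)  # attribute added in the V3 op only
            renamed += 1
    return renamed

# With TensorFlow 1.x this would be roughly (untested sketch):
#   graph_def = tf.GraphDef()
#   with open("frozen.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
#   downgrade_fused_batch_norm(graph_def.node)
#   # then re-serialize graph_def and run the UFF converter on it
```

After the rewrite, the UFF converter no longer sees any FusedBatchNormV3 ops, so no custom-op fallback is emitted.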

Hi,
Please also check:
https://devtalk.nvidia.com/default/topic/1070045/jetson-nano/tf-trt-vs-tensorrt/post/5421433/#5421433
It gives some guidance on handling unsupported layers in a model.

Thanks for the feedback.
I will definitely go with the ONNX conversion.

I faced a parsing issue with ONNX (the parser stopped at the first layer): I had forgotten to specify the batch size (I had left it as None) before freezing the model.
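As a side note, the batch dimension can also be pinned at conversion time: tf2onnx accepts an explicit shape override on the input tensor, so a graph frozen with a None batch size can still yield a parseable ONNX file. A hedged sketch of the command; the file and tensor names and the input shape are assumptions for InceptionV3, not values from this thread:

```shell
# Convert the frozen graph to ONNX, forcing batch size 1 on the input.
# "input:0[1,299,299,3]" overrides the placeholder's None batch dimension.
# The input/output tensor names are assumptions - check them with a tool
# such as Netron for your own model.
python -m tf2onnx.convert \
    --input frozen_inception_v3.pb \
    --inputs "input:0[1,299,299,3]" \
    --outputs "InceptionV3/Predictions/Reshape_1:0" \
    --output inception_v3.onnx
```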

I succeeded in creating a TensorRT engine.
I will now test inference.

@stephane.lestienne I have the same problem and the same error. Can you help me out with how you solved this issue? Any source, reference link, or code would be appreciated.

@maazqureshi446

I solved this issue as follows:

1. Load the H5 model, modify the first layer to have a fixed batch size, then freeze it.

2. Convert the frozen model to ONNX.

3. Use the ONNX parser to build a TensorRT engine.
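For anyone following the same route, the last step does not require writing any parser code: TensorRT ships a trtexec sample binary that parses an ONNX file and serializes an engine. A sketch of the command, assuming the placeholder file names used here; the trtexec path is the usual JetPack location, and flag availability can differ between TensorRT releases, so check `trtexec --help` on your install:

```shell
# Build and save a TensorRT engine from the ONNX model.
# File names are placeholders; --saveEngine may not exist on very old
# TensorRT releases - verify with `trtexec --help` first.
/usr/src/tensorrt/bin/trtexec \
    --onnx=inception_v3.onnx \
    --saveEngine=inception_v3.engine
```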