[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time

Hi,

I am using a Jetson Nano with JetPack 4.2.

TensorRT version: 5.0.6.3

TensorFlow version: 1.13.1 (on both the Nano and the machine on which I export the frozen graph)

cuDNN version: 7.3.1

I have downloaded the SSD MobileNet V2 model from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

I have changed num_classes to 1 since I have only 1 class.

I have commented out the line "batch_norm_trainable: true" since I was getting an error for it.

I also made the changes related to the checkpoints and label.pbtxt.

I have exported the graph using the export_inference_graph.py script from research/object_detection/ (not done on the Jetson Nano).

In the file config/model_ssd_mobilenet_v2_coco_2018_03_29.py I made the following changes:

i) Changed the frozen graph file path (generated by export_inference_graph.py).

ii) Changed numClasses to 2, since I have only 1 class plus the background class (see the sketch below).
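
For reference, in the UFF-SSD style config files numClasses is an attribute of the NMS plugin node and it counts the background class, which is why one object class becomes 2. A minimal sketch, with the other plugin attributes omitted (the real config passes many more parameters, so treat the names here as assumptions rather than my exact file):

import graphsurgeon as gs

# Sketch only: a real NMS_TRT node needs the full set of detection-output
# parameters; this just shows the numClasses edit for 1 class + background.
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    numClasses=2  # 1 object class + 1 background class
)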

And then I ran main.py.

I got the following error:

[TensorRT] ERROR: UFFParser: Validator error: Cast: Unsupported operation _Cast

So, I added the following line:
graph.remove("Cast", remove_exclusive_dependencies=False)
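
For context, here is roughly the surrounding code, assuming the conversion script builds a graphsurgeon DynamicGraph from the frozen graph (the path is a placeholder). This variant removes every node whose op is Cast rather than only a node literally named "Cast":

import graphsurgeon as gs

# Load the frozen TensorFlow graph (placeholder path).
dynamic_graph = gs.DynamicGraph("frozen_inference_graph.pb")

# Find all Cast nodes and remove them, keeping their exclusive dependencies
# so the rest of the graph stays connected.
cast_nodes = dynamic_graph.find_nodes_by_op("Cast")
dynamic_graph.remove(cast_nodes, remove_exclusive_dependencies=False)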

But then I got the error:

“[TensorRT] ERROR: UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time”

Please help.

Hi,

It looks like your model includes an unsupported layer: Cast.
It may be added automatically by TensorFlow as a utility operation.

It's recommended to check how to remove this operation in TensorFlow when retraining the model.
Removing it directly with graphsurgeon may lead to unexpected issues.

Thanks.

I solved the problem with the -1 dimension by exporting my inference graph with a fixed batch size of 1, i.e. an input shape of [1, None, None, 3].
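
For anyone wondering how to do that: the Object Detection API's export_inference_graph.py accepts an --input_shape flag, so the export step looks roughly like this (the checkpoint and output names are placeholders for your own paths; -1 keeps height and width dynamic while fixing the batch dimension to 1):

python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path pipeline.config \
    --trained_checkpoint_prefix model.ckpt-XXXXX \
    --output_directory exported_model_bs1 \
    --input_shape 1,-1,-1,3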

But then I got the error:
ERROR: UFFParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
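
For reference, one workaround that gets mentioned for this error is rewriting the op type with graphsurgeon so the UFF parser sees the older FusedBatchNorm op. I have not verified it myself, so treat this as a sketch (file names are placeholders); re-exporting with an older TensorFlow is probably the cleaner route:

import graphsurgeon as gs

# Load the frozen graph (placeholder path) and rename every FusedBatchNormV3
# node to FusedBatchNorm. For inference the first output of both ops is the
# same batch-norm result, which is all the converted graph uses.
graph = gs.DynamicGraph("frozen_inference_graph.pb")
for node in graph.find_nodes_by_op("FusedBatchNormV3"):
    node.op = "FusedBatchNorm"
graph.write("frozen_inference_graph_patched.pb")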

Hello @Tonto5000

Which version of TensorFlow did you use to train the SSD MobileNet V2 model?

Also, are you using JetPack 4.2 with TensorRT version 5.0.6.3?

I have used the "pipeline.config" included in the tar.gz from the link below for training.

http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

I am asking because I am getting this error:

[TensorRT] ERROR: UFFParser: Validator error: Cast: Unsupported operation _Cast

I have tried the API below to remove the training nodes, but it doesn't help.

frozen_graph = tf.compat.v1.graph_util.remove_training_nodes(
    frozen_graph,
    protected_nodes=None
)
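
For completeness, this is roughly the surrounding code, loading the frozen graph from disk first (paths are placeholders). As far as I can tell, remove_training_nodes only strips training-only nodes such as Identity and CheckNumerics, so it never touches the Cast op the UFF validator complains about:

import tensorflow as tf

# Load the frozen GraphDef from disk (placeholder path).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Strip training-only nodes; this does not remove Cast.
frozen_graph = tf.compat.v1.graph_util.remove_training_nodes(
    graph_def, protected_nodes=None)

# Write the cleaned graph back out (placeholder path).
with tf.io.gfile.GFile("frozen_inference_graph_cleaned.pb", "wb") as f:
    f.write(frozen_graph.SerializeToString())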

Can you please help?

Hi, miteshp.patel

Based on comment #3, maybe you can also try to fix the batch size to 1 instead of -1.
Thanks.

I am having this exact same issue training a new model in TensorFlow from the checkpoint of ssd_inception_v2_coco_2017_11_17, in order to use it with TensorRT.

My goal is to create an object detection model from a custom COCO dataset for use with TensorRT.

I am getting the Cast error, and then the Reshape error once I remove the Cast operations.

How do I remove Cast operations when training a new model with python3 object_detection/model_main.py, along with anything else needed to make it TensorRT compatible?

Also, what is currently the best way to make an object detection model that works with TensorRT?

Fixed per the bottom comment: using an older version of tensorflow/models to export the model.

https://devtalk.nvidia.com/default/topic/1043557/error-uffparser-parser-error-boxpredictor_0-reshape-reshape-1-dimension-specified-more-than-1-/