TF ssd_mobilenet_v2_coco_2018_03_29 model to TRT conversion: unsupported CAST layer

Hi all,

I’ve been playing around for a week now, trying to get my custom-trained ssd_mobilenet_v2_coco_2018_03_29 model running with TensorRT.
I’ve seen a few successful attempts here, but I can’t understand how they managed to bypass the unsupported CAST operation.

Basically my graph is full of CAST layers, which apparently are not yet supported by TensorRT (but will be soon?).
How is it that my model contains CAST layers, when that does not appear to be the case for everyone else?

I downloaded the model from the official TensorFlow model zoo, retrained it following the official tutorial, and froze it using the official export script.
Does it depend on some parameter? The TensorFlow version? …

If anyone can shed light on this, that would be much appreciated!

Regards.

Hi, try using branch r1.13.0 (or any older branch) of the Object Detection API. The master branch uses tf.Cast instead of tf.ToFloat when freezing the model, and that causes the error.
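You can confirm which variant your export produced by scanning the frozen graph's node list for Cast ops before attempting conversion. A minimal sketch of that check, using stand-in (name, op) tuples rather than a real tf.GraphDef so it runs without TensorFlow (the node names are made up; in a real script the nodes would come from graph_def.node, each with .name and .op attributes):

```python
# Schematic pre-flight check for ops the UFF parser cannot convert.
# Stand-in (name, op) tuples replace a parsed tf.GraphDef's node list.

UNSUPPORTED_OPS = {"Cast"}  # rejected by TensorRT's UFF parser at the time of this thread

def find_unsupported(nodes):
    """Return names of nodes whose op type the UFF parser cannot handle."""
    return [name for name, op in nodes if op in UNSUPPORTED_OPS]

# A graph frozen from the master branch uses Cast for preprocessing...
master_nodes = [("Preprocessor/map/Cast", "Cast"), ("conv1", "Conv2D")]
# ...while one frozen from r1.13.0 uses ToFloat instead.
r113_nodes = [("Preprocessor/map/ToFloat", "ToFloat"), ("conv1", "Conv2D")]

print(find_unsupported(master_nodes))  # ['Preprocessor/map/Cast']
print(find_unsupported(r113_nodes))    # []
```

If the list comes back empty, the frozen graph at least clears this particular hurdle.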

Hi,

thanks for the answer.
I did manage to export it with TensorFlow 1.12.
Now I get a protobuf error when executing the sample app:

&&&& RUNNING TensorRT.sample_uff_ssd # ./sample_uff_ssd_debug
[I] ../data/ssd/sample_ssd_relu6.uff
[I] Begin parsing model...
[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
  what():  CHECK failed: (index) < (current_size_):
Aborted (core dumped)
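For what it's worth, that CHECK message means some code indexed a repeated protobuf field past its actual length, i.e. the parsed graph held fewer elements at that point than the caller expected. A rough Python analogue of the failing check (the field contents and index here are invented for illustration):

```python
# Rough analogue of protobuf's "CHECK failed: (index) < (current_size_)".
# A repeated protobuf field behaves like a list; the C++ runtime aborts
# with a FatalException when code reads past its current size.
repeated_field = ["box_0", "box_1"]  # stand-in for a repeated field of size 2
index = 5                            # out-of-range index a caller requested

if not index < len(repeated_field):  # the CHECK that fails in repeated_field.h
    print(f"CHECK failed: ({index}) < ({len(repeated_field)})")
```

In other words, the crash is an out-of-bounds read inside the parser, which typically points at a mismatch between the model actually exported and what the sample expects to find in it.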