TRT-TF Integration

TensorFlow has TF-TRT integration, which converts every node it can to TensorRT operations and leaves the rest as native TF. Is there any work on a TRT-TF integration that gives the user the option to fall back to TF operations (using the TensorFlow .so file) instead of writing a custom plugin for each unsupported layer? My current model has too many unsupported operations, and the plugins would be a nightmare to maintain, given the recent issue I faced with custom plugins not being backward compatible.

Hello,

Correct. TensorFlow calls TensorRT to execute the TensorRT-optimized nodes, and TF-TRT falls back to native TF if it hits an unsupported op.

https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html
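For reference, this is roughly what the fallback behavior looks like from the TensorFlow side. A minimal sketch assuming the TF 1.x contrib API described in the linked doc; the frozen-graph path and output node name are placeholders, not from this thread:

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # TF 1.x contrib API

# Hypothetical frozen graph file and output node name, for illustration.
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# TF-TRT replaces supported subgraphs with TRTEngineOp nodes; every
# unsupported op stays as a native TensorFlow op (the fallback above).
# minimum_segment_size sets how many consecutive supported ops a
# subgraph needs before it is converted at all.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["output"],  # placeholder output node name
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16",
    minimum_segment_size=3,
)
```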

@NVES my question is: is there anything in the pipeline for future TensorRT releases that would let me do the following?

Deploy on a DRIVE/Jetson platform using the .uff workflow with nvinfer, without having to write custom plugins; TensorRT would fall back to TensorFlow for unsupported operations, the exact opposite of how TF-TRT integration works now (as you described). This would be useful because I can't (don't know the right way to) install TF directly on the embedded platform and deploy the .pb file there. A sketch of the current UFF import path, where this fallback is missing, follows below.
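For concreteness, here is the import path I mean. A minimal sketch using the TensorRT Python API (TRT 5/6 era, matching the .uff workflow); the model file and tensor names are placeholders. Today parse() simply fails at the first unsupported op unless a matching plugin is registered; there is no option to hand that op back to TensorFlow:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Hypothetical file/tensor names, for illustration only.
UFF_PATH = "model.uff"
INPUT_NAME, INPUT_SHAPE = "input", (3, 224, 224)
OUTPUT_NAME = "output"

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input(INPUT_NAME, INPUT_SHAPE)
    parser.register_output(OUTPUT_NAME)
    # parse() returns False on the first unsupported op; the only
    # remedy today is registering a custom plugin for that op.
    if not parser.parse(UFF_PATH, network):
        raise RuntimeError("UFF parse failed: unsupported op, no plugin")
    builder.max_workspace_size = 1 << 30
    engine = builder.build_cuda_engine(network)
```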