I’ve been trying for some time now to optimize a DeepLab v3+ model (the original TF model) using TensorRT, without luck. I keep getting errors about unsupported layers in UFF (resize, for instance). Has anyone managed to convert a DeepLab model using UFF and TensorRT?
If so, I’d really appreciate it if you could share how.
I cannot convert it through UFF either, so I use TensorFlow integration with TensorRT (TF-TRT) to optimize instead. However, the speedup is limited on the TX2.