Increase in Latency after Optimising DeepLab Model using TensorRT

We are trying to benchmark TensorRT optimization of segmentation models (in the TensorFlow framework), namely HarDNet and DeepLab, using trtexec. After some tryouts we were able to build optimized engines for the HarDNet models with better performance, but in the case of the DeepLab models there is an increase in latency (from 300.3921 ms to 5400 ms) after optimization.

While building the engine for the DeepLab model we got the warning "TensorRT currently ignores the 'seed' field in RandomUniform op. Random seeds will be used".

We have a couple of questions:

  1. Is this warning, and the subsequent action taken by TensorRT, inducing the increase in latency?
    If it is, would removing this operation (RandomUniform) be the right approach?

  2. If this warning is not related to the increase in latency, what should be our approach to debugging this issue?

Environment

TensorRT Version: 7.2.3.4
GPU Type: Tesla T4
Nvidia Driver Version: 460.32.03
CUDA Version: 10.2
CUDNN Version: 8.1.1.33
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7
TensorFlow Version: 2.5.0

Hi @ajay4 ,

Can you share the code for the conversion from .pb to ONNX and the trtexec command?

Also, please check the ONNX model using the checker function and see if it passes:
import onnx
model = onnx.load("model.onnx")
onnx.checker.check_model(model)
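
Optionally, to see which ops the converted graph contains (for example, whether RandomUniform shows up), you can also print a readable summary; a minimal sketch using the standard onnx helper (the model path is a placeholder):
import onnx
model = onnx.load("model.onnx")  # placeholder path; point this at your converted model
print(onnx.helper.printable_graph(model.graph))  # prints every node with its op type and inputs/outputs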

If the issue persists, could you please share the ONNX model so we can better help?

Thanks

Hi,
We recommend you check the links below, as they might answer your concern:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#samples
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-722/quick-start-guide/index.html#framework-integration
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#integrate-ovr
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usingtftrt

If the issue persists, we request you to share the model and script so that we can try reproducing the issue at our end.
Thanks!

Hello @bgiddwani
I have used the tf2onnx.convert command to convert the frozen graph to ONNX,
and then I was trying to benchmark it using trtexec.
Here is the snippet:
!python -m tf2onnx.convert --graphdef "/content/drive/MyDrive/new_graph.pb" --output model.onnx --inputs input_1:0 --outputs activation_81/truediv:0
!./trtexec --onnx=/content/model.onnx --explicitBatch --minShapes='input_1:0':1x1200x1200x3 --optShapes='input_1:0':1x1200x1200x3 --maxShapes='input_1:0':1x1200x1200x3 --fp16 --inputIOFormats=fp16:chw --outputIOFormats=fp32:chw --shapes='input_1:0':1x1200x1200x3 --saveEngine=/content/model_fp16.trt --workspace=10000

Can you share your email so that I can send you the model?

Hi @ajay4,

You can DM us the ONNX model and complete verbose logs via Google Drive or another method.
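
To capture the complete verbose logs, one option (assuming the same Colab setup as your earlier commands) is to append --verbose to your existing trtexec invocation and redirect its output to a file, for example:
!./trtexec --onnx=/content/model.onnx --explicitBatch --minShapes='input_1:0':1x1200x1200x3 --optShapes='input_1:0':1x1200x1200x3 --maxShapes='input_1:0':1x1200x1200x3 --fp16 --inputIOFormats=fp16:chw --outputIOFormats=fp32:chw --shapes='input_1:0':1x1200x1200x3 --saveEngine=/content/model_fp16.trt --workspace=10000 --verbose > trtexec_verbose.log 2>&1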

Thank you.

Hi @ajay4, can you send the details to @spolisetty?

Hello @bgiddwani, can you share their email so that I can share the details?

@ajay4,

I sent a message with mail id.

Thank you.

@ajay4

  1. Can you please check and confirm exactly which layer uses RandomUniform? Does that layer support TensorRT optimization? (A sketch for locating the RandomUniform nodes is below.)

  2. Please check the trtexec logs and confirm whether graph calibration or graph inference is carried out.
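
For point 1, a minimal sketch to list where RandomUniform appears in the ONNX graph (the model path is a placeholder):
import onnx
model = onnx.load("model.onnx")  # placeholder path; use your converted model
for node in model.graph.node:
    if node.op_type == "RandomUniform":
        print("RandomUniform node:", node.name, "outputs:", list(node.output))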

Hello @ajay4
In the code snippet you have specified:
--inputIOFormats=fp16:chw --outputIOFormats=fp32:chw

Can you please set --outputIOFormats=fp16:chw and analyze the performance once.
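
For reference, this would be your earlier trtexec command with only the output IO format changed from fp32 to fp16 (all other flags kept as-is):
!./trtexec --onnx=/content/model.onnx --explicitBatch --minShapes='input_1:0':1x1200x1200x3 --optShapes='input_1:0':1x1200x1200x3 --maxShapes='input_1:0':1x1200x1200x3 --fp16 --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw --shapes='input_1:0':1x1200x1200x3 --saveEngine=/content/model_fp16.trt --workspace=10000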

Hello @kantasaroja
I have tried changing the output format but it did not work out. Can you suggest anything else?

Hello @aryan.gupta18
According to the logs and documentation, TensorRT does support the RandomUniform layer, but seeding (a parameter passed to the RandomUniform op in ONNX format that makes it generate the same sequence of random numbers every time) is not supported.
That is why I am getting the warning message saying "TensorRT currently ignores the 'seed' field in RandomUniform op. Random seeds will be used".
This should not be an issue, right?
I have checked the logs; graph inference is carried out.

Hi @ajay4,

Are you still facing this issue?