TensorRT Optimisation: Fusing Layers Causes Slightly Different Results

Description

I recently upgraded from TensorRT 6 to TensorRT 7 and noticed that the output tensor data produced from the same input data were slightly off (about a 2% difference from the TensorRT 6 results, and 4% off from the expected values). After enabling the debug logger in TensorRT 7 and comparing against TensorRT 6, I found that the primary difference was that two of my Caffe model layers (a convolution and an eltwise) were being fused together during the optimisation phase.
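
For reference, this is roughly how I enabled the debug output (a minimal sketch against the TensorRT 7 C++ API; the class name is my own):

```cpp
#include <iostream>
#include "NvInfer.h"

// A minimal sketch of a logger that prints everything, including the
// kVERBOSE messages in which TensorRT reports which layers it fuses
// during the optimisation phase.
class VerboseLogger : public nvinfer1::ILogger
{
public:
    void log(Severity severity, const char* msg) override
    {
        std::cout << msg << std::endl;  // print all severities, incl. kVERBOSE
    }
};

int main()
{
    VerboseLogger logger;
    // The builder reports its fusion decisions through the logger it is given.
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    // ... parse the Caffe model and build the engine as usual ...
    builder->destroy();
    return 0;
}
```

Diffing the build logs from both versions is how I spotted the extra fusion.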

I therefore worked around this by explicitly marking the output of this convolution layer for capture (i.e. as a network output), which forced my TensorRT network to skip the fusion and gave me results similar to TensorRT 6.
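
The workaround looks roughly like this (a sketch of what I did; `network` and `convLayer` are placeholder handles for the parsed network and the affected layer):

```cpp
#include "NvInfer.h"

// Sketch: marking a layer's output tensor as a network output prevents
// TensorRT from fusing that layer away, since marked tensors must be
// preserved. "network" and "convLayer" are hypothetical handles obtained
// from the Caffe parser.
void preventFusion(nvinfer1::INetworkDefinition* network,
                   nvinfer1::ILayer* convLayer)
{
    nvinfer1::ITensor* convOut = convLayer->getOutput(0);
    network->markOutput(*convOut);
}
```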

I am confused as to why fusing two layers would yield such different results.

Environment

TensorRT Version: 7
CUDA Version: 10.2
CUDNN Version: 7.6.5.32-1
Operating System: Ubuntu 18.04 (64-bit)


Hi,

Could you please share the model and a sample script to reproduce the issue so we can help better?

Thanks

It's a privately trained model, so I'd prefer not to share it. Is there a method to disable this fusion?

Can you try the latest TRT 7.1 EA release and let us know if you are facing a similar issue on the latest TRT release as well?
If the issue persists, please share the verbose debug logs.

Thanks

Do you know where I can find the 7.1 EA release? I am following the install page and searching for the 7.1 release on the TensorRT 7.x download page, but I am not finding the installers for 7.1.

Sorry, I missed updating this.
The TRT 7.1 EA release is applicable to NVIDIA® Jetson™ Linux for Tegra™ users only.
Please stay tuned for a new TRT release update.

Thanks

Perfect, I will wait for TRT 7.1 on Ubuntu and test it. If I still experience similar issues, I will post the logs. For now I am at least able to bypass the fusion by requesting to capture the output that is being fused.

Do you have a rough idea of when 7.1 EA will be available?

Thank you for all the help!

I don’t have info regarding the TRT 7.1 release dates, but I will update you as soon as I hear anything.
In the meantime, please stay tuned to the NVIDIA TRT website.

Thanks