Eager execution of the conditional graphs

Description

I prepared an ONNX model with a conditional (If) node: depending on the condition, either one branch of the graph is executed or the other. The condition checks whether the result of a ReduceMax operation, applied to the output of a subgraph, is less than a provided constant. The then branch takes a long time to execute, while the else branch is almost immediate. To make sure the model performs lazy evaluation correctly, I ran performance tests in both the ONNX runtime and the TensorRT runtime (after conversion from ONNX using trtexec, opset 13). For each runtime I prepared two models, in which the condition is either always met (the constant is set high enough) or never met (the constant is set too low). In ONNX the execution times are reasonable (the model whose condition is never met is much faster), which indicates that lazy evaluation is performed as expected, but TensorRT clearly executes both branches eagerly.

My question regarding the above: is this behavior expected? The documentation suggests that, in principle, the conditional should be evaluated lazily. The whole purpose of my model is to avoid unnecessary computation, and with eager execution the model doesn't make any sense.
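
For illustration, here is a simplified sketch of the kind of graph I mean, built with onnx.helper (this is not the uploaded model; the branch bodies, tensor names and threshold value are placeholders, and in the real model the then branch is a heavy subgraph):

import onnx
from onnx import TensorProto, helper

# Threshold constant used by the condition (placeholder value).
threshold = helper.make_tensor("threshold", TensorProto.FLOAT, [], [0.0])

# cond = ReduceMax(input) < threshold
reduce_max = helper.make_node("ReduceMax", ["input_image_in"], ["max_val"], keepdims=0)
less = helper.make_node("Less", ["max_val", "threshold"], ["cond"])

# Placeholder branches: in the real model the "then" branch is the expensive
# subgraph and the "else" branch is the cheap one. Both reference max_val from
# the outer scope and simply pass it through here.
then_out = helper.make_tensor_value_info("then_out", TensorProto.FLOAT, [])
then_body = helper.make_graph(
    [helper.make_node("Identity", ["max_val"], ["then_out"])],
    "then_body", [], [then_out])
else_out = helper.make_tensor_value_info("else_out", TensorProto.FLOAT, [])
else_body = helper.make_graph(
    [helper.make_node("Identity", ["max_val"], ["else_out"])],
    "else_body", [], [else_out])

if_node = helper.make_node("If", ["cond"], ["output"],
                           then_branch=then_body, else_branch=else_body)

graph = helper.make_graph(
    [reduce_max, less, if_node], "conditional_example",
    [helper.make_tensor_value_info("input_image_in", TensorProto.FLOAT, [1, 224, 224, 3])],
    [helper.make_tensor_value_info("output", TensorProto.FLOAT, [])],
    initializer=[threshold])

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.save(model, "conditional_example.onnx")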

Environment

TensorRT Version: v8202
GPU Type: NVIDIA GeForce RTX 3090
Nvidia Driver Version: 510.54
CUDA Version: 11.6
CUDNN Version: 8.3
Operating System + Version: Ubuntu 20.04.3 LTS

Relevant Files

onnx_example_0.0.onnx (37.0 MB)
onnx_example_1.0.onnx (37.0 MB)
test_conditional_onnx.py (756 Bytes)

Steps To Reproduce

For the evaluation in ONNX, run the file test_conditional_onnx.py, setting model_onnx_path = 'onnx_example_0.0.onnx' or model_onnx_path = 'onnx_example_1.0.onnx'. In the model onnx_example_0.0.onnx, the condition is never met, which means much faster execution. In the model onnx_example_1.0.onnx, the condition is always met, which means much slower execution.
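
The attached script essentially just times repeated ONNX Runtime inference; a rough, simplified equivalent is sketched below. The execution providers, iteration count and the float32 input dtype are assumptions, not copied from the attached file.

import time
import numpy as np
import onnxruntime as ort

model_onnx_path = 'onnx_example_0.0.onnx'  # or 'onnx_example_1.0.onnx'

session = ort.InferenceSession(
    model_onnx_path,
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)  # assumed float32 input

session.run(None, {input_name: dummy})  # warm-up run
start = time.perf_counter()
for _ in range(100):
    session.run(None, {input_name: dummy})
print("mean latency [ms]:", (time.perf_counter() - start) / 100 * 1000)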

However, after converting the above models to TRT using the following command:

trtexec --onnx=/path_to_onnx --explicitBatch --saveEngine=/engine_saving_path --workspace=2048 --device=X --fp16

both of them have very similar execution times, which suggests that both conditional branches are executed eagerly.


Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validating your model with the below snippet.

check_model.py

import sys
import onnx

# usage: python check_model.py path_to_your_onnx_model
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec "--verbose" log for further debugging.
Thanks!

I’ve just updated my post by uploading the two models and a script to run them in ONNX. The difference between them lies in which conditional branch is executed. With lazy evaluation, one of them should be much faster. This is the case in ONNX, but not after converting to TRT.

Commands that I used to convert and run the models:

trtexec --onnx=onnx_example_1.0.onnx --explicitBatch --saveEngine=onnx_example_1.0.trt --workspace=2048 --device=1 --fp16 --minShapes=input_image_in:1x224x224x3 --optShapes=input_image_in:1x224x224x3 --maxShapes=input_image_in:1x224x224x3

trtexec --onnx=onnx_example_0.0.onnx --explicitBatch --saveEngine=onnx_example_0.0.trt --workspace=2048 --device=1 --fp16 --minShapes=input_image_in:1x224x224x3 --optShapes=input_image_in:1x224x224x3 --maxShapes=input_image_in:1x224x224x3

Inference times are almost the same. When running the script attached to my post, ONNX inference times for the model onnx_example_0.0.onnx are around 5-6 times lower than for the model onnx_example_1.0.onnx.

I generated the output of the above trtexec command with the --verbose flag for both models. However, I cannot upload the logs because of the error "new users cannot put links in posts". The results can be easily reproduced, though.

Hi,

We don’t think Myelin does eager execution of both branches. Depending on the model, Myelin can potentially perform speculative execution, but it is hard to say what is going on in this particular model. We recommend you provide us an Nsight Systems (nsys) trace for a better understanding, so we can check whether both branches are actually executed or whether the slowdown is due to some other reason.
https://docs.nvidia.com/nsight-systems/UserGuide/index.html

Thank you.

Hi. I ran the following command:

nsys profile trtexec --onnx=onnx_example_0.0.onnx --explicitBatch --saveEngine=onnx_example_0.0.trt --workspace=2048 --device=0 --minShapes=input_image_in:1x224x224x3 --optShapes=input_image_in:1x224x224x3 --maxShapes=input_image_in:1x224x224x3

Unfortunately the generated report weighs over 0.5 GB (for each of the two models). I cannot attach it to my comment as it is too big. How can I share it with you?

Thanks!

Could you please share via gdrive?

Sure, I uploaded everything here:
https://drive.google.com/drive/folders/1ymMarAtgAJ2HXHvfZjAnYZ-_iWZsg4w0?usp=sharing

The models are a bit different (static this time), but the outcome is the same. All models (both onnx and trt) are available in the shared directory.

Hey. Did you have a chance to look into this issue?

Hi,

We are tracking this issue internally.

Thank you.


Hi everyone. Will this be fixed? We don’t know if we should look for a different technology to achieve our goal. Please let us know, thanks in advance!

Hi,

This issue has been fixed; please try the latest TensorRT version, 8.5.2.

Thank you.


It works, thanks a lot!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.