Low output feature similarity between trtexec and onnx


I’m comparing the MODNet output features between ONNX and trtexec with an all-zero input, and the cosine similarity is only 0.00029, even though the result looks great.

Does anyone have any idea about this issue?


TensorRT Version:
GPU Type: NVIDIA 3080
Nvidia Driver Version: 472.12
CUDA Version: 11.1
CUDNN Version: 8.1.1
Operating System + Version: Windows 10

Relevant Files

onnx_zeros.py (1.2 KB)

Steps To Reproduce

Using the trtexec command as below

trtexec.exe --onnx=modnet.onnx --minShapes=input:1x3x192x352 --optShapes=input:1x3x192x352 --maxShapes=input:1x3x192x352 --loadInputs=input:input_seg.txt --exportOutput=output_seg.txt --fp16

The input is all zeros.

Use onnx_zeros.py to load the ONNX model and produce the output features.

Hi @jackgao0323,

Could you please share input_seg.txt with us?
Also, we recommend trying the latest TensorRT version, 8.2 EA.

Thank you.

Hi @spolisetty ,
Here is the input_seg.txt.


We could run the trtexec command and results were generated.
Could you please give us more details? You mentioned the results are great; do you mean the results are correct? If so, have you cross-verified the cosine similarity calculation? Also, are you using ONNX Runtime to get the ONNX output?

Thank you.

What does cross-verified mean? MODNet’s output is used directly as the alpha matte.
I get the ONNX output from ONNX Runtime.


We meant please make sure the cosine similarity calculation is working fine. Also, since you mentioned the result looks great, could you please let us know whether you are getting correct results? It’s not clear.

Thank you.

We have used cosine similarity on other models and it seems to work fine, so I don’t think the cosine similarity calculation is the problem.
MODNet is an image segmentation model. We input an image and the result looks correct. I’m curious about this phenomenon of low cosine similarity.
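For reference, a cosine similarity check over two flattened output tensors can be sketched as below (the specific implementation used in the thread is not shown, so this is just a plausible version with dummy data):

```python
# Sketch: cosine similarity between two flattened output tensors.
import numpy as np

def cosine_similarity(a, b):
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dummy stand-ins for the ONNX Runtime and trtexec outputs.
a = np.random.rand(1, 1, 192, 352).astype(np.float32)
b = a + np.random.normal(0, 1e-3, a.shape).astype(np.float32)
print(cosine_similarity(a, b))  # close to 1 when the outputs match
```

If two outputs that look visually identical still score near 0, the vectors being compared are effectively orthogonal, which usually points at a mismatch in how the buffers are read (dtype, layout, or file format) rather than at the model.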


We are looking into this issue, please allow us some time.

Thank you.


We recommend trying the TensorRT 8.2 EA version.
However, there is a fault in the model, as follows.

With polygraphy inspect model, we can see the following:

[I] ==== ONNX Model ====
    Name: torch-jit-export | Opset: 11
    ---- 1 Graph Input(s) ----
    {input [dtype=float32, shape=(1, 3, '-1', '-1')]}
    ---- 1 Graph Output(s) ----
    {output [dtype=float32, shape=(1, 1, '-1', '-1')]}

Note the quotes around the -1s; this means that those are named dims, and since the names are the same, the dimensions should be equal.
That is obviously not the case here, since we’re using input dimensions where H != W.

Assuming the intention was (1, 3, -1, -1) rather than (1, 3, '-1', '-1'), we can fix this pretty easily with surgeon sanitize:

polygraphy surgeon sanitize modnet.onnx -o modnet_fixed.onnx --override-input-shapes input:[1,3,-1,-1] --no-shape-inference

which gives us:

[I] ==== ONNX Model ====
    Name: torch-jit-export | Opset: 11

    ---- 1 Graph Input(s) ---- 
    {input [dtype=float32, shape=(1, 3, -1, -1)]}

    ---- 1 Graph Output(s) ---- 
    {output [dtype=float32, shape=()]}

This model, as expected, works fine. So please fix the model and try on 8.2 EA.

This is the code we used to generate the zero input file.

#include <vector>
#include <fstream>
using namespace std;

int main() {
    // Write 1*3*192*352 float zeros as raw binary data.
    size_t sz = 1 * 3 * 192 * 352;
    vector<float> x(sz, 0.F);
    ofstream f("zeros.txt", ios::binary);
    if (f.is_open()) {
        f.write(reinterpret_cast<const char*>(x.data()), sz * sizeof(float));
    }
    return 0;
}
Thank you.