Serialized model inference issue on Xavier

Hello, I am running into some trouble on my platform.

I use a Xavier for algorithm development,
and I use YOLOv3 for detection.

To run this model, I convert it PyTorch → ONNX → TensorRT.

The first time I ran this model, I simply parsed the ONNX file and built the engine with the official interface, and the inference time was normal.
So I serialized the engine to a binary file.
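
For reference, the build-and-serialize step looks roughly like this; a minimal sketch assuming the standard TensorRT Python API, with placeholder file names (exact API names vary by TensorRT version):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

# Parse the ONNX file, build a CUDA engine, and serialize it to disk.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:

    with open("yolov3.onnx", "rb") as f:  # placeholder model path
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parsing failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB of build scratch space

    engine = builder.build_engine(network, config)

    # Serialize the built engine so later runs can skip the build step.
    with open("yolov3.engine", "wb") as f:  # placeholder engine path
        f.write(engine.serialize())
```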

However, when I deserialize the engine and run the model again, there is always one operation that takes a long time to run.
I checked the inference time of each operation using a profiler: one layer, at a fixed position, costs a long time, not all layers.
And when I repeatedly run the same inference, the large time cost disappears after about 10 runs.
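
For reference, a minimal sketch of how I deserialize the engine and profile repeated runs; the binding calls match the TensorRT 7/8 Python API (newer releases renamed them), and the engine path and run count are placeholders:

```python
import time

import numpy as np
import pycuda.autoinit  # noqa: F401 -- initializes the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

class LayerProfiler(trt.IProfiler):
    """Collects the per-layer times that TensorRT reports."""
    def __init__(self):
        trt.IProfiler.__init__(self)
        self.records = []

    def report_layer_time(self, layer_name, ms):
        self.records.append((layer_name, ms))

# Deserialize the engine that was previously written to disk.
runtime = trt.Runtime(TRT_LOGGER)
with open("yolov3.engine", "rb") as f:  # placeholder engine path
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
context.profiler = LayerProfiler()

# Allocate one device buffer per binding (inputs and outputs).
buffers = []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    buffers.append(cuda.mem_alloc(size * np.dtype(dtype).itemsize))

# Run the same inference repeatedly; the profiler fires on every
# synchronous execute_v2 call, so the slowest layer per run is visible.
for run in range(20):
    context.profiler.records.clear()
    start = time.perf_counter()
    context.execute_v2([int(b) for b in buffers])
    total_ms = (time.perf_counter() - start) * 1e3
    name, ms = max(context.profiler.records, key=lambda r: r[1])
    print(f"run {run:2d}: total {total_ms:7.2f} ms, "
          f"slowest layer {name} ({ms:.2f} ms)")
```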

Why does this happen, and how can I solve it?

Since I plan to use a two-stage network in future work, this will have a bad impact.

Hi,

We would like to reproduce this in our environment.
Would you mind sharing the ONNX model and the profiling source with us?

Thanks.

I can share one.
How can I send it to you?

Hi,

Please attach it to the topic directly.
If public sharing is not an option for you, you can send the link through a private message.

Thanks.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Have you fixed this issue yet?
If not, would you mind sharing the model with us?

Thanks.