I’m not sure how to describe this situation briefly in the title…
I’m building a DLL with TensorRT, called from C#, to speed up the inference process.
From C#, I call the function inf_dllTRT_c1() to run inference.
The JSON file holds settings for the DLL, such as the model path and model size.
(You can see the code in the attached zip file below for more details.)
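For context, the exported entry point looks roughly like this (a simplified sketch; the parameter names here are illustrative, the real signature is in the zip):

```cpp
// Simplified sketch of the DLL export called from C# via P/Invoke.
// Parameter names are illustrative; the real signature is in the attached zip.
extern "C" __declspec(dllexport)
int inf_dllTRT_c1(const char* jsonConfigPath,  // JSON with model path, sizes, ...
                  float*      inputData,       // preprocessed input buffer
                  float*      outputData)      // buffer that receives the result
{
    // Settings are loaded once via init_inference(jsonConfigPath);
    // afterwards each call only runs DoInference().
    return 0;  // 0 = success in this sketch
}
```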
The thing is, I run one inference while initializing the settings in the init_inference() function in my code,
and the very first inference takes noticeably longer than the ones after it.
If I run the DoInference() function 5 times,
the first call takes about 37 ms,
while the 2nd through 5th calls each take about 27–32 ms.
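For reference, the timing is measured roughly like this (simplified; DoInference()’s real signature is in the zip, the declaration below is just an assumption):

```cpp
// Simplified timing loop around DoInference() using std::chrono.
#include <chrono>
#include <cstdio>

bool DoInference(float* input, float* output);  // assumed signature, see zip

void TimeInference(float* input, float* output, int runs)
{
    for (int i = 0; i < runs; ++i)
    {
        auto t0 = std::chrono::high_resolution_clock::now();
        DoInference(input, output);
        auto t1 = std::chrono::high_resolution_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("run %d: %.1f ms\n", i + 1, ms);
    }
}
```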
When this code is built as an .exe,
it works as expected: inference stays fast after the first run.
But when it’s built as a DLL,
the speed-up only lasts for a few runs (after several consecutive calls in a for loop),
and the later runs fall back to the speed of the very first, initialization-time inference.
Is this caused by some object not being reused during the inference process in my code?
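By “reuse” I mean the pattern below: the runtime, engine, execution context, CUDA stream, and device buffers are created once in init_inference() and kept alive, so DoInference() only copies data and enqueues. This is a simplified sketch with illustrative names (it uses enqueueV2(), i.e. TensorRT 6 or newer), not my exact code:

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

// All created once in init_inference() and kept alive for the DLL's lifetime.
static nvinfer1::IRuntime*          gRuntime = nullptr;
static nvinfer1::ICudaEngine*       gEngine  = nullptr;
static nvinfer1::IExecutionContext* gContext = nullptr;
static cudaStream_t                 gStream  = nullptr;
static void*                        gBuffers[2] = { nullptr, nullptr };  // in / out

bool DoInference(const float* input, float* output,
                 size_t inBytes, size_t outBytes)
{
    // No engine/context creation here -- only copy, enqueue, copy back.
    cudaMemcpyAsync(gBuffers[0], input, inBytes,
                    cudaMemcpyHostToDevice, gStream);
    gContext->enqueueV2(gBuffers, gStream, nullptr);
    cudaMemcpyAsync(output, gBuffers[1], outBytes,
                    cudaMemcpyDeviceToHost, gStream);
    return cudaStreamSynchronize(gStream) == cudaSuccess;
}
```

Is something in this pattern likely to stop working when the code lives in a DLL instead of an .exe?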
TensorRT Version: 184.108.40.206
GPU Type: RTX 2080 Ti
Nvidia Driver Version: 451.82
CUDA Version: 10.0
CUDNN Version: 7.6.2
Operating System + Version: Windows 10 1903
Python Version (if applicable): 3.7.0
TensorFlow Version (if applicable): 1.13.1
PyTorch Version (if applicable): 1.6 (?)
Baremetal or Container (if container which image + tag): -
It seems that installing Cognex is necessary to run my C# code,
so I’ll just provide the .cs file instead of the project file. Here’s the link:
Thanks in advance for any help or advice!