We haven’t tried that yet; we’ll give ONNX Runtime a try.
Up until now we have used the .pb model (TensorFlow SavedModel), and it works quite well on the Jetson, but it takes a long time to load.
Thanks, we look forward to your news.
Hi AastaLLL,
we compiled the model with fixed sizes (for both image_input and template_input). This way the whole pipeline (pb → onnx → trt) works.
But the problem is that we need at least 30 models ready, and right now only 15 of them can be kept ready to use within a reasonable memory limit. If you manage to convert the model with dynamic shapes to TensorRT, that would be fantastic.
Any news?
Hello,
First of all we thank you for your help.
We compiled the model with fixed dimensions from TensorFlow, setting the two input layers to fixed sizes, and then regenerated the ONNX model.
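For anyone following along, the shape-fixing step can also be done at conversion time with tf2onnx's `--inputs` shape override, roughly like this (paths, tensor names, and sizes below are placeholders, not our actual values):

```shell
# Override the two input shapes while converting the SavedModel to ONNX.
python -m tf2onnx.convert \
  --saved-model ./saved_model \
  --output model_fixed.onnx \
  --inputs image_input:0[1,600,800,3],template_input:0[1,200,200,3]
```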
As already mentioned, the dimensions are variable; however, there are two images, one larger (maximum size 800x600x3) and one smaller (maximum size 200x200x3).
Thanks for the advice.
We tried scaling the input, but doing so worsens the detection accuracy.
We are currently using the workaround of generating at most 6 models with the most representative input sizes.
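The bucketing workaround can be sketched like this; the six sizes listed are hypothetical placeholders, not our actual set:

```python
# Hypothetical fixed input sizes (width, height) of the six compiled models.
BUCKETS = [(160, 120), (200, 150), (320, 240), (480, 360), (640, 480), (800, 600)]

def pick_bucket(width, height):
    """Pick the smallest model size that fits the image, else the largest one."""
    fitting = [b for b in BUCKETS if b[0] >= width and b[1] >= height]
    if fitting:
        return min(fitting, key=lambda b: b[0] * b[1])
    return max(BUCKETS, key=lambda b: b[0] * b[1])

print(pick_bucket(300, 200))  # smallest bucket that fits a 300x200 image
```

Each incoming image is then routed to the engine compiled for its bucket, trading some wasted computation for not having to rescale the input.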
Of course, having a single model with dynamic shapes would have been great!
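If dynamic shapes do become supported for this model, a single engine could in principle be built with trtexec optimization profiles, roughly as below (this assumes the ONNX export keeps the spatial dimensions dynamic; tensor names and shape ranges are placeholders):

```shell
# One engine covering a range of input sizes via min/opt/max shape profiles.
trtexec --onnx=model.onnx \
  --minShapes=image_input:1x100x100x3,template_input:1x50x50x3 \
  --optShapes=image_input:1x480x640x3,template_input:1x150x150x3 \
  --maxShapes=image_input:1x600x800x3,template_input:1x200x200x3 \
  --saveEngine=model_dynamic.trt
```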