Hello, I’m trying to run MoveNet models (Thunder and Lightning) with TensorRT. I converted the model from a TensorFlow SavedModel to ONNX, and then from ONNX to a TensorRT engine (using trtexec) on two platforms: a Jetson Nano (JetPack 4.6) and a Windows machine. The conversion succeeded on both platforms, and I then loaded the engine in a TensorRT (Python API) script.
The issue: when I run inference with the TensorRT engine, the outputs are wrong. The output confidence values are below 0.007, and the detected keypoints end up in the wrong positions when I draw them on the image.
To check the entire pipeline (preprocessing, model inference, and drawing the results on the image) I wrote two scripts. The first runs inference with the original TensorFlow model (reading the .pb); it works well: scores are above 0.8 and the keypoints are in the correct positions in the image.
The second script runs the same pipeline but with the ONNX Python library, loading the converted ONNX model. This one also works well (scores above 0.8 and correct keypoint positions). I don’t know why only the TensorRT model misbehaves, on both machines and with both APIs (Python and C++).
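To localize the problem, rather than eyeballing drawn keypoints I also diff the raw outputs of the two backends numerically on the exact same preprocessed tensor. A minimal sketch of the helper I use (the `compare_outputs` function is my own, not from any library; the [1, 1, 17, 3] output layout with (y, x, score) per keypoint is per the MoveNet model card):

```python
import numpy as np

def compare_outputs(onnx_out, trt_out, atol=1e-3):
    """Compare two [1, 1, 17, 3] MoveNet outputs elementwise.

    Returns (max_abs_diff, per_keypoint_score_diff) so you can see
    whether TensorRT diverges uniformly or only on some keypoints.
    """
    a = np.asarray(onnx_out, dtype=np.float32).reshape(17, 3)
    b = np.asarray(trt_out, dtype=np.float32).reshape(17, 3)
    max_diff = float(np.max(np.abs(a - b)))
    score_diff = np.abs(a[:, 2] - b[:, 2])  # column 2 is the confidence
    if max_diff > atol:
        print(f"backends disagree, max abs diff = {max_diff:.6f}")
    return max_diff, score_diff
```

If the difference is already huge on a constant input (e.g. an all-zeros tensor), the engine's input binding is almost certainly not receiving the data you think it is (wrong dtype, wrong buffer size, or a stale device buffer).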
To check whether it was a data-type problem, I implemented the same pipeline with the TensorRT C++ API. I can load the engine and print some properties (binding sizes, number of bindings) and everything looks fine, but again, on both machines the C++ inference is wrong (very low scores and wrongly drawn keypoints).
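One data-type failure mode worth ruling out explicitly: MoveNet's SavedModel takes an int32 input tensor, and if the built engine ends up with a float32 input binding while the host buffer still holds int32 pixel values (or vice versa), the bytes get reinterpreted rather than converted. That reinterpretation alone produces near-zero garbage of exactly the kind described above. A pure-NumPy sketch just to illustrate the hypothesis (this is my speculation about the cause, not a confirmed diagnosis):

```python
import numpy as np

# int32 pixel values 0..255, as MoveNet's SavedModel expects them
pixels_i32 = np.arange(256, dtype=np.int32)

# Reinterpret the same bytes as float32 -- what happens if the engine's
# input binding is float32 but the buffer was filled with int32 data.
reinterpreted = pixels_i32.view(np.float32)

# Every nonzero pixel becomes a denormal on the order of 1e-43,
# i.e. effectively zero -- consistent with scores below 0.007.
print(reinterpreted.max())
```

If this turns out to be the cause, the fix would be either to cast the host buffer to whatever dtype the engine reports for binding 0, or to change the ONNX model's input to float32 (adding the cast/normalization into the graph) before building the engine.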
TensorRT Version: 126.96.36.199
GPU Type: Jetson Nano (Tegra X1), and RTX 3070 Ti
Nvidia Driver Version: 516.94 in windows
CUDA Version: 10.2.300 in jetson, 11.2 in windows
CUDNN Version: 188.8.131.52 in both machines
Operating System + Version: Windows 10, Ubuntu 18.04
Python Version (if applicable): Python 3.9 (Windows) and Python 3.6 (Jetson)
Model: the TensorFlow model, the converted ONNX model, and the TensorRT engine built on my Windows machine
Converting the TensorFlow model with tf2onnx:
$ python -m tf2onnx.convert --opset 15 --saved-model (path) --output movenetThunder.onnx
and then converting the ONNX model to a TensorRT engine with trtexec:
$ trtexec.exe --onnx=model.onnx --saveEngine=output.plan
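For reference, this is roughly the preprocessing both of my working scripts use and that I also feed to the TensorRT engine. The nearest-neighbour resize is written in plain NumPy here just to keep the sketch dependency-free (my real scripts use an image library); per the TF Hub model card, Thunder expects a [1, 256, 256, 3] int32 RGB tensor with values 0–255, and Lightning expects [1, 192, 192, 3]:

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize of an HxWx3 image to size x size."""
    h, w = img.shape[:2]
    ys = (np.arange(size) * h) // size  # source row for each output row
    xs = (np.arange(size) * w) // size  # source col for each output col
    return img[ys][:, xs]

def preprocess(img_uint8, size=256):
    """uint8 HxWx3 RGB image -> [1, size, size, 3] int32 batch."""
    resized = resize_nearest(img_uint8, size)
    return resized[np.newaxis].astype(np.int32)

# Dummy frame standing in for a real camera image.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
inp = preprocess(frame, size=256)  # Thunder; use size=192 for Lightning
print(inp.shape, inp.dtype)  # (1, 256, 256, 3) int32
```

If the built engine reports a different input dtype or shape than this (trtexec --verbose prints the bindings during the build), that mismatch would be the first thing to fix before suspecting the kernels themselves.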