Hi, We have developed a TF-Keras model and converted its weights to TensorRT through ONNX. During the conversion we get a warning that weights are being downcast to INT32. The resulting TensorRT embeddings are completely different from the TensorFlow embeddings. I have attached the ONNX file here. Can you suggest a possible solution for this? We are using TensorRT 8.2.
sample.onnx (11.0 MB)
Hi,
Usually this is a harmless warning, since TensorRT does not support the INT64 data type.
It might affect some results, but the effect should be minor.
Did you get incorrect output after running inference with TensorRT?
Thanks.
Hi,
Thanks for the reply. We are getting completely different results. When we run in Colab, which has TensorRT 8.5, we get the same results as TensorFlow. But on the Nano, which uses TensorRT 8.2, we get wrong results. What might be the problem?
Thanks in advance
Hi,
We want to reproduce this issue to get more information.
Could you share the source code with us as well?
Thanks.
Hi,
Sorry for the late update.
Could you help to check the output of ONNXRuntime as well?
Please check if you can get the expected result with ONNXRuntime.
If not, there might be some issues when converting the TensorFlow model into the ONNX format.
Thanks.
Hi,
ONNXRuntime produces the same result as the TF-Keras model. However, we get errors when converting the ONNX model to a TensorRT engine.
Thanks in advance
Hi,
Could you also share the script that was used for testing ONNXRuntime with us?
Thanks.
Hi
We used the following script to test the ONNX model:

import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession("sample.onnx")
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
# dep is the input array prepared earlier (not shown)
pred_onx = sess.run([label_name], {input_name: dep.astype(np.float32)})[0]
Thanks
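To make the mismatch between the TensorFlow/ONNXRuntime output and the TensorRT output quantifiable rather than eyeballed, a small NumPy check can help. This is a minimal sketch; `compare_embeddings` and the synthetic arrays below are illustrative, not from the original thread:

```python
import numpy as np

def compare_embeddings(ref, test, atol=1e-3):
    """Return (max abs difference, cosine similarity, within-tolerance flag)."""
    ref = np.asarray(ref, dtype=np.float32).ravel()
    test = np.asarray(test, dtype=np.float32).ravel()
    max_diff = float(np.max(np.abs(ref - test)))
    cos = float(np.dot(ref, test) /
                (np.linalg.norm(ref) * np.linalg.norm(test)))
    return max_diff, cos, max_diff <= atol

# Synthetic stand-ins for the real embedding outputs:
ref = np.array([0.1, 0.2, 0.3], dtype=np.float32)
test = ref + 1e-4  # small perturbation, e.g. fp rounding
max_diff, cos, ok = compare_embeddings(ref, test)
```

A small `max_diff` with cosine similarity near 1.0 suggests harmless precision drift (such as the INT64-to-INT32 downcast warning); a large gap points to a real conversion or runtime bug like the one seen here on TensorRT 8.2.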
Hi,
We are trying to reproduce this issue in our environment.
Will share more information with you once we have made some progress.
Thanks.
Thank you… waiting for a positive reply.
Hi,
We can reproduce this issue and are checking with our internal team.
Will share more information with you once we get feedback.
Thanks.
Hi,
We have confirmed this issue is fixed in TensorRT 8.4.
Unfortunately, the Nano currently only has the TensorRT 8.2 release.
Thanks.
Hi
Thanks for the reply. So we cannot update to 8.4 to get the bug fix?
Hi,
For now, Jetson Nano is on TensorRT 8.2 with JetPack 4.
For TensorRT 8.4, please consider using JetPack 5 with the Xavier or Orin series.
Thanks.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.