Problem converting a TensorFlow model to an ONNX model on my Jetson Nano

I encountered an issue when using tf2onnx to convert a TensorFlow model to an ONNX model.


I suspect it is a TensorFlow version issue, but the official site only provides two TensorFlow builds for Python 3.6 on the Jetson Nano. Is there any way to solve this? Thank you.

The software versions are as follows:
JetPack: 4.6
TensorFlow: 2.5
tf2onnx: 1.14
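
For reference, this is roughly what I am doing (a minimal sketch; the paths and the model are placeholders for my actual setup):

```python
import tensorflow as tf
import tf2onnx

# Load the trained model; "./saved_model" stands in for my real model directory.
model = tf.keras.models.load_model("./saved_model")

# Describe the input signature so tf2onnx can trace the graph.
spec = (tf.TensorSpec(model.inputs[0].shape, tf.float32, name="input"),)

# Convert to ONNX; this is the step that fails on the Nano.
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="model.onnx")
```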

Hi,

Do you want to infer the model with TensorRT?
If yes, you don’t need to apply the conversion on the Jetson platform.

ONNX is a portable model format so you can copy the generated ONNX to Jetson after you convert it on a desktop environment.
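
For example, after copying the file over, you can parse it directly with TensorRT's Python API on the Jetson (a minimal sketch; "model.onnx" is a placeholder for your converted file):

```python
import tensorrt as trt

# Parse a copied ONNX file and report any layers TensorRT cannot handle.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```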

Thanks.

Thank you for your reply.
However, I want to implement online training, transformation, and inference on the Jetson platform. Is there a way to achieve this?
Thanks.

Hi,

Training takes a lot of resources, so it is more suitable for a desktop GPU.
If you prefer to do it on the Jetson, PyTorch might be a better framework, as it requires fewer resources to run.
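
For example, PyTorch can export to ONNX directly, so no separate converter is needed on the device (a minimal sketch with a toy network; your own model goes in its place):

```python
import torch
import torch.nn as nn

# Toy network standing in for whatever you train on the Jetson.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Dummy input matching the model's expected input shape.
dummy = torch.randn(1, 4)

# Export straight to ONNX from PyTorch itself.
torch.onnx.export(model, dummy, "model.onnx", opset_version=13,
                  input_names=["input"], output_names=["output"])
```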

Thanks.

Thank you for your suggestion.
But my problem now is not training. After training completes, I save the model in TensorFlow's SavedModel format and then try to convert it to ONNX, and that is where the error occurs; the error is shown in the picture I posted at the beginning. Do you have a solution?

Hi,

The conversion doesn’t need to be done on Jetson.
So you can convert the model to ONNX in a desktop environment, where compatible TensorFlow and tf2onnx versions are easier to find.

If the issue keeps going on, it’s recommended to check with the tf2onnx team to see if your model is supported.
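
Before contacting them, a quick structural check of the generated file can tell you whether the conversion produced a valid graph at all (a minimal sketch; "model.onnx" is a placeholder for your converted model):

```python
import onnx

# Load the converted file and run ONNX's built-in validity checks.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)

# Print the graph so unsupported or unexpected ops are easy to spot.
print(onnx.helper.printable_graph(model.graph))
```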

Thanks.
