I’ve developed and trained a VGG-UNet model on a custom dataset on Google Colab. After training I stored the model in h5 and pb formats, and now I would like to deploy it on my Jetson Xavier NX for real-time use. As far as I understand, I should convert the pb file into the UFF or TensorRT format and then run it on the Jetson. Unfortunately, I’m not able to perform this conversion; can you please help me with it? Are there any tutorials on how to perform the conversion and how to run the final UFF or TensorRT models on stored or streaming images?
TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
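For reference, the ONNX route (rather than the deprecated UFF route) usually looks like the two steps below. This is only a sketch: the model paths, output file names, and opset number are placeholders to adapt to your setup.

```shell
# 1. On the training machine: convert the TensorFlow SavedModel to ONNX
#    with tf2onnx (for an h5 file, tf2onnx also accepts --keras model.h5).
pip install tf2onnx
python -m tf2onnx.convert --saved-model ./saved_model_dir \
    --output vgg_unet.onnx --opset 13

# 2. On the Jetson: build a TensorRT engine from the ONNX file.
#    trtexec ships with JetPack under /usr/src/tensorrt/bin.
/usr/src/tensorrt/bin/trtexec --onnx=vgg_unet.onnx \
    --saveEngine=vgg_unet.engine --fp16
```

The resulting `.engine` file can then be loaded with the TensorRT runtime for inference on stored or streamed frames.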
Hi, thank you for reaching out to me.
I’m trying to follow the approach you suggested but I’m encountering a few issues in the Keras → ONNX conversion.
I followed the installation steps in the GitHub repo you indicated, and there seem to be some conflicts between the tensorflow-cpu-aws and onnx requirements on protobuf (e.g., tensorflow-cpu-aws requires protobuf<3.20,>=3.9.2, but if I install protobuf 3.19.0 it says that this version is not compatible with the one required by onnx, which is protobuf<4,>=3.20.2).
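For what it’s worth, those two requirement ranges are mutually exclusive, so no single protobuf version can satisfy both; a stdlib-only sketch of the check:

```python
# Sketch showing why the two protobuf constraints cannot both be satisfied:
# the two version ranges do not overlap at any point.
def parse(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower, upper):
    # inclusive lower bound, exclusive upper bound (pip-style ">=X,<Y")
    return parse(lower) <= parse(version) < parse(upper)

candidates = ["3.9.2", "3.19.0", "3.20.2", "3.20.3"]
ok_for_tf = [v for v in candidates if satisfies(v, "3.9.2", "3.20")]   # tensorflow-cpu-aws
ok_for_onnx = [v for v in candidates if satisfies(v, "3.20.2", "4")]   # onnx
print(sorted(set(ok_for_tf) & set(ok_for_onnx)))  # -> []
```

Because tensorflow-cpu-aws caps protobuf strictly below 3.20 while onnx needs at least 3.20.2, pip cannot resolve both in the same environment.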
Then, even if we skip this and keep the standard version of protobuf installed by the instructions in the GitHub repo (protobuf 3.20.3) and try to run the converter from the command line, it shows an attribute error for numpy (module ‘numpy’ has no attribute ‘typeDict’). I found that this happens because the name ‘typeDict’ is deprecated and only exists in older versions of numpy (not the one I have, numpy 1.24.0). So I downgraded numpy to 1.21, but then another error appears: RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xe.
Do you have any suggestions on how to solve this?
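In case it helps: the second error (API version 0x10 vs 0xe) typically means some already-installed package was compiled against a newer numpy than the one now present, so a common workaround is a fresh virtual environment where everything is installed in one shot. A hypothetical requirements file for that environment — the exact pins here are assumptions to verify against the repo’s instructions, not tested values:

```text
numpy<1.24
protobuf==3.20.3
tensorflow
tf2onnx
onnx
```

Installing all of these together in a clean venv lets pip pick mutually compatible builds instead of mixing previously installed versions.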
Since the pb/UFF flow is deprecated, please try the ONNX flow as mentioned above.
Did you train the model on the Jetson or in a desktop environment?
Since ONNX is a portable format, you can apply the conversion on an x86 environment.
Usually, it’s much easier to find a compatible software combination there.
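Once the ONNX file has been produced on the x86 machine, it can be sanity-checked there before copying it to the Jetson. A minimal sketch, assuming the exported file is named `model.onnx` and the `onnx` package is installed:

```python
import onnx

# Load the exported model and run ONNX's structural validation on it;
# check_model raises an exception if the graph is malformed.
model = onnx.load("model.onnx")
onnx.checker.check_model(model)

# The declared opsets tell you what TensorRT's ONNX parser must support.
print(model.opset_import)
```

If the check passes on x86, any remaining failure on the Jetson points at the TensorRT side (unsupported layer or opset) rather than at the export.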