Tutorials on how to deploy an AI model on Jetson Xavier NX starting from h5 or pb files

Description

Hi!
I’ve developed and trained a VGG-UNet model with a custom dataset on Google Colab. After training, I stored the model in h5 and pb formats, and now I would like to deploy it on my Jetson Xavier NX for real-time usage. As far as I understand, I should convert the pb file into UFF or TensorRT format and then run it on the Jetson. Unfortunately, I’m not able to perform this conversion. Can you please help me with it? Are there any tutorials on how to perform the conversion and how to run the final UFF or TensorRT models on stored or streaming images?

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,

You can try converting Keras → ONNX → TensorRT.
I hope the following documents may help you.
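
In case it helps, here is a minimal sketch of the Keras → ONNX step (assuming the tf2onnx package; the file names and input shape below are placeholders that you need to adapt to your model):

```python
# Sketch: convert a trained Keras .h5 model to ONNX with tf2onnx.
# "vgg_unet.h5", "vgg_unet.onnx" and the input shape are placeholders.
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("vgg_unet.h5", compile=False)

# Fix the input signature explicitly (batch, height, width, channels).
spec = (tf.TensorSpec((1, 416, 608, 3), tf.float32, name="input"),)
tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="vgg_unet.onnx"
)
```

On the Jetson, you can then build a TensorRT engine from the ONNX file with the trtexec tool shipped with JetPack, for example `/usr/src/tensorrt/bin/trtexec --onnx=vgg_unet.onnx --saveEngine=vgg_unet.engine --fp16`.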

Thank you.

Hi, thank you for your reply.
I’m trying to follow the approach you suggested, but I’m encountering a few issues with the Keras → ONNX conversion.

I followed the installation steps in the GitHub repo you indicated, and there seem to be conflicts between the protobuf requirements of tensorflow-cpu-aws and onnx: tensorflow-cpu-aws requires protobuf<3.20,>=3.9.2, but if I install protobuf 3.19.0, pip reports that this version is incompatible with the one required by onnx, which is protobuf<4,>=3.20.2.

Then, even if I ignore this and keep the protobuf version installed by the instructions in the GitHub repo (protobuf 3.20.3), running the converter from the command line raises a numpy AttributeError (module ‘numpy’ has no attribute ‘typeDict’). I found that this happens because the name ‘typeDict’ is deprecated and only exists in older versions of numpy (not the one I have, numpy 1.24.0). So I downgraded numpy to 1.21, but then another error appears: RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xe.
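
For reference, the numpy error itself can be reproduced outside the converter with a couple of lines (a minimal sketch; in my case the error is raised from inside the conversion tool):

```python
# Reproduces the AttributeError with numpy 1.24, where the long-deprecated
# alias numpy.typeDict has been removed.
import numpy as np

print(np.__version__)  # 1.24.0 in my environment
np.typeDict            # AttributeError: module 'numpy' has no attribute 'typeDict'
```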

Do you have any suggestion on how to solve this?
Thank you!

Hi,

We are moving this post to the Jetson Xavier NX forum to get better help on setup-related issues.

Thank you.

Hi,

Since the pb/UFF flow is deprecated, please try the ONNX flow mentioned above.

Did you train the model on the Jetson or in a desktop environment?
Since ONNX is a portable format, you can run the conversion in an x86 environment.
It’s usually much easier to find a compatible software combination there.
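
Once you have a TensorRT engine on the Jetson (for example built with trtexec from the ONNX file), inference in Python looks roughly like the sketch below. This is only an outline assuming a single input and a single output binding; the engine file name, shapes, and pre/post-processing are placeholders, and the binding API differs slightly between TensorRT versions:

```python
# Sketch: run a serialized TensorRT engine on one preprocessed image.
# "vgg_unet.engine" and the binding layout are placeholders.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("vgg_unet.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Page-locked host buffers and device buffers for input (binding 0) and output (binding 1).
h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream = cuda.Stream()

# h_input[:] = preprocessed_image.ravel()  # fill with your preprocessed frame
cuda.memcpy_htod_async(d_input, h_input, stream)
context.execute_async_v2(bindings=[int(d_input), int(d_output)], stream_handle=stream.handle)
cuda.memcpy_dtoh_async(h_output, d_output, stream)
stream.synchronize()
# h_output now holds the raw network output; reshape and post-process it as needed.
```

For streaming input, you would run this loop once per camera frame (e.g. grabbed with OpenCV or GStreamer) and reuse the same buffers.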

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.