Having real difficulty deploying my .pb TensorFlow model to my AGX Xavier for inference

Hello

I am having real difficulty understanding the workflow that comes after training a model in TensorFlow, i.e. how to get it deployed for inference on the AGX Xavier.
I currently have a trained model in .pb format and I’m looking to use the jetson.inference code for object detection with a live camera feed.

As a complete novice with this technology, my understanding of the steps required is as follows:

  • Freeze the TensorFlow .pb graph file and convert it to ONNX. I have managed to do this using the tf2onnx.convert command for Python (roughly as shown after this list), but I suspect I have incorrect input parameters because the resulting ONNX file is only 377 bytes.

  • The next stage is to create a TensorRT engine from the model using the built-in trtexec tool. Using my suspiciously small ONNX file, I am getting “Unsupported ONNX data type: UINT8 (2)” (the command I tried is also shown below).
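
For reference, the conversion command I ran looked roughly like this; the file names and the input/output node names are placeholders, because I wasn't sure what to use for my model (which may well be the problem):

    python3 -m tf2onnx.convert \
        --graphdef frozen_graph.pb \
        --inputs input:0 \
        --outputs output:0 \
        --output model.onnx \
        --opset 11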

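And for the TensorRT step I was trying something along these lines (again, the file names are just examples):

    /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
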
Thanks in advance for any support or advice.

Regards
Patrick

P.S. Sorry if I’ve got some of my terminology wrong above - as I said this is all new to me and I’ve only been working with it for a few weeks.

Hi,

1. A possible reason is that the input/output layers are set incorrectly.
You can check your model with the summarize_graph tool and pass the corresponding input/output names to tf2onnx when generating the ONNX model.
After generating the ONNX model, you can visualize it with https://netron.app/ to check that everything is correct.
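
For example, if you build the summarize_graph tool from the TensorFlow source, it can be run like this (the path to the frozen graph is just an example):

    bazel run tensorflow/tools/graph_transforms:summarize_graph -- --in_graph=/path/to/frozen_graph.pb

The reported input and output node names can then be passed to tf2onnx with --inputs and --outputs.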

2. This error is a known issue with the ONNX model.
ONNX, by default, uses UINT8 for image data in recent opset versions.
However, TensorRT requires a floating-point input type.

To solve this, please check the comment below for converting it with our graphsurgeon API (a rough sketch is also included here):
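
As a minimal sketch of that approach with the onnx-graphsurgeon Python package (the file names are placeholders), the idea is to rewrite the graph inputs from UINT8 to FP32 before running trtexec:

    import numpy as np
    import onnx
    import onnx_graphsurgeon as gs

    # Load the ONNX model into a graphsurgeon graph
    graph = gs.import_onnx(onnx.load("model.onnx"))

    # Change every graph input from UINT8 to FP32 so TensorRT can consume it
    for inp in graph.inputs:
        inp.dtype = np.float32

    # Save the updated model and pass this file to trtexec instead
    onnx.save(gs.export_onnx(graph), "model_fp32.onnx")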

Thanks.