Hello
I am having real difficulty understanding the workflow that takes a model trained in TensorFlow and gets it deployed for inference on the AGX Xavier.
I currently have a trained model in .pb format and I’m looking to use the jetson.inference code for object detection with a live camera feed.
As a complete novice with this technology, my understanding of the steps required is as follows:
- Freeze the TensorFlow .pb graph and convert it to ONNX. I have managed to do this using the tf2onnx.convert command for Python, but I suspect my input parameters are incorrect because the resulting ONNX file is only 377 bytes.
- Create a TensorRT engine from the ONNX model using the built-in trtexec command. Running it on my suspiciously small ONNX file, I get "Unsupported ONNX data type: UINT8 (2)".
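For reference, the conversion command I ran was roughly the following (the file names and tensor names below are placeholders, not my actual ones; I'm not sure I have the right values for --inputs and --outputs, which may be why the output file is so small):

```shell
# Convert a frozen TensorFlow graph to ONNX.
# --inputs / --outputs must name the real input and output tensors of the
# frozen graph (including the :0 suffix); wrong names can produce a
# near-empty ONNX file without an obvious error.
python -m tf2onnx.convert \
    --graphdef frozen_model.pb \
    --inputs input:0 \
    --outputs output:0 \
    --opset 11 \
    --output model.onnx
```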
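And this is roughly how I'm invoking trtexec on the Xavier (again, file names are placeholders; the path is where JetPack installs trtexec on my system):

```shell
# Build a TensorRT engine from the ONNX model.
# This is the step that fails with "Unsupported ONNX data type: UINT8 (2)",
# which I assume means the graph's input tensor is uint8 rather than float.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --fp16
```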
Thanks in advance for any support or advice.
Regards
Patrick
P.S. Sorry if I’ve got some of my terminology wrong above; as I said, this is all new to me and I’ve only been working with it for a few weeks.