How to convert saved_model to ONNX to run with Jetson Inference

Hi,

The ONNX model exported from your saved_model uses an INT8 input tensor, but TensorRT requires a float32 input buffer.
You can update the input data type with ONNX GraphSurgeon; please see the example below.
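
Here is a minimal sketch of that conversion, assuming the onnx and onnx-graphsurgeon packages are installed; the file names model.onnx and model_float32.onnx are placeholders for your own paths:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

# Load the exported ONNX model into a GraphSurgeon graph.
graph = gs.import_onnx(onnx.load("model.onnx"))

# Re-declare every graph input as float32 so TensorRT can bind a float buffer.
for inp in graph.inputs:
    inp.dtype = np.float32

# Remove dangling nodes, re-sort, and save the modified model.
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_float32.onnx")
```

If the exported graph already starts with a Cast node that converts the integer input to float, this change should still be safe, since Cast also accepts a float32 input.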

Thanks.