Inference Operator Supported Data Types

Is there a way to use int64 dtype tensors in ONNX or TensorRT models deployed for inference with Holoscan?

The Holoscan documentation for the Inference operator states that it currently supports float32, int32, and int8 input dtypes:
https://docs.nvidia.com/holoscan/sdk-user-guide/inference.html

However, many common operations used in ONNX-based models, such as Reshape and Resize, take int64 tensors as inputs, and these operations are listed as supported in TensorRT's docs on GitHub.
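
For context, here is a minimal sketch of the kind of check that shows where int64 tensors appear in a model (this assumes the `onnx` Python package is installed; "model.onnx" is a placeholder path):

```python
import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")

int64_tensors = []

# Graph inputs, outputs, and any annotated intermediate values.
for vi in list(model.graph.input) + list(model.graph.output) + list(model.graph.value_info):
    if vi.type.tensor_type.elem_type == TensorProto.INT64:
        int64_tensors.append(vi.name)

# Constant weights/attributes stored as initializers (e.g., Reshape shape inputs).
for init in model.graph.initializer:
    if init.data_type == TensorProto.INT64:
        int64_tensors.append(init.name)

print("INT64 tensors:", int64_tensors)
```

In the models in question, the int64 tensors reported this way are typically the shape/size inputs of ops like Reshape and Resize rather than the image-like data inputs.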

Is it possible to perform inference with an ONNX or TRT model containing int64 dtype tensors via a custom Inference operator built with the API described in the documentation? Or is there currently no way to use int64 dtype tensors in Holoscan?
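
In case it helps narrow things down, here is a minimal sketch of how one could check, outside of Holoscan, what dtypes TensorRT itself assigns to the model's I/O tensors after parsing (this assumes the `tensorrt` Python package; "model.onnx" is again a placeholder path):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse the ONNX model and report any parser errors (e.g., unsupported dtypes).
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

# Print the dtype TensorRT assigns to each network input and output.
for i in range(network.num_inputs):
    t = network.get_input(i)
    print("input:", t.name, t.dtype)
for i in range(network.num_outputs):
    t = network.get_output(i)
    print("output:", t.name, t.dtype)
```

If TensorRT parses the model cleanly, that would suggest the limitation is in the Holoscan Inference operator's supported dtype list rather than in TensorRT itself, which is what I am trying to confirm.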