[TensorRT] WARNING: onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While building a TensorRT engine from my SSD ONNX model, I get the warning shown above. The engine is created using the following snippet of code.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(model_path):
    # flags=1 creates the network with the EXPLICIT_BATCH flag set
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(flags=1) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30
        builder.max_batch_size = 1
        builder.fp16_mode = 1
        with open(model_path, "rb") as f:
            value = parser.parse(f.read())
            print("Parser: ", value)
        engine = builder.build_cuda_engine(network)
        return engine
I am using the above function to create my engine.
My ONNX model has float weights.
So:
- Why has my ONNX model been generated with INT64 weights?
- Would there be any loss in accuracy?
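On the second question, what I have tried so far: my understanding is that the INT64 tensors in an exported graph are usually shape/index constants (e.g. from Reshape or Gather) rather than the learned weights, so a quick numpy check (with illustrative values, not taken from my model) suggests the downcast is lossless as long as every value fits in the INT32 range:

```python
import numpy as np

# Shape-like constants such as these are the typical INT64 tensors in an
# exported graph; the values below are illustrative.
shape_const = np.array([1, 3, 300, 300], dtype=np.int64)

# Check that every value lies within the representable INT32 range.
int32_info = np.iinfo(np.int32)
fits = bool(shape_const.min() >= int32_info.min and
            shape_const.max() <= int32_info.max)
print("fits in INT32:", fits)

# Round-tripping through INT32 preserves the values when they fit.
round_trip = shape_const.astype(np.int32).astype(np.int64)
print("lossless:", bool(np.array_equal(shape_const, round_trip)))
```

Is this reasoning correct, or can the cast actually affect accuracy in some cases?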