## Description
I'd like to run inference with different input image sizes, e.g. [3,1024,1024] and [3,2048,2048], on an INT8-quantized TensorRT engine.
I'm trying to use `profile = builder.create_optimization_profile()` to set different input shapes for my model, but it does not work.
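This is roughly what my build script (`onnx_to_tensorrt.py`) does; a simplified sketch, not the exact code, with the min/opt/max shapes taken from the log below:

```python
def build_engine(onnx_path):
    """Simplified sketch of my engine build; names and details
    are illustrative, not the exact script."""
    import tensorrt as trt  # requires a TensorRT installation

    logger = trt.Logger(trt.Logger.ERROR)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        parser.parse(f.read())

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    config.set_flag(trt.BuilderFlag.INT8)

    # Optimization profile covering the input sizes I want to use
    profile = builder.create_optimization_profile()
    profile.set_shape("Input_0",
                      min=(1, 3, 512, 512),
                      opt=(1, 3, 1024, 1024),
                      max=(1, 3, 2048, 2048))
    config.add_optimization_profile(profile)

    # Returns None on failure, which is what I'm seeing
    return builder.build_engine(network, config)
```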
The error I get is shown below:

```
2023-01-19 19:16:09 - __main__ - INFO - TRT_LOGGER Verbosity: Severity.ERROR
2023-01-19 19:16:14 - __main__ - INFO - Setting BuilderFlag.FP16
2023-01-19 19:16:14 - __main__ - INFO - Setting BuilderFlag.INT8
2023-01-19 19:16:14 - __main__ - DEBUG - === Network Description ===
2023-01-19 19:16:14 - __main__ - DEBUG - Input 0 | Name: Input_0 | Shape: (1, 3, 2048, 2048)
2023-01-19 19:16:14 - __main__ - DEBUG - Output 0 | Name: Convolution2DFunction_1 | Shape: (1, 256, 255, 255)
2023-01-19 19:16:14 - __main__ - DEBUG - === Optimization Profiles ===
2023-01-19 19:16:14 - __main__ - DEBUG - Input_0 - OptProfile 0 - Min (1, 3, 512, 512) Opt (1, 3, 1024, 1024) Max (1, 3, 2048, 2048)
2023-01-19 19:16:15 - ImagenetCalibrator - INFO - Collecting calibration files from: /mnt/d/calibration/imagereal_20/
2023-01-19 19:16:15 - ImagenetCalibrator - INFO - Number of Calibration Files found: 20
2023-01-19 19:16:15 - __main__ - INFO - Building Engine...
/mnt/c/Users/onnx_to_tensorrt.py:280: DeprecationWarning: Use build_serialized_network instead.
engine = builder.build_engine(network, config)
[TensorRT] ERROR: 4: [network.cpp::operator()::2736] Error Code 4: Internal Error (Input_0: kMIN dimensions in profile 0 are [1,3,512,512] but input has static dimensions [1,3,2048,2048].)
2023-01-19 19:16:15 - __main__ - INFO - Serializing engine to file: test.int8.engine
Traceback (most recent call last):
File "/mnt/c/Users/onnx_to_tensorrt.py", line 293, in <module>
main()
File "/mnt/c/Users/onnx_to_tensorrt.py", line 283, in main
f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
```
If someone knows the proper settings, please let me know.
Best regards
## Environment
**TensorRT Version**:
**GPU Type**:
**Nvidia Driver Version**:
**CUDA Version**:
**CUDNN Version**:
**Operating System + Version**:
**Python Version (if applicable)**:
**TensorFlow Version (if applicable)**:
**PyTorch Version (if applicable)**:
**Baremetal or Container (if container which image + tag)**:
## Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
## Steps To Reproduce
<!-- Craft a minimal bug report following this guide - https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports -->
Please include:
* Exact steps/commands to build your repro
* Exact steps/commands to run your repro
* Full traceback of errors encountered