Description
I am using the TensorRT 8.0 quantization toolkit (pytorch_quantization) with post-training quantization (PTQ).
The quantization settings are as follows:
#######################################################################################
quant_desc_input = QuantDescriptor(calib_method='histogram', unsigned=True)
quant_desc_weight = QuantDescriptor(unsigned=True)
quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantConv2d.set_default_quant_desc_weight(quant_desc_weight)
#######################################################################################
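For reference, `unsigned=True` makes the quantizer map `[0, amax]` onto `[0, 255]` for 8 bits, while the signed (default, narrow-range) mode maps `[-amax, amax]` onto `[-127, 127]`. The following is a minimal pure-Python sketch of that scale/round/clamp scheme, not pytorch_quantization's actual implementation:

```python
def fake_quant(x, amax, num_bits=8, unsigned=False):
    """Toy symmetric fake-quantization of a single value.

    Illustrative sketch only; mirrors the general scheme used by
    PTQ toolkits, not the library's real code.
    """
    if unsigned:
        if x < 0:
            # pytorch_quantization raises exactly this message in unsigned mode
            raise TypeError("Negative values encountered in unsigned quantization.")
        max_bound = 2 ** num_bits - 1        # 255 for 8-bit
        min_bound = 0
    else:
        max_bound = 2 ** (num_bits - 1) - 1  # 127 for 8-bit
        min_bound = -max_bound               # narrow range: -127

    scale = max_bound / amax
    q = round(x * scale)
    q = max(min_bound, min(max_bound, q))    # clamp to representable range
    return q / scale                          # dequantize back to float
```

Note the asymmetry: a signed quantizer handles negative values fine, while the unsigned one rejects them outright rather than silently clamping to zero.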
After calibration, I obtained the corresponding amax values.
Then I wanted to test the model's accuracy, but I got the following error:
#######################################################################################
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tiger/.vscode-server/extensions/ms-python.python-2021.9.1246542782/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
    cli.main()
  File "/home/tiger/.vscode-server/extensions/ms-python.python-2021.9.1246542782/pythonFiles/lib/python/debugpy/…/debugpy/server/cli.py", line 444, in main
    run()
  File "/home/tiger/.vscode-server/extensions/ms-python.python-2021.9.1246542782/pythonFiles/lib/python/debugpy/…/debugpy/server/cli.py", line 285, in run_file
    runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
  File "/usr/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tiger/llq/BasicSR-master/4_quantize_test_frame_rgb.py", line 111, in <module>
    output = model(img_LR).data.squeeze().float().cpu().clamp(0, 1).numpy()
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/tiger/llq/BasicSR-master/4_quantize_test_frame_rgb.py", line 60, in forward
    fea = self.conv_first(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/quant_conv.py", line 119, in forward
    quant_input, quant_weight = self._quant(input)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/quant_conv.py", line 85, in _quant
    quant_weight = self._weight_quantizer(self.weight)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/tensor_quantizer.py", line 345, in forward
    outputs = self._quant_forward(inputs)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/tensor_quantizer.py", line 309, in _quant_forward
    outputs = fake_tensor_quant(inputs, amax, self._num_bits, self._unsigned, self._narrow_range)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/tensor_quant.py", line 305, in forward
    outputs, scale = _tensor_quant(inputs, amax, num_bits, unsigned, narrow_range)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/tensor_quant.py", line 326, in _tensor_quant
    raise TypeError("Negative values encountered in unsigned quantization.")
TypeError: Negative values encountered in unsigned quantization.
#######################################################################################
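Note which frame raises: in quant_conv.py line 85 the call is `self._weight_quantizer(self.weight)`, so the exception comes from quantizing the layer's weights, not its input. Trained conv weights are roughly zero-centered and almost always contain negative entries, which an unsigned quantizer rejects. A toy illustration (the weight values below are made up):

```python
# Made-up, zero-centered values standing in for a trained Conv2d
# weight tensor; any realistic kernel will contain negatives.
weights = [0.31, -0.12, 0.05, -0.27, 0.18, -0.04]

negatives = [w for w in weights if w < 0]

# This is the condition that pytorch_quantization's _tensor_quant
# guards against when the quantizer is configured as unsigned.
if negatives:
    error = TypeError("Negative values encountered in unsigned quantization.")
```

So even with a strictly positive input tensor, an `unsigned=True` weight descriptor will trip this check on the very first quantized conv layer.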
My quantization settings are:
quant_desc_input = QuantDescriptor(num_bits=8, unsigned=True)
quant_desc_weight = QuantDescriptor(num_bits=8, unsigned=True)
quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantConv2d.set_default_quant_desc_weight(quant_desc_weight)
I debugged the error and found that it occurs during the network's forward pass. However, when I checked the input, all of its values were positive. Why does this error still occur?
Environment
TensorRT Version: 8.0.2
GPU Type: V100 (32 GB)
Nvidia Driver Version: 450.80.02
CUDA Version: 11.1
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7.3
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0
Baremetal or Container (if container which image + tag):
Relevant Files
Steps To Reproduce