Negative values encountered in unsigned quantization

Description

I use the TensorRT 8.0 quantization toolkit (pytorch-quantization) in my model. I only employed its PTQ (post-training quantization).
The quantization settings are as follows:
#######################################################################################
quant_desc_input = QuantDescriptor(calib_method='histogram', unsigned=True)
quant_desc_weight = QuantDescriptor(unsigned=True)
quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantConv2d.set_default_quant_desc_weight(quant_desc_weight)
#######################################################################################
After calibration, I got the corresponding amax.
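For context, the calibration roughly followed the standard pytorch-quantization flow, along these lines (a sketch only; `data_loader` and `num_batches` are placeholder names, and the loader is assumed to yield image tensors):
#######################################################################################
import torch
from pytorch_quantization import calib
from pytorch_quantization import nn as quant_nn

def calibrate(model, data_loader, num_batches=16):
    # Turn on calibrators and turn off quantization while statistics are collected.
    for module in model.modules():
        if isinstance(module, quant_nn.TensorQuantizer):
            if module._calibrator is not None:
                module.disable_quant()
                module.enable_calib()
            else:
                module.disable()

    # Feed a few batches through the model to collect histograms.
    with torch.no_grad():
        for i, img in enumerate(data_loader):
            model(img.cuda())
            if i >= num_batches:
                break

    # Compute amax from the collected statistics and switch quantization back on.
    for module in model.modules():
        if isinstance(module, quant_nn.TensorQuantizer):
            if module._calibrator is not None:
                if isinstance(module._calibrator, calib.MaxCalibrator):
                    module.load_calib_amax()
                else:
                    module.load_calib_amax("percentile", percentile=99.99)
                module.enable_quant()
                module.disable_calib()
            else:
                module.enable()
#######################################################################################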
Then I wanted to test the performance, but I got the following error:
#######################################################################################
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
File "/home/tiger/.vscode-server/extensions/ms-python.python-2021.9.1246542782/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
    cli.main()
File "/home/tiger/.vscode-server/extensions/ms-python.python-2021.9.1246542782/pythonFiles/lib/python/debugpy/…/debugpy/server/cli.py", line 444, in main
    run()
File "/home/tiger/.vscode-server/extensions/ms-python.python-2021.9.1246542782/pythonFiles/lib/python/debugpy/…/debugpy/server/cli.py", line 285, in run_file
    runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
File "/usr/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
File "/home/tiger/llq/BasicSR-master/4_quantize_test_frame_rgb.py", line 111, in <module>
    output = model(img_LR).data.squeeze().float().cpu().clamp(0, 1).numpy()
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
File "/home/tiger/llq/BasicSR-master/4_quantize_test_frame_rgb.py", line 60, in forward
    fea = self.conv_first(x)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/quant_conv.py", line 119, in forward
    quant_input, quant_weight = self._quant(input)
File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/quant_conv.py", line 85, in _quant
    quant_weight = self._weight_quantizer(self.weight)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/tensor_quantizer.py", line 345, in forward
    outputs = self._quant_forward(inputs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/nn/modules/tensor_quantizer.py", line 309, in _quant_forward
    outputs = fake_tensor_quant(inputs, amax, self._num_bits, self._unsigned, self._narrow_range)
File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/tensor_quant.py", line 305, in forward
    outputs, scale = _tensor_quant(inputs, amax, num_bits, unsigned, narrow_range)
File "/usr/local/lib/python3.7/dist-packages/pytorch_quantization-2.1.0-py3.7-linux-x86_64.egg/pytorch_quantization/tensor_quant.py", line 326, in _tensor_quant
    raise TypeError("Negative values encountered in unsigned quantization.")
TypeError: Negative values encountered in unsigned quantization.
#######################################################################################

My quantization settings are:
#######################################################################################
quant_desc_input = QuantDescriptor(num_bits=8, unsigned=True)
quant_desc_weight = QuantDescriptor(num_bits=8, unsigned=True)
quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantConv2d.set_default_quant_desc_weight(quant_desc_weight)
#######################################################################################

I have debugged the error and found that it occurs in the network's forward pass. But I checked the input, and all the input values are positive, so I don't understand why this happens.
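For reference, a quick way to check where the negative values could come from (a sketch using the names from the traceback above): the failing frame is the weight quantizer, `self._weight_quantizer(self.weight)`, not the input quantizer, so it is the weights rather than the activations that need inspecting.
#######################################################################################
# The failing frame quantizes self.weight, so check the weights, not the inputs.
w = model.conv_first.weight
print("weight min:", w.min().item())      # almost certainly negative for a trained conv
print("input min:", img_LR.min().item())  # non-negative for image data in [0, 1]
#######################################################################################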

Environment

TensorRT Version: 8.0.2
GPU Type: V100 (32GB)
Nvidia Driver Version: 450.80.02
CUDA Version: 11.1
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7.3
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0
Baremetal or Container (if container which image + tag):

Relevant Files



Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi, the UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, hence we request you to try the ONNX parser.
Please check the link below for the same.

Thanks!

Hi, thanks for your reply.
However, I don't think this is a problem with the UFF or Caffe parser, or even with the ONNX parser.
In fact, I have not carried out that step yet; I am only testing the quantized network's inference in PyTorch.

It seems the pytorch-quantization tool does not support unsigned quantization well.
I have found the reason. The following is an example:
If a weight tensor has both negative and positive values and we use unsigned quantization, the tool calls something like
"quant_weight = self._weight_quantizer(self.weight)", and this line calls
"if self._if_quant: outputs = self._quant_forward(inputs)", which in turn calls
"outputs = fake_tensor_quant(inputs, amax, self._num_bits, self._unsigned, self._narrow_range)", which finally calls:
#######################################################################################
def _tensor_quant(inputs, amax, num_bits=8, unsigned=False, narrow_range=True):
    """Shared function body between TensorQuantFunction and FakeTensorQuantFunction"""
    # Fine scale, per channel scale will be handled by broadcasting, which could be tricky. Pop a warning.
    if isinstance(amax, torch.Tensor) and inputs.dim() != amax.dim():
        logging.debug("amax %s has different shape than inputs %s. Make sure broadcast works as expected!",
                      amax.size(), inputs.size())

    logging.debug("{} bits quantization on shape {} tensor.".format(num_bits, inputs.size()))

    if unsigned:
        if inputs.min() < 0.:
            raise TypeError("Negative values encountered in unsigned quantization.")

    # Computation must be in FP32 to prevent potential overflow.
    input_dtype = inputs.dtype
    if inputs.dtype == torch.half:
        inputs = inputs.float()
    if amax.dtype == torch.half:
        amax = amax.float()

    min_amax = amax.min()
    if min_amax < 0:
        raise ValueError("Negative values in amax")

    max_bound = torch.tensor((2.0**(num_bits - 1 + int(unsigned))) - 1.0, device=amax.device)
    if unsigned:
        min_bound = 0
    elif narrow_range:
        min_bound = -max_bound
    else:
        min_bound = -max_bound - 1
    scale = max_bound / amax

    epsilon = 1. / (1 << 24)
    if min_amax <= epsilon:  # Treat amax smaller than minimum representable of fp16 as 0
        zero_amax_mask = (amax <= epsilon)
        scale[zero_amax_mask] = 0  # Value quantized with amax=0 should all be 0

    outputs = torch.clamp((inputs * scale).round_(), min_bound, max_bound)

    if min_amax <= epsilon:
        scale[zero_amax_mask] = 1.  # Return 1 makes more sense for values quantized to 0 with amax=0

    if input_dtype == torch.half:
        outputs = outputs.half()

    return outputs, scale
#######################################################################################
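For concreteness, here is a small worked check of the `max_bound`/`min_bound` formulas above for `num_bits=8` (a standalone sketch, not part of the library code):
#######################################################################################
num_bits = 8
for unsigned, narrow_range in [(True, True), (False, True), (False, False)]:
    max_bound = 2.0**(num_bits - 1 + int(unsigned)) - 1.0
    if unsigned:
        min_bound = 0.0
    elif narrow_range:
        min_bound = -max_bound
    else:
        min_bound = -max_bound - 1
    print(unsigned, narrow_range, min_bound, max_bound)

# unsigned:             quantized range [0, 255]
# signed, narrow_range: quantized range [-127, 127]
# signed, full range:   quantized range [-128, 127]
#######################################################################################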

But the input weight tensor does have negative values, so this check raises the error. This does not seem right: if a weight tensor has any negative values, it simply cannot be quantized under this setting. Could you please look into this?

Hi,

We are looking into this issue, please allow us sometime to get back on this.

Thank you.

Hi @sjtu_llq,

Thank you for letting us know this. We will work on this issue.

Hi,

It is by design that TensorRT throws an error when an unsigned quant descriptor is applied to weights with negative values.

Thank you.

Thank you for your reply.
I understand that, but I think this should be fixed, since the weights of the first layer will almost certainly contain negative values.

Hi,

When the weights have negative values, please use signed quantization. Otherwise, all negative values will be clipped to 0, and we cannot guarantee accuracy because of this clipping error.
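For example, a configuration along these lines (a sketch, assuming image inputs in [0, 1]): only the input descriptor keeps `unsigned=True`, and the weight descriptor is left at the signed default.
#######################################################################################
from pytorch_quantization import nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor

# Unsigned is safe for non-negative activations such as images in [0, 1].
quant_desc_input = QuantDescriptor(num_bits=8, calib_method='histogram', unsigned=True)
# Weights contain negative values, so keep the default signed descriptor.
quant_desc_weight = QuantDescriptor(num_bits=8)

quant_nn.QuantConv2d.set_default_quant_desc_input(quant_desc_input)
quant_nn.QuantConv2d.set_default_quant_desc_weight(quant_desc_weight)
#######################################################################################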

Thank you.