TensorRT 3: INT8 on GTX 1060

I’m trying to create a TensorRT engine from a UFF model converted from a simple VGG19 TensorFlow graph. It works fine with datatype=trt.infer.DataType.FP32, but throws the following error with datatype=trt.infer.DataType.INT8:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-13-116d08460467> in <module>()
      6                                      max_batch_size=1,
      7                                      max_workspace_size= 1 << 30,
----> 8                                      datatype=trt.infer.DataType.INT8) 
      9 
     10 context = engine.create_execution_context()

/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py in uff_to_trt_engine(logger, stream, parser, max_batch_size, max_workspace_size, datatype, plugin_factory, calibrator)
    180     if datatype == infer.DataType.INT8 and calibrator == None:
    181         logger.log(tensorrt.infer.LogSeverity.ERROR, "Specified INT8 but no calibrator provided")
--> 182         raise AttributeError("Specified INT8 but no calibrator provided")
    183 
    184     try:

AttributeError: Specified INT8 but no calibrator provided
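
For reference, the FP32 build that works is essentially the same call (G_LOGGER, uff_model, and parser are just my local names for the logger, the serialized UFF model, and the UFF parser):

    engine = trt.utils.uff_to_trt_engine(G_LOGGER,
                                         uff_model,
                                         parser,
                                         max_batch_size=1,
                                         max_workspace_size=1 << 30,
                                         datatype=trt.infer.DataType.FP32)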

The only similar issue I found is https://devtalk.nvidia.com/default/topic/1025899/jetson-tx2/tensorrt-and-tensorflow-convert-to-uff-failed/2#. But that one occurred on a Jetson TX2, whereas I have a GTX 1060, whose Pascal architecture is supposed to support INT8.

Should I manually provide a calibrator for this somehow?
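
From the signature in the traceback it looks like uff_to_trt_engine accepts a calibrator argument. Piecing things together from the INT8 sample that ships with TensorRT, I imagine something like the sketch below, though I’m not sure the base class trt.infer.EntropyCalibrator or the get_batch contract is exactly right. Here calibration_batches would be a list of preprocessed NCHW float32 numpy arrays I’d have to prepare myself:

    import numpy as np
    import pycuda.driver as cuda
    import pycuda.autoinit
    import tensorrt as trt

    class VGGEntropyCalibrator(trt.infer.EntropyCalibrator):
        # Feeds preprocessed batches so TensorRT can collect activation
        # statistics and pick per-tensor INT8 scaling factors.
        def __init__(self, batches, batch_size):
            trt.infer.EntropyCalibrator.__init__(self)
            self.batches = batches            # list of NCHW float32 arrays
            self.index = 0
            self.batch_size = batch_size
            self.d_input = cuda.mem_alloc(batches[0].nbytes)

        def get_batch_size(self):
            return self.batch_size

        def get_batch(self, bindings, names):
            if self.index >= len(self.batches):
                return None                   # no more data: calibration done
            batch = np.ascontiguousarray(self.batches[self.index])
            self.index += 1
            cuda.memcpy_htod(self.d_input, batch)
            return [int(self.d_input)]

        def read_calibration_cache(self, length):
            return None                       # no cached scales, recalibrate

        def write_calibration_cache(self, ptr, size):
            pass                              # not persisting the cache yet

    calibrator = VGGEntropyCalibrator(calibration_batches, batch_size=1)
    engine = trt.utils.uff_to_trt_engine(G_LOGGER,
                                         uff_model,
                                         parser,
                                         max_batch_size=1,
                                         max_workspace_size=1 << 30,
                                         datatype=trt.infer.DataType.INT8,
                                         calibrator=calibrator)

Is that roughly the expected shape, or is there a built-in calibrator I should be using instead of writing my own?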