I am currently working on a Variational Autoencoder that is fully trained in fp32 precision.
Calibrating the entire network in int8 with the "IInt8EntropyCalibrator2" works great.
However, there are some small problems with the output precision. These can be solved by running the sampling layer at a higher precision (I verified this by using the int8 calibration and setting the dynamic range of that layer manually). Can this be done in one step with a calibrator?
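For context, the manual workaround amounts to deriving a dynamic range from observed activations and assigning it to the sampling layer's output tensor. Below is a minimal sketch of that computation in plain Python/NumPy; the `sample_activations` values and the symmetric-range convention are illustrative assumptions, and in TensorRT the resulting range would then be applied with `ITensor.set_dynamic_range(-amax, amax)`:

```python
import numpy as np

def symmetric_dynamic_range(activations):
    """Compute a symmetric (min, max) dynamic range from observed
    activations, as one would set manually on a tensor."""
    amax = float(np.abs(activations).max())
    return (-amax, amax)

# Hypothetical calibration activations for the sampling layer
sample_activations = np.array([-3.2, 0.5, 2.8, -1.1, 4.0])
dr = symmetric_dynamic_range(sample_activations)

# The int8 scale implied by this range is amax / 127
scale = dr[1] / 127.0
```

This only sketches the scale math; the entropy calibrator would pick the range differently (by minimizing KL divergence rather than taking the raw max).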
My question is really about the calibrator: is it capable of handling mixed precision?
The builder cannot build the engine with the calibrator and mixed precision at the same time. The mixed precision is set up with the following code:
```python
# Layers 70 up to 99 should run in float32 precision
layers_with_high_precision = list(range(70, 100))

# Go through all the layers
for i in range(network.num_layers):
    layer = network[i]
    print('Layer: ' + str(i) + '\t Name: ' + str(layer.name)
          + '\t\t Precision: ' + str(layer.precision)
          + '\t Type: ' + str(layer.type))
    if i in layers_with_high_precision:
        # Specify the layer precision (run certain layers at a specific
        # precision). This gives the layer's inputs and outputs a
        # preferred type.
        layer.precision = trt.float32
        # You can choose a different preferred type for an output of a
        # layer: set the output tensor data type to conform with the
        # layer implementation.
        if i + 1 not in layers_with_high_precision:  # next layer is int8
            for j in range(layer.num_outputs):
                layer.set_output_type(j, trt.int8)
    else:
        layer.precision = trt.int8
        if i + 1 in layers_with_high_precision:  # next layer is float32
            for j in range(layer.num_outputs):
                layer.set_output_type(j, trt.float32)

builder.strict_type_constraints = True
```
Do I have to do the calibration by hand instead?
Can the calibrator also calibrate a mixed-precision network, or is this not possible?
The documentation is not clear about that.