UFFParser: Parser error: dense_7/kernel: Invalid weights types when converted. Trying to convert from FP32 To INT8
I thought this might be a problem with my model, but I have tried many TensorFlow frozen graphs (*.pb), converted each with convert-to-uff, and hit the same issue with all of them. FLOAT and HALF precision inference work without a problem, though.
You’ll need to calibrate the int8 input first. INT8 engines are built from 32-bit network definitions and require significantly more investment than building a 32-bit or 16-bit engine. In particular, the TensorRT builder must perform a process called calibration to determine how best to represent the weights and activations as 8-bit integers.
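For reference, the calibration path looks roughly like this against the TensorRT 5-era C++ API (a sketch only; `MyCalibrator`, the batch size, and the batch-feeding logic are placeholders, not a working calibrator):

```cpp
#include "NvInfer.h"

// Sketch of an entropy calibrator. getBatch() must copy a batch of
// representative input data to device memory and return true until the
// calibration set is exhausted; TensorRT then derives INT8 scales from
// the observed activation histograms.
class MyCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    int getBatchSize() const override { return 8; }   // placeholder batch size
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        return false;  // placeholder: supply real calibration batches here
    }
    const void* readCalibrationCache(size_t& length) override { return nullptr; }
    void writeCalibrationCache(const void* cache, size_t length) override {}
};

// During engine construction (TensorRT 5-style builder API):
//   MyCalibrator calibrator;
//   builder->setInt8Mode(true);
//   builder->setInt8Calibrator(&calibrator);
```

The key point is that calibration happens at engine-build time, on a network that was parsed at full precision.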
Yes, I understand that, but the problem occurs while loading the UFF model, not during inference.
I am actually following the sampleInt8API example, which avoids the calibration step.
Parsing the UFF model fails before the code that sets layer precisions etc. even runs. I also tried moving that part of the code before the network is parsed, to no avail.
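For what it's worth, in the sampleInt8API flow the UFF graph is always parsed as FP32; INT8 only enters afterwards, through builder flags and per-layer precisions. My suspicion (an assumption, not confirmed here) is that passing `DataType::kINT8` as the parse type triggers exactly this weight-conversion error, since the parser then tries to convert the FP32 weights directly. A sketch against the TensorRT 5-style API (file, tensor names, and dimensions are placeholders):

```cpp
// Parse the UFF file as FP32. Note the kFLOAT parse type: requesting
// kINT8 here is not valid (assumed to be the cause of the error above).
parser->registerInput("input", nvinfer1::Dims3(3, 224, 224),
                      nvuffparser::UffInputOrder::kNCHW);
parser->registerOutput("output");
parser->parse("model.uff", *network, nvinfer1::DataType::kFLOAT);

// INT8 is requested on the builder, after parsing succeeds.
builder->setInt8Mode(true);
builder->setInt8Calibrator(nullptr);       // sampleInt8API skips calibration
builder->setStrictTypeConstraints(true);
for (int i = 0; i < network->getNbLayers(); ++i)
{
    nvinfer1::ILayer* layer = network->getLayer(i);
    layer->setPrecision(nvinfer1::DataType::kINT8);
    for (int j = 0; j < layer->getNbOutputs(); ++j)
        layer->setOutputType(j, nvinfer1::DataType::kINT8);
}
```

If you are already parsing with `kFLOAT` and still see the error, the problem is elsewhere, but this is the first thing I would check.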