Hello,
I'm facing a strange problem that I can't get solved.
Workflow:
-My colleague trained a model and performed an int8 calibration in Python. The model has a Sigmoid activation function at the output (roughly speaking, it's a U-Net).
-He hands me the exported model as a *.uff file and a separate file with the calibration table.
-I successfully parsed the network with TensorRT in a C++ environment. The model works fine in float32 mode.
-When I activate int8 mode, I get an error message once calibration is done:
misc error nvinfer1::builder could not find scales for tensor conv22/Sigmoid_HL_41
-I checked the calibration table file and noticed that this tensor has a different name there (conv22/Sigmoid_HL_).
-I renamed the tensor in the calibration table to conv22/Sigmoid_HL_41.
-After that, parsing, calibration, and inference work like a charm in my C++/TensorRT environment.
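Incidentally, the manual rename step can be scripted instead of editing the file by hand. A minimal sketch in Python, assuming the calibration cache is the usual plain-text format (a version header line followed by one `name: hexscale` line per tensor); the file path in the example is hypothetical:

```python
def patch_calibration_table(path, old_name, new_name):
    """Rename one tensor entry in a TensorRT calibration cache file.

    Assumes the cache is plain text: a header line plus one
    'tensor_name: hex_scale' line per tensor (format assumed here).
    """
    with open(path) as f:
        lines = f.read().splitlines()

    patched = []
    for line in lines:
        name, sep, scale = line.partition(":")
        # Only rewrite the line whose tensor name matches exactly;
        # header lines without a colon pass through untouched.
        if sep and name.strip() == old_name:
            line = new_name + ":" + scale
        patched.append(line)

    with open(path, "w") as f:
        f.write("\n".join(patched) + "\n")

# Example, with the names from the error message above:
# patch_calibration_table("calib_table.txt",
#                         "conv22/Sigmoid_HL_", "conv22/Sigmoid_HL_41")
```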
So that's the first part of the problem: why does the tensor's name differ between my parsed model and the calibration table my colleague generated?
-I tested a few more things and noticed the following:
When I parse the model a second time without closing the program in between (with the manually modified calibration table), I run into the same error as above. TensorRT now expects a different tensor name than “conv22/Sigmoid_HL_41” (now: conv22/Sigmoid_HL_18467).
-I noticed that on the second parse within the same program run, the tensor name is always “conv22/Sigmoid_HL_18467”, regardless of the model, the *.uff file, and the calibration table.
Why does the tensor name vary within a single run of the program? How can I prevent this and get a constant name for the tensor?
Thanks for your replies; hopefully someone can help me :)
Edit: I’m using TensorRT 5.1.5.0