In my int8 calibration I use images with a range of roughly 0 to 500 and a discretization of ~0.1. After the int8 calibration I get very poor results. Can this be caused by the large value range of my input?

500 / 0.1 = 5000 distinct values, which is far more than the ~256 you have when using int8 images as input.

Well, you can’t just simply squeeze 5000 discrete states into 8 bits (= 256 discrete states) and expect great results. Maybe some pre-processing could help compress the data more intelligently. In general, the “best” method of lossy compression will be highly dependent on the use case.
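To make the loss concrete, here is a small sketch (my own illustration, not from the original posts) of what naive linear quantization of a 0-500 range to 256 levels does to data with ~0.1 resolution:

```python
import numpy as np

# Toy signal spanning ~0..500 with a fine step of ~0.1
x = np.arange(0.0, 500.0, 0.1)

# Naive linear quantization to 256 levels (8 bits)
step = 500.0 / 255.0                # ~1.96 units per level
q = np.round(x / step) * step       # quantize, then dequantize

# Worst-case rounding error is half a step (~0.98), i.e. roughly
# 10x coarser than the original resolution of 0.1 -- any structure
# finer than ~1 unit is simply gone.
max_err = np.max(np.abs(x - q))
print(max_err)
```

So a uniform 8-bit mapping discards about a factor of 20 in resolution (5000 states down to 256), which is why non-uniform (e.g. logarithmic) pre-compression can help if most of the useful detail lives in part of the range.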

One algorithm I am aware of that compresses 14-bit samples into 8 bits is mu-law (https://en.wikipedia.org/wiki/%CE%9C-law_algorithm), but I have absolutely no idea whether this algorithm originally used for compressing audio data would be suitable for image processing at all.
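For reference, a minimal mu-law companding sketch in NumPy, adapted here to image-like data by first normalizing to [0, 1] (the normalization constant 500 and the step of 0.1 are assumptions matching the numbers in this thread; mu-law itself is defined for signals in [-1, 1]):

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """Mu-law compression of values in [0, 1]: log-spaced output."""
    return np.log1p(mu * x) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of mu_law_compress."""
    return np.expm1(y * np.log1p(mu)) / mu

# Hypothetical image data with range ~0..500, step ~0.1
img = np.arange(0.0, 500.0, 0.1)
norm = img / 500.0                       # scale to [0, 1]
compressed = mu_law_compress(norm)       # non-uniform, log-spaced
quantized = np.round(compressed * 255)   # 8-bit levels, 0..255
restored = mu_law_expand(quantized / 255.0) * 500.0
```

The effect is that small values get proportionally more of the 256 codes than large ones, preserving relative precision at the low end at the cost of the high end. Whether that trade-off suits image data for a CNN is exactly the open question.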

Thanks for your reply, njuffa. I get your point, and that is why I'm asking. But the loss in output quality of the int8-calibrated CNN is so severe that I can recognize almost nothing of what I expect to see.

I can only see a repeating pattern, which I would expect from the input.

Does anyone think the results should not be this bad?
Is it plausible that I get no usable results at all?
Can one estimate or verify that such a huge input range necessarily leads to such a bad outcome?

I don’t have any experience with CNNs applied to image processing; maybe some other participants in these forums can clue us in as to what the “normal” way of dealing with this scenario is.

A quick review of recent literature does suggest that images with high dynamic range are compressed down to 8 bits for use with CNNs, e.g.

I note that mu-law is one kind of logarithmic compression (companding) algorithm.