In the “Entropy Calibration - pseudocode” slide, it is described as follows:
Input: FP32 histogram H with 2048 bins: bin[0], ..., bin[2047]

For i in range(128, 2048):
    reference_distribution_P = [bin[0], ..., bin[i-1]]  // take first 'i' bins from H
    outliers_count = sum(bin[i], bin[i+1], ..., bin[2047])
    reference_distribution_P[i-1] += outliers_count
    P /= sum(P)  // normalize distribution P
    candidate_distribution_Q = quantize [bin[0], ..., bin[i-1]] into 128 levels  // explained later
    expand candidate_distribution_Q to 'i' bins  // explained later
    Q /= sum(Q)  // normalize distribution Q
    divergence[i] = KL_divergence(reference_distribution_P, candidate_distribution_Q)
End For

Find index 'm' for which divergence[m] is minimal
threshold = (m + 0.5) * (width of a bin)
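
To make sure I am reading the pseudocode correctly, here is a rough sketch of the outer loop in Python (NumPy/SciPy). The helper name quantize_and_expand is my own placeholder for the step I am asking about below, and scipy.stats.entropy(p, q) is used for KL(P || Q); please correct me if any of this is off:

import numpy as np
from scipy.stats import entropy  # entropy(p, q) returns KL(p || q)

def find_threshold(hist, bin_width, num_levels=128):
    # hist: FP32 histogram with 2048 bins
    num_bins = len(hist)
    divergences = {}
    for i in range(num_levels, num_bins):
        # reference distribution P: first i bins, outliers folded into bin i-1
        p = hist[:i].astype(np.float64)
        p[i - 1] += hist[i:].sum()
        p /= p.sum()

        # candidate distribution Q: quantize the first i bins into 128 levels,
        # then expand back to i bins (the step I am asking about below)
        q = quantize_and_expand(hist[:i], num_levels)
        q /= q.sum()

        # note: entropy() returns inf when q is 0 where p is not;
        # I assume those values of i are simply never the minimum
        divergences[i] = entropy(p, q)

    m = min(divergences, key=divergences.get)
    return (m + 0.5) * bin_width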
How is candidate_distribution_Q formed, and how is it expanded to 'i' bins? Can you give an example? For instance, when i = 141, how do I expand Q[0~127] (128 levels) into Q'[0~140] (141 bins)?
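
Here is my current guess for that step (the function name and the use of np.array_split are my own assumptions, not from the slides): split the 141 bins into 128 roughly equal groups, sum each group to get one quantized level, then spread each level's total evenly over the bins in its group that were originally non-zero, leaving zero bins at zero. Is this right?

import numpy as np

def quantize_and_expand(bins, num_levels=128):
    # with 141 bins and 128 levels, np.array_split gives 13 groups of 2 bins
    # followed by 115 groups of 1 bin
    bins = bins.astype(np.float64)
    expanded = np.zeros_like(bins)
    start = 0
    for group in np.array_split(bins, num_levels):
        stop = start + len(group)
        nonzero = group > 0
        if nonzero.any():
            # spread the group's total evenly over its originally non-zero bins
            expanded[start:stop][nonzero] = group.sum() / nonzero.sum()
        start = stop
    return expanded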
Thank you very much!