Not getting correct output while running inference using TensorRT on LPRNet FP16 model

I made the modification, but I am getting the same result as before.

[array([35, 35, 35, 35, 11, 35, 35, 10, 35, 28, 28, 35,  0, 35, 35,  0, 35,
       35,  2, 35, 35, 35, 35,  4], dtype=int32), array([1.        , 1.        , 1.        , 0.9995117 , 0.90478516,
       1.        , 0.9995117 , 0.94091797, 1.        , 0.9897461 ,
       0.99609375, 1.        , 0.9995117 , 0.83984375, 1.        ,
       1.        , 0.99072266, 1.        , 0.99853516, 1.        ,
       1.        , 1.        , 0.85302734, 1.        ], dtype=float32)]

What should I do next?

No, you already have the result.
[35, 35, 35, 35, 11, 35, 35, 10, 35, 28, 28, 35, 0, 35, 35, 0, 35,
35, 2, 35, 35, 35, 35, 4]

Please write some post-processing code.

If the user trains with deepstream_tao_apps/us_lp_characters.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub, there are 35 classes.

The "35" is the number of classes; it is the flag for blank_id.

The "11" means the character "B".
The "10" means the character "A".
The "28" means the character "T".

i.e.

character  inference_output
0          0
1          1
2          2
3          3
4          4
5          5
6          6
7          7
8          8
9          9
A          10
B          11
C          12
D          13
E          14
F          15
G          16
H          17
I          18
J          19
K          20
L          21
M          22
N          23
(O is omitted from the character set)
P          24
Q          25
R          26
S          27
T          28
U          29
V          30
W          31
X          32
Y          33
Z          34
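
For reference, here is a minimal post-processing sketch in Python. It assumes the standard greedy CTC decode (collapse consecutive repeats, then drop the blank index 35) and the character order from us_lp_characters.txt; the function name decode_lprnet is just illustrative.

```python
import numpy as np

# Character set assumed to match us_lp_characters.txt:
# digits 0-9, then A-Z without O (35 characters total).
CHARACTERS = list("0123456789ABCDEFGHIJKLMNPQRSTUVWXYZ")
BLANK_ID = len(CHARACTERS)  # 35 is the CTC blank index

def decode_lprnet(indices, confidences):
    """Greedy CTC decode: merge consecutive repeats, drop blanks,
    and map the remaining indices to characters."""
    plate, scores = [], []
    prev = BLANK_ID
    for idx, conf in zip(indices, confidences):
        if idx != prev and idx != BLANK_ID:
            plate.append(CHARACTERS[idx])
            scores.append(float(conf))
        prev = idx
    return "".join(plate), scores

# The output posted in this thread:
indices = np.array([35, 35, 35, 35, 11, 35, 35, 10, 35, 28, 28, 35, 0, 35,
                    35, 0, 35, 35, 2, 35, 35, 35, 35, 4], dtype=np.int32)
confidences = np.array([1.0, 1.0, 1.0, 0.9995117, 0.90478516, 1.0,
                        0.9995117, 0.94091797, 1.0, 0.9897461, 0.99609375,
                        1.0, 0.9995117, 0.83984375, 1.0, 1.0, 0.99072266,
                        1.0, 0.99853516, 1.0, 1.0, 1.0, 0.85302734, 1.0],
                       dtype=np.float32)

plate, scores = decode_lprnet(indices, confidences)
print(plate, scores)
```

With the output posted above, this yields the plate string "BAT0024" along with one confidence value per kept character.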


Thank you so much @Morganh.

I will do the coding for the post-processing.
