Getting different classifier results in C and Python applications in DS 5.0

Hi,

Hardware and Software Details:
Device: Jetson Xavier NX
Package: nvidia-jetpack
Version: 4.4.1-b50
Architecture: arm64
TensorRT: 7.1.3-1+cuda10.2
DeepStream: 5.0

Problem:

I am using the C application GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream. In this application I am able to get the classifier result by using:

```
g_print("Plate License %s\n", label_info->result_label);
```

**Output: Plate License 113BJ375**

But with the Python application I am getting:

Plate License 

    [ 910445127 875647826 56 0 0 0
    0 0 0 0 0 0
    0 0 0 0 0 0
    0 0 0 0 0 0
    0 0 0 0 0 0
    0 0 0 0 0 1
    0 1065242403 0 65 0 877607360
    127 875747280 127 0 0 0
    0 0 0 0 0 64
    0 164 0 130 0 631995120
    0 0 0 0 0 139
    0 60 0 0 1072693248 0
    0 140 0 40 0 0
    0 0 0 141 0 44
    0 858303104 101 0 0 0
    0 0 0 160 0 1029
    0 0 0 0 0 0
    0 0 0 0 0 0
    0 632937808 0 1 0 0
    0 -1924651544 127 0 0 -1924651120
    127 0 0 0 0 48641
    0 -1]

by using:

```
print("Plate License \n", label_info.result_label)
```


Can anyone suggest how I can convert this array of length 128 into the output given by the C code (Plate License 113BJ375)? And if that is not possible, what is another way to get the same result that the C code produces?
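For what it's worth, the printed array looks like the raw bytes of the underlying C `char` buffer reinterpreted as 32-bit integers, so the leading non-zero values should hold the label text. This is only a sketch under that assumption (little-endian `int32` layout, ASCII label, NUL-terminated string); `ints_to_cstring` is a hypothetical helper name, not part of pyds:

```python
import struct

def ints_to_cstring(values):
    """Reinterpret a sequence of 32-bit integers as raw bytes and
    decode the text up to the first NUL terminator, C-string style."""
    raw = b"".join(struct.pack("<i", v) for v in values)  # little-endian int32
    return raw.split(b"\x00", 1)[0].decode("ascii", errors="replace")

# Hypothetical example: the bytes of "ABC" packed into one int32
# (0x41='A', 0x42='B', 0x43='C', then a NUL terminator)
print(ints_to_cstring([0x00434241]))  # prints: ABC
```

If the array really is the reinterpreted label buffer, passing it through a helper like this should recover the plate string; the proper fix, though, is to get the bindings to expose `result_label` as a Python string in the first place.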


Thanks.

Same as Getting Issue in extracting classifier label in deepstream-imagedata-multistream python test application in DS-5, let's track it on that topic.