DeepStream's metadata label encoding

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU: GTX 1660 Super)
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (11.6)
• Issue Type (bugs)

In my pipeline, I have an nvinferserver element that runs an ensemble OCR model: preprocess, infer (PaddleOCR text recognition), and postprocess.

In my probe function, I want to get the output text that comes out of nvinferserver, since I am using an OCR model.

When I use the English OCR model with the following code, everything works fine.

gst_buffer = info.get_buffer()
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        l_classifier = obj_meta.classifier_meta_list
        while l_classifier is not None:
            classifier_meta = pyds.NvDsClassifierMeta.cast(l_classifier.data)
            l_label = classifier_meta.label_info_list
            while l_label is not None:
                label_info = pyds.glist_get_nvds_label_info(l_label.data)
                text_confidence = round(label_info.result_prob, 2)
                text = label_info.result_label     # text: hello
                l_label = l_label.next
            l_classifier = l_classifier.next
        l_obj = l_obj.next
    l_frame = l_frame.next

But when I change to the Japanese recognition model, I get an error:

line 91, in probe_fn
    text = label_info.result_label
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 0: unexpected end of data
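For context, a minimal standalone illustration (not taken from the pipeline): 0xe3 is the lead byte of a three-byte UTF-8 sequence, the range that covers most Japanese kana, so "unexpected end of data" at position 0 usually means the label bytes were cut off mid-character:

```python
# "あ" (U+3042) encodes to the three bytes e3 81 82 in UTF-8
label = "あ".encode("utf-8")
print(label.hex())  # e38182

# Keeping only the lead byte reproduces the exact error from the traceback
try:
    label[:1].decode("utf-8")
except UnicodeDecodeError as e:
    print(e.reason)  # unexpected end of data
```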

I have researched some methods, such as decoding label_info.result_label with another encoding, but that does not work: calling label_info itself raises no error, but as soon as I access label_info.result_label the exception is raised, so I cannot even assign it to a variable to work with afterwards.

So my questions are:

  1. Is there another way to get the text in the probe function without this error?
  2. Can I apply a different encoding to label_info.result_label to avoid the error above?
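One possible stop-gap on the Python side (a sketch based on the assumption that the pyds binding decodes the underlying C char array as UTF-8 at attribute-access time): since the exception comes from the access itself, it can at least be caught so the probe keeps running. `safe_label` is an illustrative helper name, not an existing API:

```python
def safe_label(label_info):
    """Return label_info.result_label, or None if its bytes are not valid UTF-8.

    Hypothetical helper: the binding raises UnicodeDecodeError inside the
    attribute access, so pure Python can only catch the failure here, not
    re-decode the raw bytes with a different encoding.
    """
    try:
        return label_info.result_label
    except UnicodeDecodeError:
        return None
```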


From the error, it seems to be a Python issue. Does print(label_info.result_label) also raise the error? Can you save label_info.result_label to a local file and check it with a third-party tool?

No. Any time I access label_info.result_label it raises the error, even when printing it:

with open("Output.txt", "a", encoding="utf-8") as text_file:
    text_file.write(label_info.result_label + "\n")

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 0: invalid continuation byte
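"invalid continuation byte" is the complementary failure mode (again a standalone reproduction, not pipeline code): 0xe5 also starts a three-byte sequence, but here the byte that follows is not of the continuation form 10xxxxxx:

```python
# 0xe5 promises two continuation bytes; 0x41 ("A") is not one
try:
    b"\xe5\x41\x41".decode("utf-8")
except UnicodeDecodeError as e:
    print(e.reason)  # invalid continuation byte
```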

For more information: the postprocess model of the ensemble is a Python model, and there I can still get the output text in UTF-8 as expected.

def execute(self, requests):

    responses = []

    for request in requests:
        # Get input tensors
        preds = pb_utils.get_input_tensor_by_name(
            request, "OUTPUT_INFER_RECS")

        indexs = pb_utils.get_input_tensor_by_name(
            request, "INDEXS")

        rec_result = self.postprocess_op(preds)
        text_recs = np.array([x[0] for x in rec_result])
        print(">> text_recs:", text_recs)  # it shows: >> text_recs: ['鈴木']

Does this web page help?

You mean this? encoding = "ISO-8859-1"
I cannot do that, since the error is already raised the moment I assign it to text.

  1. As you know, the DeepStream SDK is C code, and Python code uses the SDK through the Python bindings.
    The nvinferserver plugin is open source in DS 6.2. Since the issue may be in the C code, can you try DeepStream 6.2? You can print label_info->result_label in attachClassificationMetadata to check. The path is /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinferserver/; in particular, please rebuild the code and copy the .so to /opt/nvidia/deepstream/deepstream/lib/gst-plugins/
  2. Or can you provide the whole project, including the configuration files and models, via the forum email? We will have a try.

Thanks, I will give it a try.

One more question: can I use Adding metadata to the plugin before Gst-nvstreammux to add the result text (for example, from the postprocess model of my ensemble), and then catch it in the probe callback function?

Yes, please refer to the sample /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-gst-metadata-test/deepstream_gst_metadata.c. If you add user meta in an upstream plugin, you can get that user meta in a downstream plugin.


@fanzh I think the best way is to send you my pipeline and models.
You can find it here: GitHub - hoanhvmetavi/deepstream_ocr: Deepstream OCR simple pipeline.
The error is likely at line 70: print(">> Text:", text)

Looking forward to your response.

Thanks for the update. I can't connect to that RTSP source; could you provide a video recording? Thanks!

After testing the code, there is an error; can you fix it?
E0725 14:36:40.416195 22261] failed to load model 'text_det_ver3': at least one version must be available under the version policy of model 'text_det_ver3'
E0725 14:36:40.416389 22261] failed to load model 'text_rec_ver3': at least one version must be available under the version policy of model 'text_rec_ver3'
E0725 14:36:40.416450 22261] Invalid argument: ensemble 'ocr_ver3' depends on 'text_rec_ver3' which has no loaded version. Model 'text_rec_ver3' loading failed with error: at least one version must be available under the version policy of model 'text_rec_ver3'

Here is a license plate recognition sample for Chinese cars. If the prerequisites are not installed, the Chinese characters can't be displayed correctly.
For Chinese plate recognition, please make sure the OS supports the Chinese language; please refer to the link and item 2.

Thank you for your time.
I have uploaded my fixed models. Please pull the repo and try again.

Also, the problem might be in the custom parser: under models/ocr/custom_parser there is C++ code for the custom parser.

So I think the point is when opening the Japanese dictionary:

std::ifstream fdict;
setlocale(LC_CTYPE, "");
fdict.open("/opt/nvidia/deepstream/deepstream-6.1/surveillance_ai/models/ocr/jp_dict.txt");

fdict cannot decode those symbols, because when I print the text with

std::cout << "text:" << attrString << " text_rects[k * 20 + l]: " << text_rects[k * 20 + l] << " " << std::endl;

it shows ???

I'm very new to C++, so I hope you can find a way to modify it from here.
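A plausible root cause, stated as an assumption rather than a confirmed diagnosis: DeepStream stores labels in fixed-size C char arrays, and a strncpy-style copy in the custom parser can cut a multi-byte UTF-8 character in half, which would produce exactly the two decode errors seen above. A fix would truncate only at character boundaries; the idea, sketched in Python (truncate_utf8 is an illustrative name, not an existing API):

```python
def truncate_utf8(data: bytes, max_len: int) -> bytes:
    """Truncate UTF-8 bytes to at most max_len without splitting a character."""
    data = data[:max_len]
    # Drop trailing bytes until the remainder decodes cleanly; this loops at
    # most a few times, since UTF-8 characters are at most four bytes long.
    while data:
        try:
            data.decode("utf-8")
            return data
        except UnicodeDecodeError:
            data = data[:-1]
    return b""

# "鈴木" is six bytes in UTF-8; a naive cut at four bytes would split "木"
print(truncate_utf8("鈴木".encode("utf-8"), 4).decode("utf-8"))  # 鈴
```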

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

Please refer to my last comment: please install the Japanese character libraries first.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.