But when I change the OCR model to a Japanese recognition model, I get an error:
line 91, in probe_fn
text = label_info.result_label
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 0: unexpected end of data
I have researched some methods, such as decoding label_info.result_label with another encoding, but it doesn't work: accessing label_info itself raises no error, but as soon as I access label_info.result_label the exception is raised, so I can't even assign it to a variable to work with afterwards.
So my questions are:
Is there another way to get the text in the probe function without this error?
Can I apply another encoding to label_info.result_label to avoid the error above?
From the error, it seems to be a Python-side issue. Does print(label_info.result_label) also raise an error? Can you save label_info.result_label to a local file and check it with a third-party tool?
As you know, the DeepStream SDK is C/C++ code, and the Python code uses the SDK through the Python bindings.
The nvinferserver plugin is open source in DS 6.2. To check whether the label is already correct on the C side, can you try DeepStream 6.2? You can print label_info->result_label in attachClassificationMetadata to check. The source path is /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinferserver/. In particular, please rebuild the code and copy the resulting .so to /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_inferserver.so.
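For example, something like this could be added next to where result_label is filled in (a minimal debug sketch only; dump_label_bytes is a hypothetical helper, and where exactly to call it inside attachClassificationMetadata is up to you):

```cpp
/* Hypothetical debug helper: print the raw bytes of the label so a truncated
 * UTF-8 sequence (e.g. a dangling 0xe3 lead byte) is easy to spot. */
#include <cstdio>
#include <cstring>

static void dump_label_bytes (const char *label)
{
  size_t len = strlen (label);
  fprintf (stderr, "result_label (%zu bytes):", len);
  for (size_t i = 0; i < len; ++i)
    fprintf (stderr, " %02x", (unsigned char) label[i]);
  fprintf (stderr, "\n");
}

/* e.g. dump_label_bytes (label_info->result_label); */
```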
Or can you provide the whole project, including the configuration files and models, via the forum email? We will have a try.
One more question: can I use "Adding metadata to the plugin before Gst-nvstreammux" to attach the result text (for example, from the postprocessing model of my ensemble model), and then catch it in the probe callback function?
Yes, please refer to the sample /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-gst-metadata-test/deepstream_gst_metadata.c. If you add user meta in an upstream plugin, you can get that user meta in a downstream plugin; a rough sketch of the attach side follows.
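This is only a sketch based on that sample: MyOcrText, my_copy, my_release, attach_ocr_text, and the meta_type value are placeholders/assumptions here, and deepstream_gst_metadata.c should be followed for the exact meta_type convention and for the callbacks that carry the meta across nvstreammux.

```cpp
/* Sketch only: attach a custom string to the GstBuffer upstream of nvstreammux. */
#include <gst/gst.h>
#include <string>
#include "gstnvdsmeta.h"

struct MyOcrText { std::string text; };

/* Deep-copy callback used when the buffer/meta is copied downstream. */
static gpointer my_copy (gpointer data, gpointer /*user_data*/)
{
  return new MyOcrText (*static_cast<MyOcrText *> (data));
}

/* Release callback freeing the payload. */
static void my_release (gpointer data, gpointer /*user_data*/)
{
  delete static_cast<MyOcrText *> (data);
}

/* Call this from a pad probe (or the source element) before nvstreammux. */
static void attach_ocr_text (GstBuffer *buf, const std::string &text)
{
  MyOcrText *payload = new MyOcrText { text };
  NvDsMeta *meta = gst_buffer_add_nvds_meta (buf, payload, nullptr,
                                             my_copy, my_release);
  /* Placeholder value: set a user-defined meta_type the same way the
   * sample does so the meta can be recognized downstream. */
  meta->meta_type = (GstNvDsMetaType) 0x7001;
}
```

Downstream of nvstreammux, the sample's probe shows how this user meta is then retrieved from the frame metadata.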
After testing the code, there is an error. Can you fix it?
E0725 14:36:40.416195 22261 model_repository_manager.cc:546] failed to load model 'text_det_ver3': at least one version must be available under the version policy of model 'text_det_ver3'
E0725 14:36:40.416389 22261 model_repository_manager.cc:546] failed to load model 'text_rec_ver3': at least one version must be available under the version policy of model 'text_rec_ver3'
E0725 14:36:40.416450 22261 model_repository_manager.cc:526] Invalid argument: ensemble 'ocr_ver3' depends on 'text_rec_ver3' which has no loaded version. Model 'text_rec_ver3' loading failed with error: at least one version must be available under the version policy of model 'text_rec_ver3'
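For reference, this Triton error usually means the model repository entry has no numeric version sub-directory (or that directory is empty). A typical layout is shown below; the file names are only examples and depend on the backend:

```
models/
├── text_det_ver3/
│   ├── config.pbtxt
│   └── 1/
│       └── model.plan        (or model.onnx, model.graphdef, ... depending on backend)
├── text_rec_ver3/
│   ├── config.pbtxt
│   └── 1/
│       └── model.plan
└── ocr_ver3/                  (ensemble)
    ├── config.pbtxt
    └── 1/                     (a version directory is commonly still required, even if it stays empty)
```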
Here is a license plate recognition sample for Chinese cars. If the prerequisites are not installed, the Chinese characters can't be displayed correctly.
For Chinese plate recognition, please make sure the OS supports the Chinese language. Please refer to the link and item 2.
Thank you for taking the time.
I have uploaded my fixed models. Please pull the repo and try again.
Also, the problem might be in the custom parser: under models/ocr/custom_parser there is C++ code for the custom parser.
I think the key point is where the Japanese dictionary is opened:

```cpp
std::ifstream fdict;
setlocale(LC_CTYPE, "");
fdict.open("/opt/nvidia/deepstream/deepstream-6.1/surveillance_ai/models/ocr/jp_dict.txt");
```

It seems fdict cannot decode these symbols, because when I print the text with

```cpp
std::cout << "text:" << attrString << " text_rects[k * 20 + l]: " << text_rects[k * 20 + l] << " " << std::endl;
```

it shows ???.
I'm very new to C++, so I hope you can find a way to modify it from here.
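For reference, here is a minimal sketch of how the dictionary read and the label copy could be made UTF-8-safe. This is not the original parser code; loadDict, copyUtf8Safe, the dictionary path, and the 128-byte buffer size are assumptions.

```cpp
// Sketch only, not the original custom parser. jp_dict.txt is read as raw
// UTF-8 bytes (no setlocale needed), and the label is truncated only at a
// UTF-8 character boundary before being copied into a fixed-size buffer.
#include <algorithm>
#include <cstring>
#include <fstream>
#include <string>
#include <vector>

// One dictionary entry per line; std::getline keeps the UTF-8 bytes as-is.
static std::vector<std::string> loadDict (const std::string &path)
{
  std::vector<std::string> dict;
  std::ifstream fdict (path);
  std::string line;
  while (std::getline (fdict, line)) {
    if (!line.empty () && line.back () == '\r')   // tolerate CRLF dictionaries
      line.pop_back ();
    dict.push_back (line);
  }
  return dict;
}

// Copy src into a fixed-size char buffer without splitting a multi-byte UTF-8
// character; a half-copied character is what makes the Python binding raise
// "unexpected end of data" when it decodes result_label.
static void copyUtf8Safe (const std::string &src, char *dst, size_t dstSize)
{
  size_t len = std::min (src.size (), dstSize - 1);
  // Step back while the first dropped byte is a UTF-8 continuation byte
  // (0b10xxxxxx), i.e. while the cut falls inside a multi-byte character.
  while (len > 0 && (static_cast<unsigned char> (src[len]) & 0xC0) == 0x80)
    --len;
  std::memcpy (dst, src.data (), len);
  dst[len] = '\0';
}

// Hypothetical usage inside the classifier parser (attrString and the label
// buffer are assumptions about the surrounding code):
//   char label[128];
//   copyUtf8Safe (attrString, label, sizeof (label));
```

Note that the ??? from std::cout is often just the terminal or locale failing to render Japanese glyphs rather than proof that the bytes in attrString are wrong; the byte dump suggested earlier is a more reliable check. The "can't decode byte 0xe3 ... unexpected end of data" error in the Python probe is exactly what a multi-byte character cut in half looks like, which is what the boundary-safe copy above avoids.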
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please refer to my last comment: please install the Japanese character support libraries first.