Hello, this is Akash Singh. I have been using DeepStream for a year now and am currently facing a problem.
• Issue Type: NVIDIA-AI-IOT / deepstream_lpr_app does not work when using only the LPD and LPR models, but works when using the TrafficCamNet model + LPD and LPR models
Below is the hardware specification that I am using:
• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.3 [L4T 32.7.3]
• TensorRT Version: 8.2.1.9
• CUDA Version: 10.2.300
• cuDNN Version: 8.2.1.32
• Python Version: 3.6.9
• Model: NVIDIA Jetson Nano Developer Kit
I am using GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream for my project to detect car number plates and recognize their numbers.
The problem I am facing: when using all three models (TrafficCamNet + LPD + LPR), I am able to extract the OCR results through the probe function. But when using only the LPD and LPR models, the probe function does not return any OCR results.
I have checked this line:

```python
l_class = obj_meta.classifier_meta_list  # empty when using only LPD + LPR,
                                         # but contains OCR values with TrafficCamNet + LPD + LPR
```

in the following probe code:
```python
def osd_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    num_rects = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK
    lp_dict = {}
    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        '''
        print("Frame Number is ", frame_meta.frame_num)
        print("Source id is ", frame_meta.source_id)
        print("Batch id is ", frame_meta.batch_id)
        print("Source Frame Width ", frame_meta.source_frame_width)
        print("Source Frame Height ", frame_meta.source_frame_height)
        print("Num object meta ", frame_meta.num_obj_meta)
        '''
        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list
        num_rects = frame_meta.num_obj_meta
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            # no ROI
            l_class = obj_meta.classifier_meta_list  # empty when using only LPD + LPR,
                                                     # but contains OCR values with
                                                     # TrafficCamNet + LPD + LPR
            while l_class is not None:
                try:
                    class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
                except StopIteration:
                    break
                l_label = class_meta.label_info_list
                while l_label is not None:
                    try:
                        label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    except StopIteration:
                        break
                    print("Current OCR ", label_info.result_label)
                    try:
                        l_label = l_label.next
                    except StopIteration:
                        break
                try:
                    l_class = l_class.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```
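For debugging, I also factored the classifier-meta walk into a small helper so I could log exactly what (if anything) is attached to each object. This is just my own sketch, not code from the repo; the cast functions are injected (in the real pipeline I pass `pyds.NvDsClassifierMeta.cast` and `pyds.NvDsLabelInfo.cast`) so the traversal logic itself can be exercised without pyds:

```python
def collect_ocr(obj_meta, cast_class=lambda d: d, cast_label=lambda d: d):
    """Walk obj_meta.classifier_meta_list and return every result_label string.

    cast_class / cast_label default to identity so the function can be
    tested with plain mock objects; with pyds, pass the NvDs*.cast helpers.
    """
    results = []
    l_class = obj_meta.classifier_meta_list
    while l_class is not None:
        try:
            class_meta = cast_class(l_class.data)
        except StopIteration:
            break
        l_label = class_meta.label_info_list
        while l_label is not None:
            try:
                label_info = cast_label(l_label.data)
            except StopIteration:
                break
            results.append(label_info.result_label)
            try:
                l_label = l_label.next
            except StopIteration:
                break
        try:
            l_class = l_class.next
        except StopIteration:
            break
    return results
```

With this helper I can confirm that in the two-model pipeline the returned list is empty for every object, which to me suggests the LPR classifier meta is never being attached (i.e. the secondary GIE is not running on the LPD objects), rather than a bug in the traversal itself.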
The reason I am using two models instead of three (LPD and LPR instead of TrafficCamNet + LPD + LPR) is to get more FPS and reduce RAM and resource consumption, since there are also other programs running alongside the DeepStream app in my case.
Can you guide me on what changes I need to make in the probe function and the lpr_parser program in order to get the OCR value when using only the LPD and LPR models?
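My understanding from the nvinfer plugin documentation is that when TrafficCamNet is removed, LPD has to become the primary GIE and LPR a secondary GIE operating on LPD's detections. This is roughly how I have tried to configure it (the keys are standard nvinfer properties, but the file names, id values, and comments are my assumptions, so please correct me if this is wrong):

```ini
# lpd_config.txt -- LPD now acts as the primary GIE
[property]
gie-unique-id=1
process-mode=1          # primary mode: infer on full frames

# lpr_config.txt -- LPR as secondary classifier on LPD objects
[property]
gie-unique-id=2
process-mode=2          # secondary mode: infer on detected objects
operate-on-gie-id=1     # must match LPD's gie-unique-id
# input-object-min-width / input-object-min-height may filter out small
# plate crops; I am unsure whether these need lowering in my case
```

Is a configuration along these lines the right direction, or does the fix belong in the probe / lpr_parser code?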