frame_meta.obj_meta_list returns None with classification model

• Hardware Platform: GPU
• DeepStream Version: 6.3
• TensorRT Version: 8.6.1.6
• NVIDIA GPU Driver Version: 510.73.08
• How to reproduce the issue?
I’m deploying a classification model, but I’m encountering an issue where frame_meta.obj_meta_list returns None, which prevents me from obtaining the resulting class. Does anyone know what might be wrong or how to resolve this problem?

if (info.type & Gst.PadProbeType.BUFFER):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        logger.warning("Unable to get GstBuffer")
        return

    buffer_received_time = datetime.now().strftime('%H:%M:%S.%f')
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    l_frame = batch_meta.frame_meta_list
    cont = 0

    # Loop over each frame in the batch
    while l_frame is not None:
        time_now = datetime.now()
        time.sleep(0.1)  # Delay between frames, used here to inspect the outputs

        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            # Earlier attempts to read the classifier meta directly,
            # none of which produced a result:
            # cls_meta = pyds.NvDsClassifierMeta.cast(frame_meta.data)
            # cls_meta_lbl = cls_meta.label_info_list
            # cls_meta_lbl_info = pyds.NvDsLabelInfo.cast(cls_meta_lbl.data)
            # print("result_label:", cls_meta_lbl_info.result_label)
        except Exception:  # StopIteration
            break

        # On the first frame, grab the frame size and refresh the
        # last-detection timestamp so we do not enter the stop-line state
        if self.frame_width == 0:
            self.frame_width = frame_meta.source_frame_width
            self.frame_height = frame_meta.source_frame_height
            self.last_detection_moviment_timestamp = datetime.now()
            self.time_continuous_detection = 0
            print(frame_meta.source_frame_width, frame_meta.source_frame_height)

        # Display meta used to draw text on the image
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)

        self.cont_frame += 1  # Debug frame counter, remove me

        if self.stop_line_status:
            # Line stop in progress: draw a message on the image
            self.add_text_to_display_meta(display_meta, "PARADA DE LINHA",
                                          int(self.frame_width / 2), int(self.frame_height / 2),
                                          1, 1, font_size=40)
        else:
            # Draw the central counting line on the frame (visualization only)
            self.add_lines_to_display_meta(display_meta, self.line_count_x, 0,
                                           self.line_count_x, self.line_count_y)

        # List of detected objects for this frame
        l_obj = frame_meta.obj_meta_list
        print("l_obj:", l_obj)
        print("frame_meta.num_obj_meta:", frame_meta.num_obj_meta)
        print("frame_meta.bInferDone:", frame_meta.bInferDone)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

Result:

l_obj: None
frame_meta.num_obj_meta: 0
frame_meta.bInferDone: 0

• Requirement details
Link to model and labels file: model and labels file
Config file: config_infer_primary_resnet18_vehicletypenet_v2.txt (1.0 KB)
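
For reference, the relevant [property] keys of an nvinfer classifier config look roughly like the sketch below. The key names come from the gst-nvinfer documentation; the file names and values here are illustrative, not the contents of the attached file:

[property]
gpu-id=0
# network-type=1 marks the model as a classifier: it outputs labels,
# not bounding boxes, so it creates no object meta of its own.
network-type=1
# process-mode=1 = primary (full frame); process-mode=2 = secondary
# (runs on objects detected by an upstream GIE).
process-mode=1
gie-unique-id=1
onnx-file=resnet18_vehicletypenet_v2.onnx   # illustrative path
labelfile-path=labels.txt                   # illustrative path
classifier-threshold=0.5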

Since you are using a classification model, you should use the NvDsClassifierMetaList to get what you want.
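
A minimal sketch of that pattern, following our deepstream-test2 sample (it assumes the classifier runs as a secondary GIE on detected objects, so that obj_meta.classifier_meta_list is populated):

# Sketch: read classifier results attached to each detected object.
# Assumes frame_meta was obtained as in the probe above.
l_obj = frame_meta.obj_meta_list
while l_obj is not None:
    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    l_cls = obj_meta.classifier_meta_list
    while l_cls is not None:
        cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
        l_label = cls_meta.label_info_list
        while l_label is not None:
            label_info = pyds.NvDsLabelInfo.cast(l_label.data)
            print("class id:", label_info.result_class_id,
                  "label:", label_info.result_label,
                  "prob:", label_info.result_prob)
            try:
                l_label = l_label.next
            except StopIteration:
                break
        try:
            l_cls = l_cls.next
        except StopIteration:
            break
    try:
        l_obj = l_obj.next
    except StopIteration:
        break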

Thank you for the answer. Since l_obj is None, it is not possible to retrieve obj_meta, which means accessing its classifier_meta_list is also not feasible.

The probe code is the same as above; I only added the following right after the prints (before advancing l_frame):
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        print("obj_meta:", obj_meta)
        cls_obj = obj_meta.classifier_meta_list
        print("cls_obj:", cls_obj)

Even when I use a yolov8m.onnx detector followed by the previously mentioned classifier, classifier_meta_list still returns None (screenshots showing l_obj and cls_obj as None attached).

I was able to obtain the classification result. However, since the classifier is secondary and requires tracking to be active, the detected object is only classified once.
[Screenshots of the classification results attached]
Is there a parameter that enables classification on every frame for that object?

The default is to classify on each frame. Could you try to use our deepstream-test2 to reproduce your issue?

Thank you for the reply. I developed a cascaded model where detection is first performed with YOLOv8, and the classification of the detected object is then carried out with ResNet18. However, after the initial classification, since the object is static and keeps the same tracking ID, the classification is not repeated.
[Screenshot of the pipeline output attached]
In my application, it is necessary to detect a static region where the classification should change based on the lighting condition: when the light is off, the region should belong to one class, and when the light is on, it should belong to another class. Because the region is static, the object’s ID does not change, preventing reclassification. For my application, it is crucial that this classification is performed continuously, even if the object remains the same. Is there a way to enable constant reclassification of the detected object?
Another question: how do I define an ROI (region of interest)? Given that it is a static region, instead of using a detector and a classifier in cascade, would it be possible to simply define an ROI where the classification model continuously performs classification? How can I configure such an ROI in the DeepStream pipeline?

Could you attach your current pipeline?

Could you try to remove the tracker from your pipeline?
You can also try to tune the matchingScoreWeight4VisualSimilarity parameter of the tracker.
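
For reference, in the NvDCF tracker YAML config that weight lives in the DataAssociator section; a fragment might look like this (the key names match the sample configs shipped with DeepStream, the values are illustrative):

DataAssociator:
  dataAssociatorType: 0
  associationMatcherType: 1
  checkClassMatch: 1
  # Relative weights of the matching-score terms; raising the visual-
  # similarity weight makes re-association rely more on appearance.
  matchingScoreWeight4VisualSimilarity: 0.6
  matchingScoreWeight4SizeSimilarity: 0.6
  matchingScoreWeight4Iou: 0.4
  minMatchingScore4Overall: 0.0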

You can refer to our preprocess plugin.
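
For the ROI part of the question, the nvdspreprocess plugin can crop static per-source ROIs and hand only those regions to nvinfer as input tensors. A rough fragment of its config, with key names taken from the deepstream-preprocess-test sample and illustrative shapes/coordinates:

[property]
enable=1
target-unique-ids=1                   # gie-unique-id of the nvinfer consuming the tensors
network-input-order=0
network-input-shape=1;3;224;224       # illustrative: batch;channels;height;width
processing-width=224
processing-height=224
tensor-data-type=0
tensor-name=input_1                   # illustrative input tensor name
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0
process-on-roi=1
# One or more ROIs per source as left;top;width;height (illustrative values)
roi-params-src-0=100;200;400;300
custom-input-transformation-function=CustomAsyncTransformation

The downstream nvinfer then needs its input-tensor-meta property enabled so that it consumes the tensors prepared by nvdspreprocess instead of doing its own preprocessing.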

The first two methods were unsuccessful. It was not possible to remove the tracker because, without it, the secondary model is unable to perform inference. Additionally, varying the values of matchingScoreWeight4VisualSimilarity did not yield good results, likely because it involves detecting a single, static region. The last suggested method will still be tested.

If you set the right operate-on-gie-id for the SGIE and gie-unique-id for the PGIE, it will perform inference normally without a tracker.
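
Roughly, the relevant fragments of the two nvinfer configs pair up like this (illustrative values):

# PGIE config (detector)
[property]
gie-unique-id=1
network-type=0        # detector
process-mode=1        # full frame

# SGIE config (classifier)
[property]
gie-unique-id=2
network-type=1        # classifier
process-mode=2        # operate on detected objects
operate-on-gie-id=1   # consume objects produced by the PGIE above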

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.