How to print details of classifier_meta_list meta?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
x86
• DeepStream Version
6.4
• JetPack Version (valid for Jetson only)
none
• TensorRT Version
8.6
• NVIDIA GPU Driver Version (valid for GPU only)
544
• Issue Type( questions, new requirements, bugs)
requirements
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
The official demo is here.

That is good, but it does NOT print the secondary models' output info.
In the demo, there are 4 "nvinfer" instances:
pgie: 4-class detector. In the demo (Python script) it can print the object details, which is good.
sgie1: classifier for vehicle make; there is no output info, which is NOT good.
sgie2: classifier for vehicle type; there is no output info, which is NOT good.
sgie3: classifier for vehicle color; there is no output info, which is NOT good.

Could you please provide sample code which can print all the secondary classifiers' info?
Thanks.

How do you print the classifier info?

I have fixed it, thanks.
The key is that the secondary infer needs "classifier-async-mode=0".
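
As a sketch, the relevant part of the sgie config file might look like this (the key names are standard Gst-nvinfer properties; the values are illustrative, not copied from the demo's config):

```ini
[property]
gie-unique-id=2          # this sgie's own id
operate-on-gie-id=1      # classify objects produced by the pgie (gie-unique-id=1)
process-mode=2           # 2 = secondary mode (operate on detected objects)
# 0 = synchronous: the classifier output is attached to the object meta
# before the buffer reaches downstream elements, so classifier_meta_list
# is already populated when the probe function runs.
classifier-async-mode=0
```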

    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            #### iter classifier info
            print("==== obj_label ",obj_meta.obj_label,
                  "unique_component_id ",obj_meta.unique_component_id ,
                  "class_id ",obj_meta.class_id ,
                  )
            l_classifier = obj_meta.classifier_meta_list
            l_user = obj_meta.obj_user_meta_list
            print("==== l_classifier ",l_classifier)
            print("==== l_user ",l_user)
            print("==== misc_obj_info ",obj_meta.misc_obj_info)
            if obj_meta.class_id>-1:
                obj_counter[obj_meta.class_id] += 1
            while l_classifier is not None:
                try:
                    print("===== cvt classifier_meta ", l_classifier)
                    classifier_meta = pyds.NvDsClassifierMeta.cast(l_classifier.data)
                except StopIteration:
                    break
                print("===== num_labels ", classifier_meta.num_labels)
                l_label = classifier_meta.label_info_list
                while l_label is not None:
                    try:
                        # Casting l_label.data to pyds.NvDsLabelInfo
                        label_meta = pyds.NvDsLabelInfo.cast(l_label.data)
                    except StopIteration:
                        break
                    print("=======  label num_classes", label_meta.num_classes)
                    print("=======  label result_label", label_meta.result_label)
                    print("=======  label result_class_id", label_meta.result_class_id)
                    print("=======  label label_id", label_meta.label_id)
                    print("=======  label result_prob", label_meta.result_prob)
                    try:
                        l_label = l_label.next
                    except StopIteration:
                        break
                try:
                    l_classifier = l_classifier.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(
            frame_number,
            num_rects,
            obj_counter[PGIE_CLASS_ID_VEHICLE],
            obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
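
To cut down the nested try/StopIteration boilerplate when walking these metadata lists, a small generic iterator helper can be used. This is a sketch, not part of pyds; it only assumes the pyds GList convention that each node has `.data` and `.next`, and that `cast`/`next` raise `StopIteration` at the end of the list:

```python
def glist_iter(glist, cast_fn):
    """Yield cast_fn(node.data) for every node in a pyds-style GList.

    Works for any of the metadata lists in the probe above
    (obj_meta_list, classifier_meta_list, label_info_list, ...).
    """
    while glist is not None:
        try:
            item = cast_fn(glist.data)
        except StopIteration:
            return
        yield item
        try:
            glist = glist.next
        except StopIteration:
            return


# Hypothetical usage inside the probe, assuming pyds is imported:
#
# for classifier_meta in glist_iter(obj_meta.classifier_meta_list,
#                                   pyds.NvDsClassifierMeta.cast):
#     for label_meta in glist_iter(classifier_meta.label_info_list,
#                                  pyds.NvDsLabelInfo.cast):
#         print(label_meta.result_label, label_meta.result_prob)
```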

Glad to hear that. If there are other questions, you can file a new topic.

One more thing: in my scenario, 9 vehicles are detected, but
only one vehicle is processed by the secondary "infer" instance (the vehicle make classifier).
Could you help to fix it?

The log is:

ssh://root@localhost:4422/usr/bin/python3 -u /root/host_dir/Documents/workbench__/deepstream_python_apps/apps/deepstream-test2/deepstream_test_1_primary_secondary_infer_1image.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.jpg
Creating Pipeline 

Creating Source 

Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.jpg 
Adding elements to Pipeline 

Linking elements in the Pipeline 

/root/host_dir/Documents/workbench__/deepstream_python_apps/apps/deepstream-test2/deepstream_test_1_primary_secondary_infer_1image.py:350: DeprecationWarning: Gst.Element.get_request_pad is deprecated
 sinkpad = streammux.get_request_pad("sink_0")
Starting pipeline 

Running in WSL
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
0:00:09.383436249  2705 0x556d2ee2df80 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 20x1x1          

0:00:09.568054995  2705 0x556d2ee2df80 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Secondary_VehicleMake/resnet18_vehiclemakenet.etlt_b16_gpu0_int8.engine
0:00:09.597006703  2705 0x556d2ee2df80 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:mod_dstest2_sgie1_config.txt sucessfully
0:00:09.598867604  2705 0x556d2ee2df80 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1243> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
0:00:16.180235580  2705 0x556d2ee2df80 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:16.363474287  2705 0x556d2ee2df80 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
0:00:16.411858766  2705 0x556d2ee2df80 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:mod_dstest2_pgie_config.txt sucessfully
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STREAM_STATUS of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STREAM_STATUS of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STREAM_STATUS of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STREAM_STATUS of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STREAM_STATUS of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STREAM_STATUS of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STREAM_STATUS of type Gst.MessageType>
nvstreammux: Successfully handled EOS for source_id=0
==== obj_label  person unique_component_id  1 class_id  2
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  person unique_component_id  1 class_id  2
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  person unique_component_id  1 class_id  2
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  person unique_component_id  1 class_id  2
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  person unique_component_id  1 class_id  2
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  <pyds.GList object at 0x7f9eb9938cf0>
==== l_user  None
==== misc_obj_info  [0 0 0 0]
===== cvt classifier_meta  <pyds.GList object at 0x7f9eb9938cf0>
===== num_labels  1
=======  label num_classes 0
=======  label result_label ford
=======  label result_class_id 6
=======  label label_id 0
=======  label result_prob 0.7562492489814758
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
==== obj_label  car unique_component_id  1 class_id  0
==== l_classifier  None
==== l_user  None
==== misc_obj_info  [0 0 0 0]
Frame Number=0 Number of Objects=14 Vehicle_count=9 Person_count=5
batch_user_meta_list %s None
BUS: type <flags GST_MESSAGE_STREAM_START of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_TAG of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_ASYNC_DONE of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_LATENCY of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_NEW_CLOCK of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_STATE_CHANGED of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_ELEMENT of type Gst.MessageType>
BUS: type <flags GST_MESSAGE_EOS of type Gst.MessageType>
End-of-stream
End-of-stream


Process finished with exit code 0

Could you try tuning some parameters of the sgie, like classifier-threshold?

Yes, there are some filters.
The threshold, min-width, and min-height should be set up properly.
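
For reference, those filters correspond to standard Gst-nvinfer config keys on the sgie. The values below are illustrative only, not tuned for this scene:

```ini
[property]
# Minimum confidence for a classification result to be attached
classifier-threshold=0.2
# Objects smaller than this are skipped by the secondary classifier
input-object-min-width=64
input-object-min-height=64
```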

You mean that after you set up the threshold, min-width, and min-height parameters, there is still a problem?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.