Python sample of chaining detection and classification models

• Hardware Platform (Jetson / GPU)
Jetson Nano 2GB
• DeepStream Version
6

I’ve retrained a detectnet_v2 model to detect 4 classes of objects with a proprietary dataset, but noticed that 2 of the classes, bicycle and electric-bicycle, can’t be separated well during inference (in the worst case, a ground-truth bicycle gets labeled as both bicycle and electric-bicycle, and vice versa).
I’ve since trained a 2-class classification model for them, hoping it can improve the accuracy of the detection results.
I’m using the DeepStream Python apps; by following test4 I was able to upload all detected objects to Kafka, but I’m now confused about how to chain the classification model so that it validates only bicycle and electric-bicycle, and how to trust the classification result for uploading.

I looked at deepstream-test2 and have these questions:

  1. Do I need to add a tracker to the pipeline?
  2. How do I tell the pipeline to run the classification model only on the specified 2 classes?
  3. How do I extract the classification result?
    The sgie 1-3 in test2 further detect a car’s color, make, and type, but I didn’t see this info extracted in the function osd_sink_pad_buffer_probe.
  4. Can the Jetson Nano 2GB support the 2-model pipeline?
    I’m using a pruned detection model with an 8 MB file size and interval=7 to achieve 24 FPS. Checking top (column RES), my detection Python app (based on test4) uses 1.1 GB of memory, and only 32 MB of memory is now free.

Hi,

1. Usually, we use a tracker to generate the intermediate bounding boxes so that you don’t need to run inference on every frame.
Since the Nano 2GB is relatively resource-limited, enabling a tracker should be a good choice.
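
For reference, here is a minimal sketch of the element ordering with a tracker between the two models (following the structure of deepstream-test2; the config file paths below are placeholders for your own models):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("two-model-pipeline")

# Primary detector -> tracker -> secondary classifier
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")

# Placeholder config paths; nvtracker also needs its ll-lib-file /
# ll-config-file properties set, as shown in deepstream-test2
pgie.set_property("config-file-path", "detectnet_v2_config.txt")
sgie.set_property("config-file-path", "bicycle_classifier_config.txt")

for elem in (pgie, tracker, sgie):
    pipeline.add(elem)

# Source, nvstreammux, nvdsosd, and sink elements are omitted here; the
# relevant part is that the tracker sits between the two nvinfer elements
pgie.link(tracker)
tracker.link(sgie)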

2. In the nvinfer configuration, there is a filter-out-class-ids key to filter out the unwanted classes:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications
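
As a sketch, the secondary classifier’s config could restrict which detector classes it runs on like this (the class IDs are placeholders; operate-on-class-ids, documented on the same page, is the key that limits a secondary nvinfer to specific classes):

[property]
# Run as a secondary classifier on the primary detector's objects
process-mode=2
operate-on-gie-id=1
# Classify only the bicycle / electric-bicycle detections
# (class IDs 2 and 3 are assumptions -- use the IDs from your labels file)
operate-on-class-ids=2;3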

3. Please try the following changes:

diff --git a/apps/deepstream-test2/deepstream_test_2.py b/apps/deepstream-test2/deepstream_test_2.py
index 91970fa..f056c4c 100644
--- a/apps/deepstream-test2/deepstream_test_2.py
+++ b/apps/deepstream-test2/deepstream_test_2.py
@@ -76,6 +76,34 @@ def osd_sink_pad_buffer_probe(pad,info,u_data):
                 obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
             except StopIteration:
                 break
+
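+            # Walk the classifier metadata attached to this object by the secondary nvinfer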
+            c_obj = obj_meta.classifier_meta_list
+            while c_obj is not None:
+                try:
+                    cls_meta=pyds.NvDsClassifierMeta.cast(c_obj.data)
+                except StopIteration:
+                    break
+
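+                # Each classifier meta holds one or more label results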
+                label=cls_meta.label_info_list
+                while label is not None:
+                    try:
+                        l=pyds.NvDsLabelInfo.cast(label.data)
+                    except StopIteration:
+                        break
+                    print('result_class_id=%d, result_label=%s, result_prob=%.3f' \
+                            %(l.result_class_id, l.result_label, l.result_prob))
+                    try:
+                        label=label.next
+                    except StopIteration:
+                        break
+
+                try:
+                    c_obj=c_obj.next
+                except StopIteration:
+                    break
+
             obj_counter[obj_meta.class_id] += 1
             try: 
                 l_obj=l_obj.next

With this change, we can get the secondary nvinfer output as below:

...
Frame Number=50 Number of Objects=12 Vehicle_count=9 Person_count=3
result_class_id=9, result_label=silver, result_prob=0.561
result_class_id=14, result_label=mazda, result_prob=0.632
result_class_id=0, result_label=coupe, result_prob=0.724
result_class_id=9, result_label=silver, result_prob=0.688
result_class_id=2, result_label=sedan, result_prob=0.567
result_class_id=10, result_label=white, result_prob=0.531
result_class_id=7, result_label=gmc, result_prob=0.684
result_class_id=10, result_label=white, result_prob=0.963
result_class_id=16, result_label=nissan, result_prob=0.977
result_class_id=2, result_label=sedan, result_prob=0.827
result_class_id=5, result_label=grey, result_prob=0.895
result_class_id=4, result_label=truck, result_prob=0.565
...

4. Based on your status, is the pipeline with two models working already?

Please note that several libraries are required to deploy an inference task.
These take some memory, as does the workspace needed to deploy a model.

So you may find that the Nano 2GB uses most of its resources just to launch an inference pipeline.

Thanks

Not yet. I’ve just trained the 2 models and have only deployed the first detection model on my Jetson Nano 2GB, which is running well. More importantly, I also need to deploy an extra Java application on the board, which may take another 100-200 MB of memory. So, following your suggestion, the 2-model pipeline is highly unlikely to run on the board, since the current single-model pipeline already leaves less than 100 MB of memory free, correct?

One more question: uploading messages to Kafka seems a trivial task in Python that only needs importing KafkaProducer and a few lines of code. Why does the DeepStream Python test4 app choose the GStreamer pipeline style, which introduces extra steps like building the librdkafka*.so plugin and an extra cfg_kafka.txt file? Is there any specific reason?
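
(For illustration, the plain-Python path I have in mind is just something like the following, using the kafka-python package; the broker address and topic are placeholders:)

from kafka import KafkaProducer

# Connect to the broker and publish one JSON payload per detection
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("detections", b'{"class": "electric-bicycle", "prob": 0.9}')
producer.flush()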

Hi,

I would suggest giving it a try.
If you deploy both models within the same DeepStream pipeline and the same process, the memory used for loading the libraries can be shared.

For the Kafka question:
Since DeepStream is a GStreamer-based SDK, the implementation needs to be GStreamer compatible.
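
(As a sketch of what this looks like in test4, the message path is built from GStreamer elements, so it runs inside the same pipeline and process as the inference; the property names follow deepstream-test4, while the connection string and config paths are placeholders:)

# Message converter: turns NvDsEventMsgMeta into a schema payload
msgconv = Gst.ElementFactory.make("nvmsgconv", "nvmsg-converter")
msgconv.set_property("config", "dstest4_msgconv_config.txt")

# Message broker: hands the payload to the protocol adapter (librdkafka)
msgbroker = Gst.ElementFactory.make("nvmsgbroker", "nvmsg-broker")
msgbroker.set_property("proto-lib",
        "/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so")
msgbroker.set_property("conn-str", "localhost;9092")
msgbroker.set_property("topic", "detections")
msgbroker.set_property("config", "cfg_kafka.txt")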

Thanks.
