I have integrated the Facenet model with the Peoplenet TLT model using the deepstream-test2 Python sample app.
I used Peoplenet as the primary detector and Facenet as the secondary inference engine, and removed the other two classifiers.
The app runs fine with no errors, but I am not able to parse the output tensor for Facenet: l_user = obj_meta.obj_user_meta_list is always None. I have enabled output tensor meta in the config file with output-tensor-meta=1.
The probe is added on the nvvidconv sink pad.
I didn't change anything in the tracker config.
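For context, the pipeline and the probe placement look roughly like this (a sketch using the deepstream-test2 element variable names; sgie1 is Facenet):

streammux.link(pgie)
pgie.link(tracker)
tracker.link(sgie1)        # sgie1 = Facenet, the only remaining SGIE
sgie1.link(nvvidconv)
nvvidconv.link(nvosd)      # nvosd -> sink as in the sample

# the probe sits on nvvidconv's sink pad, i.e. after the SGIE has run
vidconvsinkpad = nvvidconv.get_static_pad("sink")
if not vidconvsinkpad:
    sys.stderr.write("Unable to get sink pad of nvvidconv\n")
vidconvsinkpad.add_probe(Gst.PadProbeType.BUFFER, sgie_sink_pad_buffer_probe, 0)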
This is the configuration file I'm using for Facenet, and below is my probe function:
def sgie_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    # Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_PERSON: 0,
        PGIE_CLASS_ID_FACE: 0,
    }
    num_rects = 0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve the batch metadata attached to the buffer by the DeepStream plugins
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_user = frame_meta.frame_user_meta_list
        print('frame_meta.frame_user_meta_list is: ', l_user)

        # Iterate over the objects detected by the PGIE in this frame
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            obj_counter[obj_meta.class_id] += 1
            # This is where I expect the SGIE (Facenet) tensor meta, but it is always None
            l_user = obj_meta.obj_user_meta_list
            print(f'obj_meta.obj_user_meta_list {l_user}')

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
Just to make sure Facenet itself is working, I tried setting process-mode=1 so it acts as a primary GIE and runs on the whole frame. That worked: I was able to read the tensor output from l_user = frame_meta.frame_user_meta_list and get the output layer Bottleneck_BatchNorm/batchnorm_1/add_1:0. But I am not able to get it in secondary mode.
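For reference, this is roughly how I read it in primary mode (a sketch; the 128-value embedding length is an assumption about my Facenet export, and the buffer is read float by float as in deepstream-ssd-parser):

l_user = frame_meta.frame_user_meta_list
while l_user is not None:
    user_meta = pyds.NvDsUserMeta.cast(l_user.data)
    if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        for i in range(tensor_meta.num_output_layers):
            layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
            if layer.layerName == "Bottleneck_BatchNorm/batchnorm_1/add_1:0":
                # read the embedding values one float at a time
                embedding = [pyds.get_detections(layer.buffer, k) for k in range(128)]
    try:
        l_user = l_user.next
    except StopIteration:
        break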
• Hardware Platform (Jetson / GPU): Jetson NX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4.1
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
Since “output-tensor-meta=1” means nvinfer will not use its default post-processing, no objects will be inserted into the metadata. It exports the model's input and output layers, and you need to add your own customized post-processing to generate the correct object meta from that output. deepstream_python_apps/apps/deepstream-ssd-parser at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub is the proper example of how to use “output-tensor-meta=1”.
For more information, you can refer to the C/C++ sample deepstream-infer-tensor-meta-test.
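The pattern in that sample is: read the raw tensors from the user meta, run your own parsing, then attach the resulting object meta yourself. A condensed sketch following deepstream-ssd-parser (the parsing itself, and the values left/top/width/height, score and label, depend on your model):

# after parsing one detection from the output tensors
obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
obj_meta.class_id = 0
obj_meta.confidence = score
obj_meta.object_id = UNTRACKED_OBJECT_ID   # 0xffffffffffffffff in the sample
obj_meta.obj_label = label

rect = obj_meta.rect_params                # box in stream-muxer resolution
rect.left, rect.top, rect.width, rect.height = left, top, width, height
rect.border_width = 3
rect.has_bg_color = 0
rect.border_color.set(1.0, 0.0, 0.0, 1.0)

pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)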
This is exactly what I am trying to accomplish. I have followed both the Python deepstream-ssd-parser sample and the C++ sample deepstream-infer-tensor-meta-test, and they helped me a lot. But for some reason l_user = obj_meta.obj_user_meta_list is always None.
From the docs:
When operating as secondary GIE, NvDsInferTensorMeta is attached to each NvDsObjectMeta object’s obj_user_meta_list.
So I am trying to access obj_meta.obj_user_meta_list to get to the NvDsInferTensorMeta, but obj_meta.obj_user_meta_list is always None.
Since I set output-tensor-meta=1 and made it secondary with process-mode=2 in the classifier config file, obj_meta.obj_user_meta_list should not be None; it should contain a user meta list that I can iterate over to find the SGIE's tensor data. Is this right?
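In other words, inside the while l_obj loop of my probe I expect to be able to do something like this (a sketch):

l_user = obj_meta.obj_user_meta_list
while l_user is not None:
    user_meta = pyds.NvDsUserMeta.cast(l_user.data)
    if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        # tensor_meta.unique_id should match the SGIE's gie-unique-id
        # (the same check deepstream-infer-tensor-meta-test does in C++);
        # the Facenet output layers would then be parsed here as in primary mode
    try:
        l_user = l_user.next
    except StopIteration:
        break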
I have tried several times to change the config file parameters and the position of the probe function, without any success.
The image below shows the printed obj_meta.obj_user_meta_list value.
Does the C/C++ sample deepstream-infer-tensor-meta-test work on your platform?
We cannot determine the reason for the failure from your description. You need to provide the code, configuration files and samples so that we can reproduce your problem.
OK, sure. Since everything I'm working on is open source, I will provide all the code.
I think I will provide a GitHub repository with the code and steps. Is GitHub better, or do you prefer that I upload all the files here?
There is no problem with your code. I can get obj_user_meta with your code.
For test purposes, please remove the tracker from the pipeline.
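In deepstream_test_2.py that just means not adding the tracker and linking the PGIE directly to the SGIE, e.g. (a sketch using that sample's element variable names):

# bypass the tracker for this test; pipeline.add(tracker) removed
pgie.link(sgie1)           # instead of pgie.link(tracker); tracker.link(sgie1)
sgie1.link(nvvidconv)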
I tested with /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 and changed the SGIE to our prebuilt model.
My changed code: deepstream_test_2.py.txt (15.5 KB), default_sgie_config.txt (3.5 KB)
This is from my log with the tracker removed:
Frame Number=98 Number of Objects=41 Person_count=37 Face_count=0
obj_meta.obj_user_meta_list None
obj_meta.obj_user_meta_list None
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192bf80>
obj_meta.obj_user_meta_list None
obj_meta.obj_user_meta_list None
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f308>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f500>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f260>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f298>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f928>
obj_meta.obj_user_meta_list None
obj_meta.obj_user_meta_list None
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f0a0>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f8b8>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f2d0>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192fa08>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f6c0>
obj_meta.obj_user_meta_list <pyds.GList object at 0x7fa37192f7a0>
Thanks a lot.
I have tried your config file and made the changes to deepstream_test_2.py, including removing the tracker. It worked with the NVIDIA prebuilt model, but not with the Facenet model.
I did some experiments today and noticed that when I changed these properties in the SGIE config (input-object-min-width and input-object-min-height), I started getting some non-None lists.
Since I got some lists, this means that when both were 160, the engine was receiving cropped objects smaller than 160x160, so it ignored them.
It seems to be a height/width problem. How can I make the cropped objects from the PGIE get scaled to the SGIE's infer-dims size?
Jetson has a scaling limitation: it can only scale by a factor of 1/16 to 16 in one dimension. Does your SGIE accept 160x160 RGB data? If so, the input object minimum width and height should be 10 and 10.
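In the SGIE config that corresponds to something like this (sketch):

# in the [property] section of the SGIE config:
# smallest PGIE crop the SGIE will accept; a 10x10 crop can still be
# scaled up to a 160x160 network input within the 16x scaling limit
input-object-min-width=10
input-object-min-height=10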
What I did is this:
Instead of giving the .plan file via model-engine-file, I used onnx-file and gave it an ONNX file with a dynamic batch size.
I set batch-size=16 and removed input-object-min-width and input-object-min-height.
Now it accepts all object sizes, I get obj_meta.obj_user_meta_list for every object, and it runs fast on the Jetson NX.
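For reference, the relevant part of my SGIE config now looks roughly like this (a sketch; the ONNX file name is a placeholder and the gie-unique-id values follow deepstream-test2):

[property]
# dynamic-batch ONNX instead of a fixed .plan engine (placeholder path)
onnx-file=facenet_dynamic_batch.onnx
batch-size=16
# secondary mode, operating on the Peoplenet PGIE's objects
process-mode=2
operate-on-gie-id=1
gie-unique-id=2
# export the raw output tensors for custom parsing in the probe
output-tensor-meta=1
# input-object-min-width / input-object-min-height removed entirely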
@Fiona.Chen I was able to do the custom processing needed on the tensor metadata successfully, and now I need to attach the label to the metadata. But it gives me an error.
I was trying to convert code from the C++ sample deepstream-infer-tensor-meta-test to Python.
Code to save the label to the metadata:
# Generate classifier metadata and attach it to obj_meta
# Get NvDsClassifierMeta object
classifier_meta = pyds.nvds_acquire_classifier_meta_from_pool(batch_meta)
# Populate classifier_meta data with prediction result
classifier_meta.unique_component_id = tensor_meta.unique_id
# Get NvDsLabelInfo object
label_info = pyds.nvds_acquire_label_info_meta_from_pool(batch_meta)
#result is string
label_info.result_label = result # ERROR
label_info.result_prob = 0
label_info.result_class_id = 0
pyds.nvds_add_label_info_meta_to_classifier(classifier_meta, label_info)
pyds.nvds_add_classifier_meta_to_object(obj_meta, classifier_meta)
print(obj_meta.text_params.display_text)
display_text = obj_meta.text_params.display_text
obj_meta.text_params.display_text = f'{display_text} {result}'
Error:
Traceback (most recent call last):
File "deepstream_test_1.py", line 231, in sgie_sink_pad_buffer_probe
label_info.result_label = result
TypeError: (): incompatible function arguments. The following argument types are supported:
1. (arg0: pyds.NvDsLabelInfo) -> None
Invoked with: <pyds.NvDsLabelInfo object at 0x7f54b0c650>, 'Anas'
It seems result_label cannot be modified or replaced. I tried label_info.pResult_label instead; it accepts a string, but when I try to print it I get this error:
Traceback (most recent call last):
File "deepstream_test_1.py", line 233, in sgie_sink_pad_buffer_probe
print(label_info.pResult_label)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf7 in position 1: invalid start byte
So how can I store the predicted name in label_info.result_label or label_info.pResult_label? Or is there any other valid way in Python to store the predicted name in the metadata?
I don't have experience with C or C++; your help is very much appreciated.