The obj_user_meta_list of the sgie is None when I use a custom bbox parser as input

• Hardware Platform (Jetson / GPU): Xavier NX
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only)

I want to integrate PaddleOCR on the Jetson for real-time text recognition. I have exported the PaddleOCR detection model and recognition model as ONNX models.
I set the detection model as the pgie and the recognition model as the sgie.
I referred to the deepstream-ssd-parser (Python) and deepstream-infer-tensor-meta-test (C++) demos to learn how to pass the result to the secondary GIE, but I can't get the sgie result: obj_meta.obj_user_meta_list is None.
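For context, the pattern the deepstream-ssd-parser sample uses to hand custom-parsed boxes to a downstream sgie looks roughly like this (a sketch of that pattern; the field values here are illustrative, and the code only runs inside a DeepStream pipeline with pyds installed):

```python
import pyds

def add_obj_meta_to_frame(frame_meta, batch_meta, rect):
    """Attach one custom-parsed box so a downstream sgie can operate on it.
    rect is (left, top, width, height) in frame pixels -- illustrative values."""
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
    obj_meta.unique_component_id = 1   # must match the pgie's gie-unique-id
    obj_meta.class_id = 1              # must match the sgie's operate-on-class-ids
    obj_meta.confidence = 0.9

    rect_params = obj_meta.rect_params
    rect_params.left, rect_params.top, rect_params.width, rect_params.height = rect
    # Note: the sgie silently skips objects smaller than its minimum input size,
    # so boxes that are too small never produce secondary tensor meta.

    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)
```

If the boxes never make it into the frame meta this way (or are filtered out by size or class id), the sgie has nothing to run on and obj_user_meta_list stays None.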

l_obj = frame_meta.obj_meta_list
while l_obj is not None:
    try:
        # Cast l_obj.data to pyds.NvDsObjectMeta
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        print('class_id==', obj_meta.class_id)
        print('unique_component_id==', obj_meta.unique_component_id)

        l_user = obj_meta.obj_user_meta_list
        print('l_user==', l_user)
        while l_user is not None:
            try:
                # Cast l_user.data to pyds.NvDsUserMeta
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                print(tensor_meta.unique_id)
            except StopIteration:
                break
            try:
                l_user = l_user.next
            except StopIteration:
                break
    except StopIteration:
        break
    try:
        l_obj = l_obj.next
    except StopIteration:
        break
l_user is None; I get this output in the console:

class_id== 1
unique_component_id== 1
l_user== None
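One thing worth checking: the sgie's output tensor meta is only attached to the buffer after the sgie element has run, so the probe must sit downstream of the sgie (e.g. on its src pad), not on the pgie or an element before the sgie. A minimal sketch, assuming the nvinfer element for the recognition model is named `sgie` and the probe callback is the function containing the loop above (both names are illustrative):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

# Attach the buffer probe after the sgie so its tensor meta is already present.
sgie_src_pad = sgie.get_static_pad("src")
sgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)
```

If the probe is attached earlier in the pipeline, obj_user_meta_list will be None even when the sgie itself runs correctly.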

My pgie_config.txt is as follows:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373

model-engine-file=./model/det.engine
#model-color-format=2
#force-implicit-batch-dim=1
infer-dims=3;640;640
#batch-size=1
process-mode=1


network-mode=1
num-detected-classes=1
interval=1
gie-unique-id=1
network-type=100
output-tensor-meta=1

My sgie_config.txt is as follows:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#net-scale-factor=1


#force-implicit-batch-dim=1
model-file=./rec_model.onnx
model-engine-file=./model/rec.engine

gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=1

model-color-format=1
infer-dims=3;32;100
batch-size=1
process-mode=2

network-mode=1
interval=0

network-type=100
output-tensor-meta=1

I found this topic, and my problem is similar to it, but the fix there was done in C++. Can it also be solved in Python?
https://forums.developer.nvidia.com/t/secondary-inference-using-nvinferserver-after-deepstream-ssd-parser/181773

Can anyone help me? Thanks very much.

Make sure you can first run the C++ version of the app successfully after integrating your models, and that you can get the sgie object meta data there.
