How batch inference on a secondary model works with output-tensor-meta enabled

• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) - 515.65.01
• Issue Type( questions, new requirements, bugs) - questions

Hi,

I have a pgie and an sgie in my pipeline. The pgie is a face-detector model and the sgie is an embedding model that outputs a 128-dimensional vector. I enabled output-tensor-meta in the sgie config to access the output of the embedding model, and I do batch inference on the secondary model. My question is: does batch inference on the sgie attach a batched tensor meta, or an individual tensor meta for each detected object?
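(For reference, enabling the flag looks roughly like the sketch below; this is only illustrative, the element and config-file names are placeholders rather than my actual pipeline, and the same thing can be done with output-tensor-meta=1 in the nvinfer config file.)

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Secondary inference element (embedding model); names here are illustrative.
sgie = Gst.ElementFactory.make("nvinfer", "secondary-embedding")
sgie.set_property("config-file-path", "sgie_embedding_config.txt")
# Ask nvinfer to attach its raw output tensors as NvDsInferTensorMeta.
sgie.set_property("output-tensor-meta", True)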

Thanks.

Hi @arivarasan.e
It attaches the batched tensor meta, i.e., the tensor output of the network from batched inference.

Hi @mchi
In that case, does each object’s meta contain the batched user meta of all objects? Let’s say my primary detector detects 3 objects and the batch-size of the sgie is 4: is the batched output of the sgie available in all 3 objects’ meta?

Below is the code to access the user meta from obj_meta.

batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list

while l_frame is not None:
    try:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    except StopIteration:
        break

    frame_number = frame_meta.frame_num

    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        try:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        except StopIteration:
            break

        # Walk the user meta attached to this detected object and pick
        # out the sgie's tensor output.
        l_user = obj_meta.obj_user_meta_list
        while l_user is not None:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break

            if (
                user_meta.base_meta.meta_type
                == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
            ):
                tensor_meta = pyds.NvDsInferTensorMeta.cast(
                    user_meta.user_meta_data
                )
                # ... use tensor_meta for this object here ...

            # Advance to the next user meta (the loop never terminates
            # without this).
            try:
                l_user = l_user.next
            except StopIteration:
                break

        try:
            l_obj = l_obj.next
        except StopIteration:
            break

    try:
        l_frame = l_frame.next
    except StopIteration:
        break

Hi there, any update on this, please?

Hi @mchi
Any clarification on this, please?

Team, can anyone please look into this? I’ve been waiting for a clarification from you on it.

Sorry for the late response; our team will investigate and provide suggestions soon. Thanks.

Hi @arivarasan.e
Can you share what real problem your application runs into? Is the above code in a probe function? A probe on which plugin?

As you can see in the Gst-nvinfer — DeepStream 6.1.1 Release documentation, “output-tensor-meta” applies to both the PGIE and the SGIE.
And deepstream_python_apps/deepstream_ssd_parser.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub is an example of accessing the tensor data with “output-tensor-meta=true”.

Regarding “obj_meta”, you may have mixed it up with “output-tensor-meta”, since “output-tensor-meta” is not necessary for accessing obj_meta.

deepstream_python_apps/deepstream_test_2.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub is an example of accessing obj_meta.

Hi @mchi
Attaching my pipeline for your reference. Yes, the above code runs in a probe function on the sink pad of the tiler, after the sgie. My application doesn’t run into any problem, and my question isn’t about enabling ‘output-tensor-meta’ either; I understand from the sample apps how to access the output tensor of the model.

My question is about batch inference when output-tensor-meta is enabled. You mentioned that the batched output can be accessed from the tensor_meta. My sgie is an embedding model that outputs a (128,) embedding per object. In that case, does each object’s meta contain the batched tensor meta of all objects? Let’s say my primary detector detects 3 objects and the batch-size of the sgie is 4: is the batched output of the sgie, shape (128, 4), available in all 3 objects’ meta?
pipeline.py (27.4 KB)

@yingliu / @mchi , Any further clarification on this?

Hi @arivarasan.e
Sorry for the confusion!

I don’t think each object’s meta contains the batched tensor meta of all objects; you can find the structure of obj_meta in the NvDsObjectMeta — Deepstream Version: 6.1.1 documentation.

Then from which particular object’s meta can we access the batched tensor meta of all objects?

I understand. Even though the sgie does batch inference, an individual tensor meta is attached to each corresponding object meta. This is the clarification I was looking for. Please correct me if I’m wrong.

May I know how you reached this conclusion?

Hi, I concluded this based on your reply. Do you have a different opinion on it?

That’s not what I mean.
As you can find in the NvDsObjectMeta — Deepstream Version: 6.1.1 documentation, tensor meta is not attached to the object meta itself; that is, you can’t extract tensor meta directly from the object meta.

Yes, I know that the tensor_meta can only be extracted from user_meta of the object_meta.
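(If it helps to verify this in a pipeline, the sketch below, with a helper name of my own rather than anything from this thread, counts how many tensor-output metas land on a single detected object; with the sgie running in secondary mode and output-tensor-meta enabled, the expectation discussed above is one per object, each describing that object’s own 128-dim vector rather than the whole sgie batch.)

import pyds


def count_object_tensor_metas(obj_meta):
    """Count NVDSINFER_TENSOR_OUTPUT_META entries on one object meta."""
    count = 0
    l_user = obj_meta.obj_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if (
            user_meta.base_meta.meta_type
            == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
        ):
            count += 1
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return count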

Cool! All got addressed?

Yes. Thanks for the clarification.