How can I find out which ROI an object in obj_meta_list came from in the preprocess results?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello, I applied ROIs in nvdspreprocess as follows:
roi-params-src-0=0;0;1920;1080;0;200;640;640;640;200;640;640;1280;200;640;640

    NvDsMetaList *l_user_meta = NULL;
    NvDsUserMeta *user_meta = NULL;
    for (l_user_meta = pBatchMeta->batch_user_meta_list; l_user_meta != NULL;
         l_user_meta = l_user_meta->next)
    {
        user_meta = (NvDsUserMeta *)(l_user_meta->data);
        if (user_meta->base_meta.meta_type == NVDS_PREPROCESS_BATCH_META)
        {
            GstNvDsPreProcessBatchMeta *preprocess_batchmeta =
                (GstNvDsPreProcessBatchMeta *)(user_meta->user_meta_data);
            for (auto &roi_meta : preprocess_batchmeta->roi_vector)
            {
                // Object list of the frame this ROI belongs to.
                NvDsMetaList *pObjectMetaList = roi_meta.frame_meta->obj_meta_list;
                // For each detected object in the frame.
                while (pObjectMetaList)
                {
                    NvDsObjectMeta *pObjectMeta = (NvDsObjectMeta *)(pObjectMetaList->data);
                    std::cout << "image_id: " << roi_meta.frame_meta->frame_num << std::endl;
                    std::cout << "bbox: {" << pObjectMeta->rect_params.left << ", "
                                            << pObjectMeta->rect_params.top << ", "
                                            << pObjectMeta->rect_params.width << ", "
                                            << pObjectMeta->rect_params.height << "}" << std::endl;
                    std::cout << "score: " << pObjectMeta->confidence << std::endl;
                    std::cout << "category_id: " << pObjectMeta->class_id << std::endl;

                    pObjectMetaList = pObjectMetaList->next;
                }
            }
        }
    }

Every roi_meta seems to refer to the same pointer. I want to get the results for ROI 0, 1, 2, 3 separately; how can I do that?

You can refer to the open-source GstNvDsPreProcessBatchMeta. The roi_vector is a vector structure, so the ROIs should already be separated.
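For example, each roi_vector entry is an NvDsRoiMeta, and its roi member holds the rectangle of that particular ROI. A minimal sketch (it only prints the ROI rectangles; it does not map objects to ROIs):

    guint roi_idx = 0;
    for (auto &roi_meta : preprocess_batchmeta->roi_vector)
    {
        // roi_meta.roi is the NvOSD_RectParams of this particular ROI.
        g_print ("source %u ROI %u: left=%.0f top=%.0f width=%.0f height=%.0f\n",
                 roi_meta.frame_meta->source_id, roi_idx++,
                 roi_meta.roi.left, roi_meta.roi.top,
                 roi_meta.roi.width, roi_meta.roi.height);
    }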

What do you mean? I want to see the inference results for each ROI separately, not the results for all ROIs together. I am already referring to GstNvDsPreProcessBatchMeta in the code above. Even when I read obj_meta_list through roi_meta, the same values come out, because every entry in roi_vector points to the same frame_meta (the same pointer address).

Could you try to get each ROI through the roi_meta entries of the roi_vector?

Well, if you look at the code above, that is exactly what I am doing: NvDsMetaList *pObjectMetaList = roi_meta.frame_meta->obj_meta_list;

If NvDsRoiMeta had its own object meta list, that would be exactly what I need, but it does not. Are you sure the inference results are stored separately for each ROI?
Please stop going in circles and give me an answer with code. As far as I can tell from the docs: NVIDIA DeepStream SDK API Reference: NvDsRoiMeta Struct Reference | NVIDIA Docs

OK, I got what you mean. At present, we do not have a variable that maps objects to ROIs. Could you tell us what your usage scenario for separating the ROIs is?

That is unfortunate. As I mentioned in the question, my current roi-params-src is below:

roi-params-src-0=0;0;1920;1080;0;200;640;640;640;200;640;640;1280;200;640;640

As you can see, 0;0;1920;1080 is the full frame and the other three entries are ROIs. Because of the overlapping areas (between the full frame and the three ROIs), different boxes appear for the same object. To apply a merge algorithm I developed, I need the inference results for each ROI separately.

Is there a method…?

We'll check that. But why do you use ROIs with overlapping areas in your use case?

  1. First, the full frame must be inferred (most accurate).
  2. The ROIs are needed because, for example, in a scene looking forward from a vehicle, there are regions where objects are far away and appear small.
  3. If the ROIs did not overlap, a large object lying on a boundary would be cut off and detected as two separate objects, one in each of the two ROIs. So I need inference on the overlapping areas.

Do you understand? If so, please look into this feature.

OK. If there were no overlap, you could just separate the objects by the coordinates of their bboxes. But that is not applicable in your scenario. We will take this as an enhancement on our roadmap.
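For reference, the coordinate-based separation for the non-overlapping case could look like the sketch below: assign each detected object to whichever ROI rectangle fully contains its bbox. With overlapping ROIs, as in your setup, one object can match several ROIs, which is why this is not a real mapping (the helper name is only for illustration):

    // Returns true when the object's bbox lies fully inside the ROI rectangle.
    static bool bbox_inside_roi (const NvOSD_RectParams &bbox, const NvOSD_RectParams &roi)
    {
        return bbox.left >= roi.left &&
               bbox.top >= roi.top &&
               bbox.left + bbox.width <= roi.left + roi.width &&
               bbox.top + bbox.height <= roi.top + roi.height;
    }

    // Inside the roi_vector loop from your code:
    guint roi_idx = 0;
    for (auto &roi_meta : preprocess_batchmeta->roi_vector)
    {
        for (NvDsMetaList *l_obj = roi_meta.frame_meta->obj_meta_list;
             l_obj != NULL; l_obj = l_obj->next)
        {
            NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)(l_obj->data);
            if (bbox_inside_roi (obj_meta->rect_params, roi_meta.roi))
            {
                // The object lies inside ROI roi_idx; with overlapping ROIs it may
                // also lie inside other ROIs.
            }
        }
        roi_idx++;
    }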

OK. I would appreciate it if you could notify me when the improvement is available.

Sure. Please also follow our version updates. Thanks