Avoid primary nv-infer for specific stream ID

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) x86 RTX-3060
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) N/A
• TensorRT Version 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only) 525.125.06
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,
I have a requirement to modify the existing deepstream-app so that inference by the pgie element is skipped for specific stream IDs. In one of my use cases I don't need the primary detector to generate bounding boxes; instead a pre-defined ROI is used, and the sgie then runs on it as a secondary classifier. This could easily be achieved by implementing a separate app and mimicking pgie with a dummy element, but I want a single-pipeline solution that handles multiple RTSP streams: a few streams should use the regular pgie (detector) -> Tracker -> sgie (classifier) chain, while the other streams don't require pgie, as mentioned above. One workaround I tried is to append the ROIs directly as the desired bounding boxes in the custom post-processing function and let the pipeline do the rest, as below:

static std::vector<NvDsInferParseObjectInfo> decodeYoloTensor(const int*          num_dets,
                                                              const float*        bboxes,
                                                              const float*        scores,
                                                              const int*          labels,
                                                              const unsigned int& img_w,
                                                              const unsigned int& img_h)
{
    std::vector<NvDsInferParseObjectInfo> bboxInfo;
    size_t nums = num_dets[0];

    for (size_t i = 0; i < nums; i++) {
        /* Extract each bbox detected by the pgie network model from the
         * output layers and append it to bboxInfo. */
    }

    /* asset_roi_cord and shm_fd are globals mapped from shared memory
     * elsewhere in the app; they hold the pre-defined ROIs. */
    if (asset_roi_cord && shm_fd > 0)
    {
        int roi_len = asset_roi_cord->roi_len;
        for (int i = 0; i < roi_len; i++) {
            NvDsInferParseObjectInfo obj{};   /* zero-initialize all fields */
            obj.left    = asset_roi_cord->roi[i * 4];
            obj.top     = asset_roi_cord->roi[i * 4 + 1];
            obj.width   = asset_roi_cord->roi[i * 4 + 2];
            obj.height  = asset_roi_cord->roi[i * 4 + 3];
            obj.classId = 9;
            obj.detectionConfidence = 1.0f;   /* keep ROI boxes above any threshold */
            bboxInfo.push_back(obj);
        }
    }

    return bboxInfo;
}
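For completeness, this helper gets exported through DeepStream's custom bbox-parsing hook, roughly as below (a sketch: the output-layer ordering num_dets/bboxes/scores/labels is specific to my engine, and the exported function name is just an example):

#include "nvdsinfer_custom_impl.h"

/* Entry point referenced by parse-bbox-func-name in the pgie config.
 * The layer indices below assume the engine binds its outputs in the
 * order num_dets, bboxes, scores, labels. */
extern "C" bool NvDsInferParseCustomYolo (
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
  objectList = decodeYoloTensor (
      (const int *) outputLayersInfo[0].buffer,
      (const float *) outputLayersInfo[1].buffer,
      (const float *) outputLayersInfo[2].buffer,
      (const int *) outputLayersInfo[3].buffer,
      networkInfo.width, networkInfo.height);
  return true;
}

/* Compile-time check that the signature matches what nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE (NvDsInferParseCustomYolo);

It is wired into the pgie config through parse-bbox-func-name=NvDsInferParseCustomYolo and custom-lib-path pointing at the compiled library.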

However, with this approach I still pay the unnecessary latency cost of running the pgie engine on those streams. As a quick workaround I was thinking of somehow skipping primary inference for a given stream ID in the gstnvinfer.cpp code, but I don't know how to make those changes so that they handle batches containing multiple streams, which is usually the case when dealing with multiple sources.
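For reference, each frame in the batched buffer already carries its origin in the batch metadata, so any per-stream logic would presumably key off NvDsFrameMeta::source_id. A rough sketch using the standard DeepStream metadata APIs (the function itself is only illustrative):

#include "gstnvdsmeta.h"

/* Illustrative only: walk a batched buffer and report which source each
 * frame came from. gst_buffer_get_nvds_batch_meta() and NvDsFrameMeta are
 * standard DeepStream metadata APIs. */
static void
inspect_batch_sources (GstBuffer *inbuf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (inbuf);
  if (!batch_meta)
    return;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    g_print ("frame %d comes from source_id %u\n",
        frame_meta->frame_num, frame_meta->source_id);
  }
}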

What could be a possible workaround for this?
Thank you.

So some sources in your use case require pgie, while others do not? Are these sources fixed? For example, sources 1, 2, 3 require pgie and sources 4, 5, 6 do not.

Yes, the source IDs will be fixed.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

There is currently no built-in solution; you can try to implement it yourself by adding some logical control around the following dispatch code in gstnvinfer.cpp. For the streams that don't require pgie, you can route the buffer through the gst_nvinfer_process_objects API: since no objects are attached to those frames, nothing will be processed by default.

  if (nvinfer->input_tensor_from_meta) {
    flow_ret = gst_nvinfer_process_tensor_input (nvinfer, inbuf, in_surf);
  } else if (nvinfer->process_full_frame) {
    flow_ret = gst_nvinfer_process_full_frame (nvinfer, inbuf, in_surf);
  } else {
    flow_ret = gst_nvinfer_process_objects (nvinfer, inbuf, in_surf);
  }
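Building on that, one possible shape for the control logic (an untested sketch: skip_pgie_source_ids and batch_is_pgie_exempt are hypothetical names, e.g. a GHashTable filled from a new element property, not existing gstnvinfer members):

/* Hypothetical helper: TRUE if every frame in this batch comes from a
 * source that should bypass primary inference. */
static gboolean
batch_is_pgie_exempt (GstNvInfer *nvinfer, GstBuffer *inbuf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (inbuf);
  if (!batch_meta)
    return FALSE;

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    if (!g_hash_table_contains (nvinfer->skip_pgie_source_ids,
            GUINT_TO_POINTER (frame_meta->source_id)))
      return FALSE;   /* at least one frame in this batch still needs pgie */
  }
  return TRUE;
}

/* ...and the dispatch above would become: */
if (nvinfer->input_tensor_from_meta) {
  flow_ret = gst_nvinfer_process_tensor_input (nvinfer, inbuf, in_surf);
} else if (nvinfer->process_full_frame && !batch_is_pgie_exempt (nvinfer, inbuf)) {
  flow_ret = gst_nvinfer_process_full_frame (nvinfer, inbuf, in_surf);
} else {
  /* No objects are attached to these frames, so nothing is inferred. */
  flow_ret = gst_nvinfer_process_objects (nvinfer, inbuf, in_surf);
}

Note that this whole-batch check only helps if the muxer does not mix exempt and non-exempt sources in the same batch; for mixed batches the skip would have to be applied per frame inside gst_nvinfer_process_full_frame() instead.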
