Cannot use libtorch in the nvdsparsebbox function

I am using a custom nvdsparsebbox_Yolo.cpp with a custom NMS algorithm from [Yolov5-in-Deepstream-5.0/nvdsparsebbox_Yolo.cpp at master · DanaHan/Yolov5-in-Deepstream-5.0 · GitHub]. Everything runs well and num_obj_meta is correct. But when I replace the NMS with a libtorch version such as vision::ops::nms to speed it up, num_obj_meta is 0 in most cases. Moreover, the problem remains even if I only add libtorch operations that don't affect the NMS results to the code that still uses the original NMS. However, the libtorch NMS produces the same result as the original NMS, so this behavior confuses me.

Part of code is as follows:

num_obj_meta is correct:

```cpp
static bool NvDsInferParseYoloX(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList
)
{
    std::vector<Detection> res;
    float* prob = (float*)outputLayersInfo[0].buffer;
    // auto options = torch::TensorOptions().device(torch::kCUDA); // adding some libtorch operation to the original code also triggers the problem
    decode(prob, img_w, img_h);
    nms(res, prob, CONF_THRESH, NMS_THRESH);
    for (auto& r : res)
    {
        // put res results into objectList
    }
    return true;
}
```

num_obj_meta is 0 or very small:

```cpp
static bool NvDsInferParseYoloX(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList
)
{
    std::vector<Detection> res;
    float* prob = (float*)outputLayersInfo[0].buffer;
    decode(prob, img_w, img_h);
    auto options = torch::TensorOptions().device(torch::kCUDA); // same problem with kCPU
    auto out = torch::from_blob(prob, {sizes of model output});

    // extract bboxes from the out tensor after the CONF_THRESH filter

    // extract confs from the out tensor after the CONF_THRESH filter

    auto nms_index = vision::ops::nms(bboxes, confs, NMS_THRESH); // libtorch-version NMS

    // put the output tensors into objectList
    ....

    return true;
}
```
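For reference, the greedy IoU suppression that both the original `nms()` and `vision::ops::nms` implement can be sketched in plain C++. This is only an illustration of the algorithm, not the code used in the parser; the `Box`, `iou`, and `nmsIndices` names are ours, and boxes are in x1/y1/x2/y2 form as `vision::ops::nms` expects.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct Box { float x1, y1, x2, y2; };

// Intersection-over-union of two axis-aligned boxes.
static float iou(const Box& a, const Box& b) {
    float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
    float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
    float inter = std::max(0.f, ix2 - ix1) * std::max(0.f, iy2 - iy1);
    float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
    float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
    return inter / (areaA + areaB - inter);
}

// Greedy NMS: keep boxes in descending score order, suppressing any
// remaining box whose IoU with a kept box exceeds iouThresh.
std::vector<int> nmsIndices(const std::vector<Box>& boxes,
                            const std::vector<float>& scores,
                            float iouThresh) {
    std::vector<int> order(boxes.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return scores[a] > scores[b]; });

    std::vector<int> keep;
    std::vector<bool> suppressed(boxes.size(), false);
    for (int i : order) {
        if (suppressed[i]) continue;
        keep.push_back(i);
        for (int j : order)
            if (!suppressed[j] && j != i && iou(boxes[i], boxes[j]) > iouThresh)
                suppressed[j] = true;
    }
    return keep;
}
```

Since both implementations compute the same kept indices, the difference in num_obj_meta cannot come from the suppression step itself.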
Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : GPU TITANXP
• DeepStream Version : 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version : 8.2.0.6
• NVIDIA GPU Driver Version (valid for GPU only): 460.67
• Issue Type( questions, new requirements, bugs): questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Sorry for the late response. Is this still an issue you need support for? Thanks

Thank you for your attention. Yes, it is still an issue.

It seems there is still something missing in your customized parser.

I have solved the problem. The key point is the classId field in NvDsInferParseObjectInfo. Our model has only one class, and after the original NMS the classId happened to be 0 even though it was never explicitly assigned. After the libtorch-version NMS, however, classId held a random (uninitialized) value; once I explicitly assign it to 0, it works.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.