Deepstream loads standard libnvds_infer.so rather than my custom Yolo parser

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

I have two ResNets and a Yolo in my pipeline. Although I define a custom lib path in the Yolo's config, DeepStream doesn't load libnvds_infer.so from that path.

It loads from /opt/nvidia/deepstream/deepstream-6.4/lib instead.
I want it to load from my own path, because the two ResNets should use the standard lib while my Yolo uses the customized one.

I compiled my lib for the Yolo and set this in its config:
parse-bbox-func-name=parseBoundingBox
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer/libnvds_infer.so
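(For contrast, a typical working setup points custom-lib-path at a separately built parser library rather than at libnvds_infer.so itself. The names below are hypothetical examples, not the poster's actual files:)

```ini
# Hypothetical Yolo pgie config fragment.
# custom-lib-path names a standalone .so that only implements the parser;
# libnvds_infer.so itself stays untouched in /opt/nvidia/deepstream/deepstream-6.4/lib.
parse-bbox-func-name=NvDsInferParseCustomYolo
custom-lib-path=/path/to/libnvdsinfer_custom_yolo.so
```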

What have you customized in your /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer/libnvds_infer.so?

I customized the bounding box parsing in nvdsinfer_context_impl_output_parsing.cpp.

Did you implement the parseBoundingBox function with the NvDsInferParseCustomFunc interface?
Have you exported the parseBoundingBox function correctly?

We have samples of bbox parsing postprocessing. Please refer to NVIDIA-AI-IOT/deepstream_tao_apps at release/tao5.1_ds6.4ga.
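(The customization interface the samples use looks roughly like the sketch below. The struct definitions are simplified stand-ins so the sketch is self-contained; a real build would instead #include "nvdsinfer_custom_impl.h" from the DeepStream SDK, and the decoding of the tensor is model-specific, so the body here is only illustrative:)

```cpp
#include <cstring>
#include <vector>

// Simplified stand-ins for the real NvDsInfer types (assumption: in a real
// parser these come from nvdsinfer.h / nvdsinfer_custom_impl.h).
struct NvDsInferDims { unsigned int numDims; unsigned int d[8]; };
struct NvDsInferLayerInfo { NvDsInferDims inferDims; const char* layerName; void* buffer; };
struct NvDsInferNetworkInfo { unsigned int width, height, channels; };
struct NvDsInferParseDetectionParams { unsigned int numClassesConfigured; };
struct NvDsInferParseObjectInfo {
    unsigned int classId;
    float left, top, width, height;
    float detectionConfidence;
};

// The parser must have C linkage so gst-nvinfer can dlsym() it by the name
// given in parse-bbox-func-name. "NvDsInferParseCustomYolo" is an example name.
extern "C" bool NvDsInferParseCustomYolo(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    // Locate the "output0" tensor, as in the snippet above.
    const NvDsInferLayerInfo* layer = nullptr;
    for (auto const& l : outputLayersInfo) {
        if (std::strstr(l.layerName, "output0") != nullptr) { layer = &l; break; }
    }
    if (!layer)
        return false;

    // Real decoding of layer->buffer is model-specific; this dummy box only
    // shows how objectList is filled.
    NvDsInferParseObjectInfo obj{};
    obj.classId = 0;
    obj.left = 0.0f;
    obj.top = 0.0f;
    obj.width = static_cast<float>(networkInfo.width);
    obj.height = static_cast<float>(networkInfo.height);
    obj.detectionConfidence = 0.5f;
    objectList.push_back(obj);
    return true;
}
```

With this approach no DeepStream source file is modified; the .so is built separately and named in custom-lib-path.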

I just modified it to recognize my output layer and its structure:

bool
DetectPostprocessor::parseBoundingBox(vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    vector<NvDsInferObjectDetectionInfo>& objectList)
{
int outputLayerIndex = -1;

// Find the output layer by name ("output0")
for (unsigned int i = 0; i < outputLayersInfo.size(); i++)
{
    if (strstr(outputLayersInfo[i].layerName, "output0") != nullptr)
    {
        outputLayerIndex = i;
        break;
    }
}

// …

When I compile it and replace the libnvds_infer.so in /opt/nvidia/deepstream/deepstream-6.4/lib, my code executes.
But if I point the path at my libnvds_infer.so in any folder other than lib, it doesn't work and the default libnvds_infer.so loads instead.

No. Please don't change the /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer/ source code to customize the bbox parsing function. We have provided customization interfaces for this. Please refer to the samples: NVIDIA-AI-IOT/deepstream_tao_apps at release/tao5.1_ds6.4ga
and NVIDIA DeepStream SDK API Reference: nvdsinfer_custom_impl.h File Reference | NVIDIA Docs
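(When building such a standalone parser library, it can help to verify that the symbol named in parse-bbox-func-name is actually exported unmangled. The file names and include paths below are examples, not a definitive build recipe:)

```shell
# Hypothetical build of a standalone parser library; adjust paths to your setup.
g++ -shared -fPIC -o libnvdsinfer_custom_yolo.so yolo_parser.cpp \
    -I/opt/nvidia/deepstream/deepstream-6.4/sources/includes \
    -I/usr/local/cuda/include

# The symbol must appear with C linkage (unmangled), or the dlsym() lookup
# performed by gst-nvinfer will not find it.
nm -D libnvdsinfer_custom_yolo.so | grep NvDsInferParseCustomYolo
```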

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.