How to set ProfilingVerbosity::DETAILED in Deepstream?

I use the example from NVIDIA yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub to generate an fp32 model. But when I export the graph.json file, it only contains layer names. How can I set ProfilingVerbosity::DETAILED in DeepStream so that I get the full graph.json later? I searched in /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-app/ but I couldn't find where to set it. Sorry, I'm unfamiliar with C++.
I also tried changing

    void setProfilingVerbosity(ProfilingVerbosity verbosity) noexcept
    {
        // mImpl->setProfilingVerbosity(verbosity);
        mImpl->setProfilingVerbosity(ProfilingVerbosity::kDETAILED);

    }

But it had no effect.
Is there any way to set it in deepstream_config.txt? Thanks.

Do you mean to generate TensorRT model engine?

@Fiona.Chen Sorry for the confusion. I mean that I used the repo yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub to generate the fp32 engine model, but DeepStream's default ProfilingVerbosity is kLAYER_NAMES_ONLY. I want to set ProfilingVerbosity::DETAILED when running the DeepStream app, but I don't know how to set it.

I tried changing the TensorRT header at /usr/include/x86_64-linux-gnu/NvInfer.h as follows:

    void setProfilingVerbosity(ProfilingVerbosity verbosity) noexcept
    {
        // mImpl->setProfilingVerbosity(verbosity);      // ORIGINAL
        mImpl->setProfilingVerbosity(ProfilingVerbosity::kDETAILED);       // I changed it to this line

    }

I don't know which API DeepStream uses to generate the .engine model (trtexec?). The change seems to have no effect.

The reason I want to set ProfilingVerbosity::DETAILED when running the DeepStream app is that I want to use TensorRT/tools/experimental/trt-engine-explorer at main · NVIDIA/TensorRT · GitHub to explore my generated fp32 engine. trt-engine-explorer needs the full graph JSON file to show more information about the layers. If I don't set ProfilingVerbosity::DETAILED when running the DeepStream app, the graph JSON file only contains layer names, with no information about weights, layer types, etc.

I used this command to export the profile JSON file and graph JSON file from the generated fp32 engine:

/usr/src/tensorrt/bin/trtexec --loadEngine=../fp32.engine --exportLayerInfo=graph.json --exportProfile=profile.json

Thanks.

You can add it in the NvDsInferStatus TrtModelBuilder::configCommonOptions() function in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp, and then rebuild "libnvds_infer.so" in the directory /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/. Please read /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/README.

After that, the new “libnvds_infer.so” can be put in /opt/nvidia/deepstream/deepstream/lib/ to replace the old one.
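For reference, the change amounts to one extra call on the nvinfer1::IBuilderConfig that configCommonOptions() already sets up. A minimal sketch, assuming the function holds its builder config in a pointer named `config` (the variable name is an assumption; check the actual name in your DeepStream version):

```cpp
#include "NvInfer.h"

// Sketch only: inside NvDsInferStatus TrtModelBuilder::configCommonOptions(...)
// in nvdsinfer_model_builder.cpp, after the IBuilderConfig has been created.
// This requests full per-layer metadata (layer types, parameters, tactics)
// in the serialized engine, instead of the default kLAYER_NAMES_ONLY.
config->setProfilingVerbosity(nvinfer1::ProfilingVerbosity::kDETAILED);
```

Note that the verbosity is baked into the engine at build time, so rebuilding libnvds_infer.so alone is not enough: also delete any cached .engine file so DeepStream regenerates it. The graph.json that trtexec exports from the rebuilt engine should then contain the detailed layer information.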

Thanks. I will try it.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.