The confidence level of the second-level inference is greater than 1

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
1080Ti
• DeepStream Version
Ds6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.0.3.4
• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 510.108.03
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I am using a second-level inference (classifier) that I defined myself, and it returns a confidence greater than 1. How can I solve this problem? Is it because I made a mistake when converting the ONNX model to TensorRT?

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

This is my secondary GIE (classifier) config:


[property]
gpu-id=0
net-scale-factor=0.01735207357
model-engine-file=/home/incar/tms/deepstream-6.0/sources/apps/sample_apps/ivideoframe/weights/convnext_base_in22ft1k.onnx.engine
labelfile-path=/home/incar/tms/source/gb/alllable.txt
force-implicit-batch-dim=1
batch-size=1
model-color-format=1
process-mode=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
is-classifier=1
#output-blob-names=output
classifier-async-mode=1
classifier-threshold=0.51
input-object-min-width=20
input-object-min-height=20
operate-on-gie-id=1
operate-on-class-ids=0;
classifier-type=carcolor
gie-unique-id=2
num-detected-classes=882
#offsets=133.675;116.28;103.53
offsets=123.675;116.28;103.53

#scaling-filter=0
#scaling-compute-hw=0

  1. What is the model pipeline or use case? Does the model work correctly with a third-party tool?
  2. How did you get the confidence level?
  3. num-detected-classes is only for detection models.

1. I used this pipeline: image --> yolov7 --> classification. When yolov7 detects an object, the object is cropped and passed to the classifier. That classifier's confidence is greater than 1.
2. I tried running the timm classification model directly; the PyTorch (.pt) result is correct, so I converted it to ONNX and used trtexec to build the engine. I read the confidence from deepstream_app.c's result. My code looks like this:

if (!create_pipeline(appCtx[i], after_ie_image_meta_save,
                         nullptr, perf_cb, overlay_graphics))
...

static void
after_ie_image_meta_save(AppCtx *appCtx, GstBuffer *buf,
                         NvDsBatchMeta *batch_meta, guint index)
...
for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != nullptr;
         l_obj = l_obj->next)
    {
      NvDsObjectMeta *obj_meta = static_cast<NvDsObjectMeta *>(l_obj->data);
      if (!obj_meta_is_above_min_confidence(obj_meta) || !obj_meta_box_is_above_minimum_dimension(obj_meta))
        continue;
      at_least_one_confidence_is_within_range = true;

The nvinfer plugin is open source; you can add logging in ClassifyPostprocessor::fillClassificationOutput and ClassifyPostprocessor::parseAttributesFromSoftmaxLayers to debug.
In "float probability = outputCoverageBuffer[c];", this probability should be the classification inference result.

OK. Where can I find the second-level classifier's source code so I can debug it?

You can find ClassifyPostprocessor::fillClassificationOutput in
deepstream_sdk_v6.2.0_x86_64\opt\nvidia\deepstream\deepstream-6.2\sources\libs\nvdsinfer\nvdsinfer_context_impl_output_parsing.cpp
