Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
1080Ti
• DeepStream Version
DeepStream 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.0.3.4
• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 510.108.03
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed to reproduce.)
I ran a secondary (second-level) classifier that I defined myself and got a confidence value greater than 1. How should I solve this? Could it be caused by a mistake I made when converting the ONNX model to TensorRT?
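A common cause of classifier confidences above 1 is that the exported ONNX model ends in raw logits rather than a Softmax layer, while the nvinfer classifier parser treats the output tensor as ready-made probabilities. A minimal sketch (plain Python, no DeepStream or ONNX dependency, with hypothetical logit values) of the difference:

```python
import math

def softmax(logits):
    """Numerically stable softmax: maps raw logits to probabilities in [0, 1]."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Raw logits from a classification head can be arbitrarily large.
# If the engine's output layer is logits, the "confidence" read by the
# classifier parser can exceed 1, as reported above.
logits = [5.2, 1.1, -0.3]   # hypothetical example values
print(max(logits))          # greater than 1

# After softmax, every value lies in [0, 1] and the vector sums to 1.
probs = softmax(logits)
print(probs)
```

One way to check is to open the ONNX file in a model viewer and see whether the final node is a Softmax; if it is not, re-exporting the model with softmax applied to the output (or normalizing in a custom output parser) should bring the confidences back into [0, 1].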
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
This is my secondary-GIE config:
[property]
gpu-id=0
net-scale-factor=0.01735207357
model-engine-file=/home/incar/tms/deepstream-6.0/sources/apps/sample_apps/ivideoframe/weights/convnext_base_in22ft1k.onnx.engine
labelfile-path=/home/incar/tms/source/gb/alllable.txt
force-implicit-batch-dim=1
batch-size=1
model-color-format=1
process-mode=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
is-classifier=1
#output-blob-names=output
classifier-async-mode=1
classifier-threshold=0.51
input-object-min-width=20
input-object-min-height=20
operate-on-gie-id=1
operate-on-class-ids=0;
classifier-type=carcolor
gie-unique-id=2
num-detected-classes=882
#offsets=133.675;116.28;103.53
offsets=123.675;116.28;103.53
#scaling-filter=0
#scaling-compute-hw=0
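As a preprocessing sanity check: nvinfer applies y = net-scale-factor * (x - offset) per channel to 0-255 pixel values, and the values in the config above appear to encode standard ImageNet normalization with a single shared std (DeepStream 6.0's net-scale-factor is a scalar, so per-channel stds cannot be expressed). A small arithmetic check of that assumption:

```python
# nvinfer preprocessing: y = net-scale-factor * (x - offset), x in [0, 255]
NET_SCALE = 0.01735207357             # from the config above
OFFSETS = [123.675, 116.28, 103.53]   # from the config above

# torchvision-style ImageNet preprocessing (assumed target): (x/255 - mean) / std
MEANS = [0.485, 0.456, 0.406]
STD = 0.226   # single-std approximation; the usual per-channel stds are
              # (0.229, 0.224, 0.225), which a scalar net-scale-factor cannot express

# offsets should equal mean * 255 for each channel
for off, mean in zip(OFFSETS, MEANS):
    assert abs(off - mean * 255) < 1e-6

# net-scale-factor should equal 1 / (std * 255)
assert abs(NET_SCALE - 1.0 / (STD * 255)) < 1e-6
print("preprocessing constants match ImageNet normalization")
```

If these constants match what the model was trained with, the preprocessing side is likely fine, which points back at the missing softmax (or the ONNX-to-TensorRT export) as the cause of confidences above 1.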