🖥️ Environment
- DeepStream version: 7.0.0
- TensorRT version (C++/DeepStream): 8.6.1.6
- TensorRT version (Python): 10.13.2.6 (note: mismatch, but DS links against 8.6.1.6)
- CUDA version: 11.5 (from nvcc)
- GPU driver: 535.230.02
- GPU: NVIDIA RTX A4000 (16 GB)
- CUDA reported by driver: 12.2
- OS: Ubuntu 22.04.5 LTS, kernel 6.8.0-65-generic
Implementation
- Model: custom-trained YOLOv8 (Ultralytics export)
- Export command: `yolo export model=custom_yolov8.pt format=onnx opset=12`
- ONNX opset: 12
- Precision: INT8 (with a calibration table)
```
[property]
net-scale-factor=0.0039215697906911373
model-color-format=0
int8-calib-file=/home/proglint-ai10/interns/Optimization/quantisation/Quantization-YOLOv8/calib.table
onnx-file=checkout.onnx
labelfile-path=cash_checkout.txt
network-mode=1 # INT8
num-detected-classes=18
interval=0
gie-unique-id=3
process-mode=1
network-type=0
network-input-order=0 # NCHW
cluster-mode=2
maintain-aspect-ratio=1
scaling-filter=1
symmetric-padding=1
offset=114;114;114;
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=../nvdsinfer_custom_impl/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
nms-iou-threshold=0.50
topk=300
```
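For what it's worth, the `net-scale-factor` above is just 1/255 (the standard 0–255 → 0–1 pixel normalization, as in the DeepStream samples), so I don't believe preprocessing is the issue. A quick check:

```python
# net-scale-factor from the config is the usual 1/255 normalization:
# it maps 8-bit pixel values 0..255 into the range 0.0..1.0.
net_scale_factor = 0.0039215697906911373

# matches 1/255 to well within float32 precision
assert abs(net_scale_factor - 1 / 255) < 1e-8

print(255 * net_scale_factor)  # prints a value very close to 1.0
```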
```
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1129 Build engine failed from config file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:821 failed to build trt engine.
ERROR: NvDsInferContextImpl::buildModel() build engine file failed
ERROR: NvDsInferContextImpl::generateBackendContext() build backend context failed
ERROR: NvDsInferContextImpl::initialize() generate backend failed, check config file settings
WARN : error: Failed to create NvDsInferContext instance
NvDsInfer Error: NVDSINFER_CONFIG_FAILED
```
Can anyone help me debug this? I've followed everything I could find on GitHub and in the forums, but nothing seems to work and I don't know what I'm missing.
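One thing I plan to try next is building the same INT8 engine directly with `trtexec`, to get TensorRT's own (more verbose) error output and rule DeepStream out. A sketch, reusing the ONNX file and calibration cache paths from the config above (the engine filename is just my choice):

```shell
# Reproduce the engine build outside DeepStream with verbose TensorRT logging.
# --calib points at the INT8 calibration cache from the nvinfer config.
trtexec --onnx=checkout.onnx \
        --int8 \
        --calib=/home/proglint-ai10/interns/Optimization/quantisation/Quantization-YOLOv8/calib.table \
        --saveEngine=checkout_int8.engine \
        --verbose
```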