Creating calibration file for custom YOLOv8 model in DeepStream 7.1

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 3090
• DeepStream Version DeepStream SDK 7.1
• NVIDIA GPU Driver Version (valid for GPU only) 560.35.03
I have set up all the requirements for running multistream object detection with the sample deepstream_test_3.py provided by NVIDIA, using a Docker container with nvcr.io/nvidia/deepstream:7.1-triton-multiarch as my base image. Now I want to use a custom YOLOv8 model to do object detection. Kindly guide me through the process.
Also, how can I generate the cal_trt.bin and engine file inside the DeepStream container for my custom YOLOv8 model using Python?

Please refer to this FAQ for how to use YOLOv8 in DeepStream. Please refer to this topic for how to create a calibration file.

Thank you for the reference; I would like to clarify further. I am a newbie to DeepStream, and I am actually looking for a much more generalised approach to running inference with a custom model. As of now I am building the pipeline on top of deepstream_test_3.py for multistream inference in a Docker container. If you can guide me through how to use a custom model in a generalised way, and not just for YOLO, it will be helpful. Also, please let me know how I can generate the cal_trt.bin file mentioned in the config file. Thanks in advance.

If using a new model, you only need to modify the nvinfer configuration file dstest3_pgie_config.txt. cal_trt.bin is for network-mode=1, which means INT8 inference. If you don't have cal_trt.bin, you can set network-mode=0 (FP32) or 2 (FP16). If you want to generate cal_trt.bin, please refer to my last comment.
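For reference, here is a minimal sketch of where that configuration file plugs into the Python pipeline, following the pattern used in deepstream_test_3.py (the element and file names below are taken from that sample); with this wiring, switching to a custom model is only a matter of editing the config file:

```
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# deepstream_test_3.py creates the primary inference element like this and
# points it at the nvinfer config file; all model settings live in that file.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "dstest3_pgie_config.txt")
```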

DeepStream-Yolo/docs/INT8Calibration.md at master · marcoslucianops/DeepStream-Yolo · GitHub describes how to perform calibration specifically for YOLO models, and the file generated there is a .table, not a .bin file. How can I perform calibration and generate an engine file and a .bin file if I am using a transformer-based or any other custom model in general? Also, do I need to add the following to my config file if I am using a custom model?

For a custom detector:

parse-bbox-func-name=NvDsInferParseCustomModel
custom-lib-path=/path/to/this/directory/libnvds_infercustomparser.so

  1. As the link shows, for the first run, please use the following cfg.
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
network-mode=0

After running, DeepStream will generate the TRT engine model_b1_gpu0_fp32.engine. If you want to generate calib.table, please use the following cfgs (a Python sketch of generating the calibration file directly with the TensorRT API follows after this list).

model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calib.table
network-mode=1
  2. The nvinfer plugin and the low-level lib are open source. The default postprocess function DetectPostprocessor::parseBoundingBox is for the resnet10 model. If the custom model is not a resnet10-style model, please add the custom postprocess cfgs and function. Please refer to the cfg and function.
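To address the earlier question about doing this from Python inside the container: below is a minimal, hedged sketch of generating an INT8 calibration cache and engine directly with the TensorRT Python API (DeepStream 7.1 ships with TensorRT 10.x). The 1x3x640x640 input shape, the file paths, and the random placeholder batches are illustrative assumptions; feed real preprocessed frames in practice. Note that config.int8_calibrator is deprecated (though still functional) in TensorRT 10.x.

```
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed batches to TensorRT and persists the cache file."""
    def __init__(self, batches, cache_file):
        super().__init__()
        self.batches = iter(batches)   # iterable of NCHW float32 arrays
        self.cache_file = cache_file   # this file becomes your int8-calib-file
        self.device_input = None

    def get_batch_size(self):
        return 1

    def get_batch(self, names):
        try:
            batch = np.ascontiguousarray(next(self.batches))
        except StopIteration:
            return None                # calibration data exhausted
        if self.device_input is None:
            self.device_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.device_input, batch)
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)

# Placeholder data: replace with real frames preprocessed like your pipeline.
calib_batches = [np.random.rand(1, 3, 640, 640).astype(np.float32)
                 for _ in range(8)]

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TRT 10.x
parser = trt.OnnxParser(network, logger)
with open("/workspace/Primary_Detector/yolov8n.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = EntropyCalibrator(calib_batches, "cal_trt.bin")
engine = builder.build_serialized_network(network, config)
with open("/workspace/Primary_Detector/yolov8n_int8.engine", "wb") as f:
    f.write(engine)
```

After this runs, point int8-calib-file at the generated cal_trt.bin (or calib.table; the name is arbitrary) and set network-mode=1 in the nvinfer config.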

As I have mentioned, I don't want to generate calib.table; rather, I want to generate calib.bin.

calib.table and calib.bin are the same thing: both are the int8-calib-file. Only the file name differs.

So running DeepStream itself will generate the engine and calib file just by specifying them in the config file? Is that what you mean?

There are two steps for generating the calibration file.

  1. Use a non-INT8 model to generate the TRT engine.
  2. Use the INT8 model to generate the int8-calib-file.
    Please refer to my last two comments. At step 1, you don't need to set int8-calib-file; after running, DeepStream will not generate an int8-calib-file, only the engine. At step 2, as the doc shows, you need to set int8-calib-file even though there is no calib file yet. After running, DeepStream will generate the int8-calib-file. Please refer to this code for how to generate the calibration file.

As I have mentioned before, the document you are providing is specifically for YOLO models. I would like to know a more generalised method for any custom model.

If using a custom model, there are two steps as well.

  1. Please refer to my second comment for how to generate the engine for a new model.
  2. This doc also applies to a new model. In particular, config_infer_primary_yoloV8.txt needs to be adjusted for the new model, and in step 3 of the doc, please use the dataset of the new model.
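To make "use the dataset of the new model" concrete, here is a hypothetical helper that turns your own calibration images into the NCHW float32 batches a Python calibrator (like the sketch above) would consume. The 640x640 size, RGB order, and 1/255 scaling mirror the YOLOv8 config quoted in this thread (net-scale-factor=0.00392156862745098, model-color-format=0); change all of them to match your model's preprocessing.

```
import glob
import cv2
import numpy as np

def load_calib_batches(img_dir, size=(640, 640), scale=1.0 / 255.0):
    """Yield batches of 1 image each, preprocessed to match the model input."""
    for path in sorted(glob.glob(f"{img_dir}/*.jpg")):
        img = cv2.imread(path)                            # BGR, HWC, uint8
        img = cv2.resize(img, size)
        img = img[:, :, ::-1].astype(np.float32) * scale  # BGR -> RGB, scale to [0,1]
        yield np.ascontiguousarray(img.transpose(2, 0, 1)[None, ...])  # 1x3xHxW
```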

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
onnx-file=/workspace/Primary_Detector/yolov8n.onnx
model-engine-file=/workspace/Primary_Detector/yolov8n_fp32.engine
labelfile-path=/workspace/Primary_Detector/labels.txt
#int8-calib-file=/workspace/Primary_Detector/cal_trt.bin
batch-size=4
process-mode=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
cluster-mode=2

[class-attrs-all]
pre-cluster-threshold=0.2
topk=20
nms-iou-threshold=0.5

This is my config.txt, and the following is the error I got.

nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:46 Cannot access ONNX file '/workspace/Primary_Detector/yolov8n.onnx '
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:673 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:518 failed to build network.
0:00:05.518355015 30 0x5e431a82a5e0 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:05.762268276 30 0x5e431a82a5e0 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2213> [UID = 1]: build backend context failed
0:00:05.762290509 30 0x5e431a82a5e0 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:05.762325098 30 0x5e431a82a5e0 WARN nvinfer gstnvinfer.cpp:914:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:05.762328705 30 0x5e431a82a5e0 WARN nvinfer gstnvinfer.cpp:914:gst_nvinfer_start: error: Config file path: custom_model_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(914): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: custom_model_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

Sorry for the late reply! From the error, the nvinfer plugin can't access the model. Could you use the "ls" command to make sure the model exists?
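One detail worth checking, going only by the quoted error: the path is printed as 'yolov8n.onnx ' with a space before the closing quote, which is what you would see if the onnx-file line in the config has trailing whitespace. A quick illustrative check from Python (not from the thread):

```
import os

path = "/workspace/Primary_Detector/yolov8n.onnx"
print(os.path.exists(path))        # False means the file really is missing
print(os.path.exists(path + " "))  # a trailing space in the config makes nvinfer look up this path
```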