How to generate INT8-based engine files?

NVIDIA-SMI 525.105.17
Driver Version: 525.105.17
CUDA Version: 12.0
deepstream-6.2

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06    Driver Version: 525.125.06    CUDA Version: 12.0   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce …     Off | 00000000:01:00.0  On |                  N/A |
|  0%   48C    P8    35W / 370W |   3825MiB / 12288MiB |      4%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce …     Off | 00000000:02:00.0 Off |                  N/A |
|  0%   46C    P8    20W / 370W |      8MiB / 12288MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |

I'm using DeepStream test5.

How do I generate the int8-calib-file for INT8-based engine files?

config_infer.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_tms/models/yolov5m.cfg
model-file=/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_tms/models/yolov5m.wts
#model-engine-file=/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_tms/model_b4_gpu0_fp16.engine
int8-calib-file=calib.table
labelfile-path=rac_labels_v1.txt
batch-size=1
network-mode=1
num-detected-classes=18
interval=0
gie-unique-id=1
process-mode=1
network-type=1
cluster-mode=4
#cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_tms/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#filter-out-class-ids=1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16;17;18;19;20;21;22;23;24;25;26;27;28;29;30;31;32;33;34;35;36;37;38;39;40;41;42;43;44;45#;46;47;48;49;50;51;52;53;54;55;56;57;58;59;60;61;62;63>
[class-attrs-all]
#pre-cluster-threshold=0
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
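
(For reference, the INT8-related properties in this config are the two shown again below. In DeepStream's nvinfer configuration, network-mode selects the precision (0 = FP32, 1 = INT8, 2 = FP16), and int8-calib-file names the calibration table: it is read if it already exists, otherwise the custom library is expected to produce it during the first engine build. The comment lines are annotations only.)

# 0 = FP32, 1 = INT8, 2 = FP16
network-mode=1
# read if present, otherwise generated during the first engine build
int8-calib-file=calib.table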

You can refer to the link below: INT8Calibration.md.

I followed the steps mentioned in INT8Calibration.md:

CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo

mkdir calibration

for jpg in $(ls -1 val2017/*.jpg | sort -R | head -1000); do
cp ${jpg} calibration/;
done

realpath calibration/*jpg > calibration.txt

export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1
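
(These exports only affect the current shell session, so, as a sketch assuming the file layout from the guide (calibration.txt sitting next to the DeepStream config), the app has to be launched in that same session and from that same directory:)

# same shell session, run from the directory containing calibration.txt and the config
export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1
deepstream-app -c deepstream_app_config.txt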

After executing all of these steps, I am not able to find calib.table. Where does it get saved?

The steps mentioned there do not explain how calib.table is created. They just grab 1000 images from the dataset, put them inside a folder called calibration, and create a calibration.txt with the absolute paths.
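
(For context, the usual mechanism behind this is the standard TensorRT INT8 calibrator pattern, sketched below. This is illustrative only, with assumed class and member names, not the project's exact code: the custom library feeds the listed images to TensorRT via getBatch() while the engine is being built with network-mode=1, and TensorRT then calls writeCalibrationCache(), which is the point where calib.table is written. So the table only appears after a successful engine build, not as a separate preparation step.)

// Illustrative sketch of the generic TensorRT calibrator pattern.
// Class and member names here are assumptions, not the DeepStream-Yolo code.
#include <NvInfer.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

class Int8Calibrator : public nvinfer1::IInt8EntropyCalibrator2 {
public:
    explicit Int8Calibrator(std::string tablePath) : mTablePath(std::move(tablePath)) {}

    int getBatchSize() const noexcept override { return 1; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) noexcept override {
        // Load and preprocess the next calibration image (this is where OpenCV
        // is used), copy it to device memory, and point bindings[0] at it.
        // Return false once every image in the list has been consumed.
        return false;
    }

    const void* readCalibrationCache(size_t& length) noexcept override {
        // If calib.table already exists, it is reused and calibration is skipped.
        std::ifstream in(mTablePath, std::ios::binary);
        mCache.assign(std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>());
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }

    void writeCalibrationCache(const void* cache, size_t length) noexcept override {
        // calib.table is written here, i.e. only after TensorRT has finished
        // calibration during the engine build.
        std::ofstream out(mTablePath, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    std::string mTablePath;
    std::vector<char> mCache;
};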

May I know how to confirm whether calib.table was created or not?

If it's not created, are there any extra steps I need to do?

Did you run this command?

deepstream-app -c deepstream_app_config.txt

You can also directly ask the author of this project for questions related to that.

Yes, I used this command: deepstream-app -c deepstream_app_config.txt

But on running this command, the following error comes up:

File does not exist: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_tms/calib.table
OpenCV is required to run INT8 calibrator

deepstream_tms_app: yolo.cpp:129: nvinfer1::ICudaEngine* Yolo::createEngine(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*): Assertion `0' failed.
Aborted (core dumped)

The "file does not exist" error comes for calib.table, which is right, because calib.table is not created and I can't find where it would have been saved.
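
(One thing worth checking, given the "OpenCV is required to run INT8 calibrator" line in the log above: the custom library has to be compiled with OpenCV support for calibration to run at all. A sketch of a clean rebuild and re-run, assuming the paths and CUDA version from earlier in this thread and that the Makefile provides a clean target:)

cd /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_tms
make -C nvdsinfer_custom_impl_Yolo clean
CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo
export INT8_CALIB_IMG_PATH=calibration.txt
export INT8_CALIB_BATCH_SIZE=1
deepstream-app -c deepstream_app_config.txt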

Because an error occurs, the file cannot be generated. The first time you run this command successfully, the file will be generated.
Since it's open source and the model is trained by yourself, please resolve this error first.

How do I generate the file?
I followed the steps mentioned in INT8Calibration.md, but the file is not being generated.

As I mentioned before, you need to run the following command successfully to generate it.

deepstream-app -c deepstream_app_config.txt

But there was an error during your run, as you posted. Please resolve this error first, as this is an open-source project. It may be a problem with your own model or with the open-source project code.

But the error is that the file calib.table does not exist.
calib.table needs to be generated before I run this command:
deepstream-app -c deepstream_app_config.txt

The first time you run this command, the file will be generated. You can add some log info around the line below to debug why it isn't generated:
https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/nvdsinfer_custom_impl_Yolo/yolo.cpp#L159
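
(As an illustration of the kind of logging meant here, a hypothetical helper is sketched below; the real variable and function names around that line in yolo.cpp will differ, so adapt it to the actual INT8 branch of the engine-creation code:)

// Hypothetical logging helper; adapt the names to the actual code in yolo.cpp.
#include <cstdio>
#include <cstdlib>

static void logInt8CalibInputs(const char* calibTablePath)
{
    const char* imgList = std::getenv("INT8_CALIB_IMG_PATH");
    const char* batch   = std::getenv("INT8_CALIB_BATCH_SIZE");
    std::printf("INT8 calib table path: %s\n", calibTablePath ? calibTablePath : "(null)");
    std::printf("INT8_CALIB_IMG_PATH:   %s\n", imgList ? imgList : "(unset)");
    std::printf("INT8_CALIB_BATCH_SIZE: %s\n", batch ? batch : "(unset)");
}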

I'm facing the same issue. I have converted a face recognition model to .trt and modified the config file according to the .trt file I generated. When I run the app with the config (.txt) file, engine file creation shows the error below:
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.293110370 9137 0xaaaae90980f0 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 2]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::65] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)
Please support.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Judging from the log, your problem is likely related to your environment setup. Please file a new topic and describe your environment information in detail.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.