INT8 Calibration on Yolo model

I’m having trouble finding any documentation or resources on how to generate an INT8 calibration table. The best resource I can find is this thread (https://devtalk.nvidia.com/default/topic/1057147/tensorrt/tensorrt-yolo-int8-on-gtx-1080ti), where it’s stated that:

With the new EntropyCalibrator2, I believe you'll have to generate a new calibration table (for the same reason the code initially failed).

You can do that by providing paths to your own images for calibration as mentioned here: https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/yolo#note (bullet point #1)

However, that link no longer works because Yolo support has been rolled directly into Deepstream. Looking back at a previous branch shows that bullet point #1 says:

“If you want to generate your own calibration table, use the calibration_images.txt file to list of images to be used for calibration and delete the default calibration table.”

But this is not helpful, as it doesn’t indicate where or how the calibration_images.txt file should be passed, and in any case this documentation (and the trt-yolo-app) is outdated.

Basically, my question is: what is the current, supported method for generating a calibration file for YOLO using DeepStream SDK 4, and are there any resources that detail the process and the data required to do so?

Hi,

The trt-yolo-app was compatible with DS 3.0 and is no longer supported on the GitHub repo. We plan to add support for a calibration app for YOLO models compatible with DS 4.0 in a future release.

If you are interested, you can modify the sources of the old trt-yolo-app to be compatible with DS 4.0 and use that, since the code is still available on GitHub (in previous releases).
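For background, the calibration flow the old app implements boils down to: read image paths from calibration_images.txt, preprocess them into fixed-size batches, and feed each batch to TensorRT’s builder through an entropy calibrator; when the list runs out, calibration ends and the table is written. A minimal sketch of just the image-list batching side — hypothetical helper names, no TensorRT dependency, not the actual trt-yolo-app source:

```python
# Hypothetical sketch: how a calibrator might consume a
# calibration_images.txt list and serve it in fixed-size batches.

def load_image_list(list_file):
    """Return the non-empty, non-comment lines of a calibration list file."""
    with open(list_file) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith("#")]

def calibration_batches(paths, batch_size):
    """Yield successive full batches of image paths; a trailing partial
    batch is dropped, since INT8 calibration runs on complete batches."""
    for i in range(0, len(paths) - batch_size + 1, batch_size):
        yield paths[i:i + batch_size]
```

In the real app, each yielded batch would be decoded, resized to the network input resolution, and copied to device memory before being handed to the builder.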

Hi, thank you for replying.

The issue I’m having is that the DS 4.0 SDK ships with a prebuilt yolov3 calibration table for the pretrained model, yet from what you’re saying, DS 4.0 has no native or “out of the box” ability to generate that table. In that case, including the table is misleading without some explanation of why and how it was produced. I’ve spent quite a lot of time poring over the documentation, and even some of the DS 4.0 code, looking for a parameter that would accept a list of images for generating a calibration table.

Even with the DS 3.0 trt-yolo-app it isn’t obvious how to go about generating that table. The documentation just says:

“If you want to generate your own calibration table, use the calibration_images.txt file to list of images to be used for calibration and delete the default calibration table.”

But where should that txt file be placed? How is it referenced? Which parameter is configured to point to it?

These are the questions I was trying to ask in my original post. The documentation for this process is non-existent, and it’s difficult and time-consuming to decipher without NVIDIA’s help.

Hi,

Yes, currently there is no support in the SDK to perform calibration. Regarding trt-yolo-app,

You can check out this commit of the repo - https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/566dd4551e0ba160bcf26789eb8b02fc3f3e591b

Here’s the calibration_images.txt - https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/566dd4551e0ba160bcf26789eb8b02fc3f3e591b/yolo/data/calibration_images.txt

It’s referenced here in the config file - https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/566dd4551e0ba160bcf26789eb8b02fc3f3e591b/yolo/config/yolov3.txt#L45

Once you build the app, you can get a list of all the config options of the app using

$ trt-yolo-app --help

as specified in the instructions here - https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/566dd4551e0ba160bcf26789eb8b02fc3f3e591b/yolo#trt-yolo-app
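On the “delete the default calibration table” instruction: an INT8 calibrator first tries to read an existing calibration cache, and the builder only runs calibration (and writes a new table) when no cache is found — so deleting the shipped table is what forces regeneration. A sketch of that cache read/write half, with hypothetical names and no TensorRT dependency (in the real app these would be the readCalibrationCache()/writeCalibrationCache() overrides on IInt8EntropyCalibrator2):

```python
import os

class CalibrationCache:
    """Hypothetical sketch of the cache half of an INT8 calibrator:
    read is tried first; returning nothing makes the builder run
    calibration from scratch and then call write with the new table."""

    def __init__(self, cache_path):
        self.cache_path = cache_path

    def read_calibration_cache(self):
        # Returning None tells the builder to calibrate from scratch.
        if not os.path.exists(self.cache_path):
            return None
        with open(self.cache_path, "rb") as f:
            return f.read()

    def write_calibration_cache(self, cache):
        # Called after calibration finishes; the written file is the
        # "calibration table" shipped with the sample.
        with open(self.cache_path, "wb") as f:
            f.write(cache)
```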

Hi NvCJR,

I seem to be able to build a calibration table, but whenever I try to use the table or the generated .engine file I get this error:

regionFormat.cpp:65: size_t nvinfer1::RegionFormatB::memorySize(int, const nvinfer1::Dims&) const: Assertion `batchSize > 0' failed

Which version of TensorRT did you build the trt-yolo-app with?

I’m using TensorRT v5.0.2.6

I also tried with TensorRT v5.1.5.0, same results.

Hi,

Sorry for the delay. We are tracking this request and will provide a solution for calibration in a future release; please stay tuned.

Hi, has calibration support for yolov3 in DeepStream been released?

@CJR.
Thanks for the reply. But I found that in the new release the trt-yolo-app no longer exists.
How can I create my own calibration file for yolov3 in DeepStream 5?
There’s a bundled file, “yolov3-calibration.table.trt7.0” — how can I generate my own version? Thanks!