How to correctly calibrate a YOLO model in DeepStream

Hi there. My setup is the following:

Jetson Xavier
DeepStream 5.0
JetPack 4.4
TensorRT 7.1.3
CUDA 10.2

I was trying to calibrate my own YOLOv3 model and generate the INT8 engine. I followed, step by step, the method described here: How can I generating the correct int8 calibration table with parsing YOLO model? · Issue #747 · NVIDIA/TensorRT · GitHub

I was able to compile the code, generate my own calibration cache and INT8 engine file, and run inference. However, the generated engine produces no detection results. When I replaced my calibration cache with the provided example “yolov3-calibration.table.trt7.0”, the generated INT8 engine detected objects correctly.

So I’m wondering if anyone can tell me how “yolov3-calibration.table.trt7.0” was generated in the first place, or what might have gone wrong in my process.

Let me know if there is anything else you need to know. Any advice is appreciated!

Thanks

Hi,

Please note that calibration needs to be done on the same platform with the same TensorRT version.
Did you also run the calibration on the Xavier with JetPack 4.4?

Thanks.

Yes, I calibrated and deployed the model on the same machine, a Xavier with JetPack 4.4.

I have also narrowed the problem down to the following lines. When I replace the custom calibration file “test.calibration” with the provided “yolov3-calibration.table.trt7.0”, the detection results are correct:

// Create the INT8 calibrator with the image list and the output cache path
Int8EntropyCalibrator calibrator1(1, "image_list_test.txt", "", "test.calibration", m_InputSize, m_InputH, m_InputW, m_InputBlobName, m_NetworkType);

// Configure the builder for INT8 and attach the calibrator
auto config = builder->createBuilderConfig();
config->setAvgTimingIterations(1);
config->setMinTimingIterations(1);
config->setMaxWorkspaceSize(1 << 20);
config->setFlag(nvinfer1::BuilderFlag::kINT8);
config->setInt8Calibrator(&calibrator1);

// Build the engine
std::cout << "Building the TensorRT Engine..." << std::endl;
nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
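
For reference, my Int8EntropyCalibrator follows the usual TensorRT calibrator pattern. Here is a minimal sketch of the relevant overrides, assuming nvinfer1::IInt8EntropyCalibrator2 (the member and helper names are illustrative, not my actual code):

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

class Int8EntropyCalibratorSketch : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    int getBatchSize() const override { return m_BatchSize; }

    // Copy the next preprocessed batch to device memory and bind it to
    // the network input (names[0] is the input blob, e.g. "data").
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        if (!loadNextBatch(m_HostBatch)) return false; // no more calibration images
        cudaMemcpy(m_DeviceInput, m_HostBatch.data(),
                   m_HostBatch.size() * sizeof(float), cudaMemcpyHostToDevice);
        bindings[0] = m_DeviceInput;
        return true;
    }

    // Return an existing cache so TensorRT can skip recalibration.
    const void* readCalibrationCache(size_t& length) override
    {
        m_Cache.clear();
        std::ifstream in(m_CachePath, std::ios::binary);
        if (in)
            m_Cache.assign(std::istreambuf_iterator<char>(in),
                           std::istreambuf_iterator<char>());
        length = m_Cache.size();
        return m_Cache.empty() ? nullptr : m_Cache.data();
    }

    // Persist the cache TensorRT produced (this becomes "test.calibration").
    void writeCalibrationCache(const void* cache, size_t length) override
    {
        std::ofstream out(m_CachePath, std::ios::binary);
        out.write(static_cast<const char*>(cache), length);
    }

private:
    bool loadNextBatch(std::vector<float>& batch); // reads and preprocesses images
    int m_BatchSize{1};
    void* m_DeviceInput{nullptr};
    std::vector<float> m_HostBatch;
    std::vector<char> m_Cache;
    std::string m_CachePath{"test.calibration"};
};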

Hi,

Please note that there is a TensorRT engine file parameter in DeepStream.
For example, model-engine-file=[model/name]_b1_fp16.engine.

If the engine file already exists, DeepStream will deserialize it instead of re-converting it from the model.
Is it possible that an engine generated from the other calibration file already exists?
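
For example, in the nvinfer configuration file (the file names here are illustrative):

[property]
# If this engine file already exists, DeepStream deserializes it directly
# and skips rebuilding from the model and calibration table.
model-engine-file=model_b1_int8.engine
int8-calib-file=test.calibration
network-mode=1   # 0=FP32, 1=INT8, 2=FP16

Deleting any stale engine file forces DeepStream to regenerate it.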

Thanks.

Thanks for your suggestion. I can confirm that there is no pre-existing engine file though.

Hi,

Could you share the calibration data with us?
We would like to inspect the data for more information.

Thanks.

Hi,
Here are two calibration caches I have generated.
test1.calibration is for a YOLOv3 model with only one class.
test2.calibration is for a YOLOv3 model with 9 classes.
Both generated engines produce no detection results.
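
For reference, the cache is a plain-text file: a version header followed by one hex scale per tensor. It looks roughly like this (the layer names and values below are made up for illustration):

TRT-7103-EntropyCalibration2
data: 3c010a14
(Unnamed Layer* 0) [Convolution]_output: 3d8899a6
...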

Thanks.

test1.calibration (12.4 KB) test2.calibration (12.4 KB)

In case anyone is interested, I figured it out. The calibration pictures need to be normalized. Evidently the DeepStream YOLO implementation applies the normalization outside the network object, so we need to apply it again in the calibration step.
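
Concretely, the fix was to scale the pixel values when preparing each calibration batch. A minimal sketch of the preprocessing, assuming OpenCV (the function name and details are illustrative; channel order should also match the runtime model-color-format):

#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative preprocessing for one calibration image: resize to the
// network input, convert to float, and scale to [0, 1] so calibration
// sees the same values DeepStream feeds at runtime (net-scale-factor = 1/255).
std::vector<float> preprocessForCalibration(const cv::Mat& bgr, int netW, int netH)
{
    cv::Mat resized, floatImg;
    cv::resize(bgr, resized, cv::Size(netW, netH));
    resized.convertTo(floatImg, CV_32FC3, 1.0 / 255.0); // the normalization that was missing

    // Pack HWC -> CHW as the TensorRT input binding expects.
    std::vector<float> chw(3 * netH * netW);
    std::vector<cv::Mat> channels;
    for (int c = 0; c < 3; ++c)
        channels.emplace_back(netH, netW, CV_32F, chw.data() + c * netH * netW);
    cv::split(floatImg, channels);
    return chw;
}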

Hi,

Thanks for sharing the status.

The preprocessing must be aligned between calibration and runtime (DeepStream).
In DeepStream, a default normalization is applied.

You can find this information below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html
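
For example, the relevant nvinfer parameters (the values shown mirror the sample YOLO config):

[property]
# DeepStream scales each pixel before inference: y = net-scale-factor * (x - mean),
# where mean comes from offsets (per channel) if set.
net-scale-factor=0.0039215697906911373   # = 1/255
#offsets=0;0;0
model-color-format=0                     # 0=RGB, 1=BGR, 2=GRAY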

Thanks.

Hi,

I tried calibration using this code:
(https://github.com/enazoe/yolo-tensorrt)
But this calibration code does not include normalization.
Could you share your normalization code with me?

Thanks.

Hi,
I can’t share the code with you, but you can probably tweak the getBatch code to get what you want.
Also, the moderator said that there is actually no need to normalize the image again, so you may not be able to replicate my result.
Best.

Hi,

We can’t find a calibration implementation in the GitHub source you shared.
Could you point us to the detailed source first?

Thanks.

Thanks for your response.

I copied the calibration code from here:
(https://github.com/enazoe/yolo-tensorrt/blob/master/modules/calibrator.cpp)

Then I merged this calibrator into the DeepStream YOLO sample:

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/yolo.cpp

std::cout << "Building the TensorRT Engine..." << std::endl;
// Calibrator: batch size 1, image list "./coco.txt", cache "coco.table",
// input volume 3 x 608 x 608 = 1108992, input blob "data", network "yolov3"
Int8EntropyCalibrator calibrator(1, "./coco.txt", "", "coco.table", 1108992, 608, 608, "data", "yolov3");
nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
config->setMaxWorkspaceSize(1 << 30);
config->setFlag(nvinfer1::BuilderFlag::kINT8);
config->setInt8Calibrator(&calibrator);
nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);

And I got the calibration table.
But there were no detection results…

OMG…
I got a result!!
Thanks…!!

Hi,

Good to know this.
Did you also hit the same normalization issue as srsjd?

Thanks.