DeepStream implementation of general YoloV2 and YoloV3 as INT8 precision engine files

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** T4
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0.11
**• NVIDIA GPU Driver Version (valid for GPU only)** 440

How can I build general YoloV2 and YoloV3 into INT8 precision engine files in DeepStream? Also, are any calibration files necessary?

Hi @GalibaSashi,
please check /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo. There are YoloV2 and YoloV3 samples there, and an INT8 calibration file for YoloV3, but no INT8 calibration file for YoloV2.
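
For reference, here is roughly how the YoloV3 sample config wires in that calibration table; a sketch based on the DeepStream 5.0 objectDetector_Yolo sample (the exact file and table names may differ in your version):

```
# excerpt from something like config_infer_primary_yoloV3.txt
[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
# calibration table shipped with the YoloV3 sample
int8-calib-file=yolov3-calibration.table.trt7.0
```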

Thanks!

Hi @mchi
What should I do about an INT8 calibration file for YoloV2? Kindly do help.

Hi @GalibaSashi,
You should build it yourself.
To get good inference accuracy, you need to build the INT8 calibration file with pictures typical of your inference targets. That means that even when an INT8 calibration file ships with DeepStream, it may not work well for your data.

Thanks!

Hi @mchi
Can you give the steps for building a calibration table for YoloV2 or YoloV3? If one example is given, I can replicate it for the others.

Hi @GalibaSashi,
You can find the INT8 calibration API and a sample in the TensorRT samples, e.g. sampleINT8.
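
To give a feel for that API, here is a minimal skeleton of the calibrator interface a TensorRT 7 application implements (the class name and batch size are my own placeholders; sampleINT8 shows a full implementation):

```cpp
#include <cstddef>
#include "NvInfer.h"

// Skeleton of a TensorRT 7 INT8 entropy calibrator (placeholder only).
class SkeletonCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    int getBatchSize() const override { return 8; }  // calibration batch size

    // Called repeatedly during calibration: fill `bindings` with device
    // pointers to the next preprocessed batch and return true, or return
    // false once all calibration data has been consumed.
    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        return false;  // this skeleton supplies no real data
    }

    // Returning nullptr forces calibration to run from scratch; a real
    // calibrator would load a previously written cache here.
    const void* readCalibrationCache(std::size_t& length) override
    {
        length = 0;
        return nullptr;
    }

    // A real calibrator would persist `cache` to disk so later builds can
    // skip calibration.
    void writeCalibrationCache(const void* cache, std::size_t length) override {}
};
```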

Hi @mchi
I had already checked that; the samples there are only for classification models, and the sample datasets provided for creating the calibration table also have classification-type labels.

1. Can you give the specific steps to follow?
2. Can you tell me if the calibration table is absolutely necessary, and what the effects will be if I don't use it and run deepstream-app in INT8 mode?

Have you checked the TRT sample? It’s the right and straightforward way to build an INT8 calibration file.

If you want correct inference output with INT8, an INT8 calibration file is a must-have.
Alternatively, you could use FP16, which does not require calibration.

Yes, I have checked them. There are two: sampleINT8 and sampleINT8API, right? sampleINT8API specifically says it supports ResNet-50-type classification models, while in sampleINT8 the dataset given is a ubyte/binary file and is a classification dataset. There are no detection-type examples.

INT8 calibration has nothing to do with the network type; the only requirement is that the network can run in FP32 precision with TensorRT.
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation may help you understand more about INT8 calibration.

And, https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf
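
Putting the two options together, here is a sketch of how a TensorRT 7 build could select INT8 (calibrator required) or fall back to FP16 (no calibration); the function name is my own placeholder:

```cpp
#include "NvInfer.h"

// Sketch: pick INT8 with a calibrator, or FP16, which needs no calibration.
nvinfer1::IBuilderConfig* makeConfig(nvinfer1::IBuilder& builder,
                                     nvinfer1::IInt8Calibrator* calibrator)
{
    nvinfer1::IBuilderConfig* config = builder.createBuilderConfig();
    if (calibrator != nullptr && builder.platformHasFastInt8())
    {
        config->setFlag(nvinfer1::BuilderFlag::kINT8);
        config->setInt8Calibrator(calibrator);  // must-have for correct INT8 output
    }
    else if (builder.platformHasFastFp16())
    {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);  // no calibration table needed
    }
    return config;
}
```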

Oh ok thank you for your support. Cheers

Hi @mchi
The dataset given for calibration is in binary form and is used for classification; the labels are classification-type, not coordinates. I want to use a detection dataset for calibration.

I don’t understand your question. Do you want us to provide the calibration data?

No. The dataset given in the sample is train-images-idx3-ubyte, which is a binary file where the labels are classification labels. That cannot be used for calibration of detector models, right?

Yes, you need to prepare your own dataset, which can be any kind of image data, e.g. jpg, png, ppm, etc.

I have the dataset in jpg as well as png with its supporting label files… In what format should I supply it? Should I convert it to binary, or can I give it directly? How should I approach this?

Hi @mchi Kindly help

Hi @GalibaSashi,
You need to change the data-parsing part from parsing train-images-idx3-ubyte to parsing your own image data, e.g. png files.
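
As an illustration of that change, here is a hedged sketch of a calibrator that reads jpg/png files with OpenCV instead of the MNIST ubyte reader. The input size (3x416x416), 1/255 scaling, CHW layout, and single input binding below are assumptions; match them to your YOLO preprocessing:

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>
#include <cuda_runtime_api.h>
#include <opencv2/opencv.hpp>
#include "NvInfer.h"

// Sketch: an INT8 entropy calibrator fed from a list of jpg/png files.
class ImageFolderCalibrator : public nvinfer1::IInt8EntropyCalibrator2
{
public:
    ImageFolderCalibrator(std::vector<std::string> files, int batchSize)
        : mFiles(std::move(files)), mBatchSize(batchSize)
    {
        cudaMalloc(&mDeviceInput, mBatchSize * kInputVolume * sizeof(float));
    }
    ~ImageFolderCalibrator() override { cudaFree(mDeviceInput); }

    int getBatchSize() const override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        if (mCursor + mBatchSize > static_cast<int>(mFiles.size()))
            return false;  // all calibration images consumed

        std::vector<float> batch(mBatchSize * kInputVolume);
        for (int i = 0; i < mBatchSize; ++i)
        {
            cv::Mat img = cv::imread(mFiles[mCursor + i]);   // BGR, 8-bit
            cv::resize(img, img, cv::Size(kWidth, kHeight)); // assumed input size
            img.convertTo(img, CV_32FC3, 1.0 / 255.0);       // assumed scaling
            // HWC -> CHW copy, channel by channel
            std::vector<cv::Mat> channels(3);
            cv::split(img, channels);
            for (int c = 0; c < 3; ++c)
                std::memcpy(batch.data() + (i * 3 + c) * kWidth * kHeight,
                            channels[c].data, kWidth * kHeight * sizeof(float));
        }
        mCursor += mBatchSize;
        cudaMemcpy(mDeviceInput, batch.data(), batch.size() * sizeof(float),
                   cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;  // assumes a single input binding
        return true;
    }

    const void* readCalibrationCache(std::size_t& length) override
    {
        length = 0;
        return nullptr;  // always recalibrate in this sketch
    }
    void writeCalibrationCache(const void* cache, std::size_t length) override {}

private:
    static constexpr int kWidth = 416, kHeight = 416;
    static constexpr int kInputVolume = 3 * kWidth * kHeight;
    std::vector<std::string> mFiles;
    int mBatchSize{1};
    int mCursor{0};
    void* mDeviceInput{nullptr};
};
```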

If I don't give any --calib parameter, will there be any difference in performance, e.g. throughput, in trtexec? (Accuracy is not a concern.)

For performance, no difference. trtexec will create dummy calibration data, and with that the inference performance is the same.
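
For completeness, you can mimic that dummy calibration in your own build code by assigning placeholder dynamic ranges instead of running calibration; a sketch (the fixed [-1, 1] range is arbitrary, so accuracy will be meaningless, but throughput measurements remain valid):

```cpp
#include "NvInfer.h"

// Sketch: give every network tensor a dummy [-1, 1] dynamic range so an
// INT8 engine can be built without a calibration file (perf testing only).
void setDummyRanges(nvinfer1::INetworkDefinition& network)
{
    for (int i = 0; i < network.getNbInputs(); ++i)
        network.getInput(i)->setDynamicRange(-1.0f, 1.0f);

    for (int i = 0; i < network.getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network.getLayer(i);
        for (int j = 0; j < layer->getNbOutputs(); ++j)
            layer->getOutput(j)->setDynamicRange(-1.0f, 1.0f);
    }
}
```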