Generate calibration file

I’m trying to optimize my model to INT8, but I have not found a way to create a calibration file from my own dataset.
I referred to the sample below and created my program accordingly.
Is this correct?
Int8 Calibration In TensorRT

I also tried to convert my model to INT8 using my cache.
The engine was created, but its file size is the same as the FP16 engine's.
My commands are as below.

  trtexec --onnx=./myModel.onnx --saveEngine=./myModel_int8.engine --workspace=2048 --best --calib=./calibdata.cache

and

  trtexec --onnx=./myModel.onnx --saveEngine=./myModel_int8.engine --workspace=2048 --int8 --calib=./calibdata.cache
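
(One way to check whether INT8 kernels were actually selected is to rebuild with trtexec's --verbose flag and inspect the detailed build log. A minimal example, reusing the command above:)

  trtexec --onnx=./myModel.onnx --saveEngine=./myModel_int8.engine --workspace=2048 --int8 --calib=./calibdata.cache --verbose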

create_cache.py

import numpy as np
import cv2

# Videos to sample calibration frames from.
dataList = ["dataset_1.mp4",
            "dataset_2.mp4",
            "dataset_3.mp4"]
interval = 30                       # keep every 30th frame
resizedSize = (myWidth, myHeight)   # cv2.resize expects (width, height)
calibDataFile = 'calibdata.cache'

with open(calibDataFile, 'wb') as f:
    for fileName in dataList:
        print(fileName + " is loading")
        cap = cv2.VideoCapture(fileName)
        idx = 0
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            if idx % interval == 0:
                resizedFrame = cv2.resize(frame, resizedSize)
                resizedFrame = resizedFrame.astype(np.float32) / 255.0  # normalize to [0, 1]
                # BGR -> RGB, HWC -> CHW, then add a batch dimension.
                calib_data = resizedFrame[:, :, ::-1].transpose(2, 0, 1)
                calib_data = calib_data[np.newaxis]
                # Append the raw float32 tensor; make it contiguous first.
                np.ascontiguousarray(calib_data).tofile(f)
            idx += 1
        cap.release()
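
As a sanity check, the written file can be read back and reshaped to confirm the frame count and layout. A small sketch, using the same myHeight/myWidth placeholders as the script and assuming 3-channel frames:

import numpy as np

# The file is a flat stream of float32 CHW frames; recover the frame axis.
data = np.fromfile('calibdata.cache', dtype=np.float32)
frames = data.reshape(-1, 3, myHeight, myWidth)
print(frames.shape, frames.dtype, frames.min(), frames.max())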

Hi,

Do you mean the size of myModel_int8.engine?

INT8 doesn’t guarantee a smaller engine size, since the size is determined by the algorithms and kernel implementations TensorRT selects.

But INT8 should give you better performance.
Do you observe this compared to the FP16 engine?
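
For a quick comparison, trtexec can benchmark a saved engine directly and reports throughput and latency. A minimal example; the FP16 engine filename below is illustrative:

  trtexec --loadEngine=./myModel_fp16.engine
  trtexec --loadEngine=./myModel_int8.engine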

Thanks.

Hi
Yes, I mean the size of the INT8 engine.
I haven't run the evaluation yet.
And what about the calibration? Is my approach correct?

Hi,

Yes, you can use the sample to do the calibration.
Below is another example from the community for your reference:

Thanks.

Hi
Thank you for the information.
I tried my cache and could use it with trtexec.
But the batch size of my cache is one.
How can I divide my cache into batches?

Hi,

The calibration cache should be independent of the batch size.
Do you run into any issues when using the cache with different batch sizes?

If yes, could you share the model and cache file for our investigation?
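
For reference, the raw data file is not split into batches on disk; the calibrator divides it when feeding TensorRT, which is why the cache it produces is batch-independent. Below is a minimal sketch of that pattern with the TensorRT Python API, assuming a flat float32 file like the one written by create_cache.py above (the class name, constructor arguments, file names, and the use of pycuda are illustrative, not taken from the sample):

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

class RawFileCalibrator(trt.IInt8EntropyCalibrator2):
    # Feeds fixed-size batches from a flat float32 file of CHW frames.
    def __init__(self, data_file, batch_size, channels, height, width,
                 cache_file="calib_table.cache"):
        super().__init__()
        self.batch_size = batch_size
        self.cache_file = cache_file
        # Recover the frame axis from the flat stream.
        self.data = np.fromfile(data_file, dtype=np.float32).reshape(
            -1, channels, height, width)
        self.index = 0
        self.device_input = cuda.mem_alloc(
            batch_size * channels * height * width * np.dtype(np.float32).itemsize)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.data):
            return None  # no complete batch left; calibration ends
        batch = np.ascontiguousarray(
            self.data[self.index:self.index + self.batch_size])
        cuda.memcpy_htod(self.device_input, batch)
        self.index += self.batch_size
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Reuse an existing cache so calibration only runs once.
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)

With this pattern, the same data file works for any batch_size; only the number of complete batches changes.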

Thanks.

@AastaLLL Do I need to generate the calibration cache file on the Jetson device, or can I do it on an x86 server?

Hi,

Please do it on the Jetson, since the TensorRT engine is hardware-dependent.

Thanks.
