Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1
• TensorRT Version 7.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing it.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and a description of the function.)
I’m trying to create a calibration table for a custom yolov3 model. I searched for a solution and found this:
- Import TensorRT:
import tensorrt as trt
- Similar to test/validation files, use a set of input files as a calibration dataset. Make sure the calibration files are representative of the overall inference data. For TensorRT to use the calibration files, we need to create a batchstream object. The batchstream object is used to configure the calibrator.
NUM_IMAGES_PER_BATCH = 5
batchstream = ImageBatchStream(NUM_IMAGES_PER_BATCH, calibration_files)
- Create an Int8_calibrator object with input nodes names and batch stream:
Int8_calibrator = EntropyCalibrator(["input_node_name"], batchstream)
- Set INT8 mode and INT8 calibrator:
config.set_flag(trt.BuilderFlag.INT8)
config.int8_calibrator = Int8_calibrator
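For what it's worth, `ImageBatchStream` and `EntropyCalibrator` in the steps above are not classes shipped with TensorRT — they are helpers you write yourself. Below is a minimal pure-Python sketch of what they do (the class and parameter names are my assumptions, and image loading, preprocessing, and GPU memory handling are elided as comments), assuming the real calibrator would subclass `trt.IInt8EntropyCalibrator2`:

```python
# Sketch only: stand-ins for the helper objects named in the steps above.
# All names are assumptions; image loading, preprocessing, and device
# memory handling are elided as comments.
import os


class ImageBatchStream:
    """Chunks a list of calibration files into fixed-size batches."""

    def __init__(self, batch_size, calibration_files):
        self.batch_size = batch_size
        self.files = list(calibration_files)
        self.batch_idx = 0

    def next_batch(self):
        # Returns the next batch of file names, or [] when exhausted.
        # A real stream would load each file and preprocess it into a
        # contiguous float32 array matching the network input shape.
        start = self.batch_idx * self.batch_size
        self.batch_idx += 1
        return self.files[start:start + self.batch_size]


class EntropyCalibrator:
    """Skeleton of the calibrator interface TensorRT expects.

    In real code this would subclass trt.IInt8EntropyCalibrator2 (and
    call its __init__), and get_batch must return device pointers
    rather than file names.
    """

    def __init__(self, input_names, batchstream, cache_file="calibration.cache"):
        self.input_names = input_names
        self.stream = batchstream
        self.cache_file = cache_file

    def get_batch_size(self):
        return self.stream.batch_size

    def get_batch(self, names):
        batch = self.stream.next_batch()
        if not batch:
            return None  # tells TensorRT the calibration data is exhausted
        # Real code: copy the preprocessed batch to GPU memory and
        # return a list of device pointers, e.g. [int(self.device_input)].
        return batch

    def read_calibration_cache(self):
        # Reuse a previously written calibration table if one exists,
        # so calibration is only run once.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        # TensorRT hands back the serialized table; persist it to disk.
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

The `write_calibration_cache` output is the calibration table file that the builder produces during the INT8 build.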
My question is where do I put this code? In what file?