I get the same error when I try to run INT8 on any of the YOLO models.
I went over to the “objectDetector_SSD” folder and ran that example in INT8, and it completed with no errors.
There is a note in that readme file that says:
NOTE: To use INT8 mode, the INT8 calibration file for the SSD model needs to be
provided along with changing the network-mode to 1 in config_infer_primary_ssd.txt.
Refer to sampleUffSSD for running the sample in INT8 mode. The sample writes the
calibration cache to file “CalibrationTableSSD”.
I don’t know if that also pertains to the YOLO models?
By the way, the original issue I had when I started this thread was because I didn’t realize that you just have to change and save the “config_infer_primary” files. They don’t need to be run on their own. DUH
You need to run, from the command line:
deepstream-app -c deepstream_app_config_yoloXXX.txt.
config_infer_xxx.txt is referenced from deepstream_app_config*.txt and passed to the nvinfer plugin; you cannot pass it to deepstream-app directly.
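For reference, the link between the two files lives in the [primary-gie] group of the app config, e.g. in deepstream_app_config_yoloV3.txt (a minimal sketch using the standard group and key names; paths are illustrative):

[primary-gie]
enable=1
config-file=config_infer_primary_yoloV3.txt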
Regarding the “Calibration Table”: each model has its own INT8 table, which cannot be shared with other model files. YOLOv3’s calibration table is for YOLOv3 only; it cannot be shared with YOLOv3-tiny, YOLOv2, or YOLOv2-tiny.
Another thing is the TRT version. All of DS 4.0’s INT8 calibration files are based on TRT 5.1.x, so please ensure your TRT version matches.
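One way to check the installed TRT version, assuming a standard dpkg-based Jetson/Ubuntu install:

dpkg -l | grep nvinfer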
Yes, I was using the calibration table for YOLOv3 provided in the same directory (“yolov3-calibration.table.trt5.1”) and set network-mode to 1 in the config file, but running it still produced the same error.
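For reference, the INT8-related lines in config_infer_primary_yoloV3.txt were set like this (standard nvinfer keys; network-mode: 0=FP32, 1=INT8, 2=FP16):

network-mode=1
int8-calib-file=yolov3-calibration.table.trt5.1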
I was using a Jetson Nano and hoping it supports INT8, since FP16 is too slow. Any idea how a Jetson Nano can stream 8 channels at 1080p 30 FPS like advertised here?
If I remember correctly, Nano doesn’t support INT8.
To get high performance on Nano, you may try source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt or source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt, which are based on resnet10 and a smaller input size.
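For example, assuming the default DS 4.0 install location:

deepstream-app -c /opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt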
Regarding YOLOv3: detection is more accurate, but it is a heavier network. To get better performance, you may try changing the input dims (width/height) to a smaller size (they must stay multiples of 32), e.g. in yolov3.cfg:
width=416
height=416
Or use the tiny model: deepstream_app_config_yoloV3_tiny.txt.
Some users also customize and retrain a smaller YOLO network.
To clarify how nvinfer loads the model:
1. It first tries model-engine-file: if the engine can be deserialized, it is used directly, so make sure the engine file exists. If there are custom TRT plugins inside the engine, the custom lib also needs to be registered or preloaded (via custom-lib-path).
2. If step 1 failed, it checks custom-lib-path to see whether a custom model parser exists and builds the engine from it. nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so has the YOLO model-build function inside.
WRT YOLOv3, you always need to keep custom-lib-path, even after model-engine-file has been generated, because there’s also a custom TRT plugin inside the model.
BTW, you also need to set parse-bbox-func-name, since YOLO’s bbox parsing is different.
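Putting those together, the YOLO-specific keys in config_infer_primary_yoloV3.txt look like this (function and lib names as shipped in the DS 4.0 YOLO sample; the engine file name is illustrative):

model-engine-file=model_b1_int8.engine
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV3
engine-create-func-name=NvDsInferYoloCudaEngineGet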