How to load a cached engine in DeepStream to speed up program startup?

I'm running DeepStream's samples on a Jetson Nano, and startup feels very slow because the Caffe model is converted to a TensorRT model every time. I noticed a TRT engine file is written during the run… so, how can I use it?

This is test1's config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1

And this is the console log when running test1:

deepstream-test1-app /usr/share/visionworks/sources/data/pedestrians.h264
Now playing: /usr/share/visionworks/sources/data/pedestrians.h264

Using winsys: x11 
Opening in BLOCKING MODE 
Creating LL OSD context new
0:00:01.154296444  7881     0x378ddf20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.154511303  7881     0x378ddf20 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:01:52.210185315  7881     0x378ddf20 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /usr/local/liu/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
Running...
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Creating LL OSD context new

So, how can I use the /usr/local/liu/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine file to speed up the program at startup?

I'm trying to speed up the startup process just like you, and I still haven't found the answer.
I'm having a hard time understanding this.

Set model-engine-file in the config. Please refer to the NVIDIA DeepStream SDK Developer Guide — DeepStream 6.1.1 Release documentation.
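For example (a sketch, using the engine path printed in your log above — adjust it to wherever the file actually lives), add the model-engine-file key to the [property] section of the test1 config:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
# point nvinfer at the serialized engine so it is deserialized
# instead of being rebuilt from the Caffe model on every start
model-engine-file=/usr/local/liu/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
batch-size=1
network-mode=2
num-detected-classes=4
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

Note the engine filename encodes batch size 1 and FP16 (_b1_fp16), since your platform fell back from INT8 to FP16. As far as I know, if batch-size or network-mode in the config doesn't match the engine, nvinfer will discard it and rebuild from the model files, so network-mode is set to 2 (FP16) here to match.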