DeepStream-Yolo: Can't generate engine file but the app runs successfully

My environment:
deepstream-app version 6.0.0
DeepStreamSDK 6.0.0
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 8.0
cuDNN Version: 8.2
When I run deepstream-app -c deepstream_yolov5n.txt
on a Jetson Nano with yolov5n, the app runs successfully but does not generate the engine file.
Every run therefore spends several minutes rebuilding the engine, because the file never exists.
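A simple check after the app exits, using the engine path that appears in the log below, confirms that nothing was written there:

ls -l /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine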

The details are below:
nvidia@nano:/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo$ deepstream-app -c deepstream_yolov5n.txt

Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine open error
0:00:01.272995576 9418 0x2d867ec0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed
0:00:01.273160476 9418 0x2d867ec0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.273195998 9418 0x2d867ec0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files

Loading pre-trained weights
Loading weights of yolov5n complete
Total weights read: 1881661
Building YOLO network

    Layer                         Input Shape         Output Shape        WeightPtr

(0) conv_silu [3, 640, 640] [16, 320, 320] 1792
(1) conv_silu [16, 320, 320] [32, 160, 160] 6528
(2) conv_silu [32, 160, 160] [16, 160, 160] 7104
(3) route: 1 - [32, 160, 160] -
(4) conv_silu [32, 160, 160] [16, 160, 160] 7680
(5) conv_silu [16, 160, 160] [16, 160, 160] 8000
(6) conv_silu [16, 160, 160] [16, 160, 160] 10368
(7) shortcut_add_linear: 4 [16, 160, 160] [16, 160, 160] -
(8) route: 7, 2 - [32, 160, 160] -
(9) conv_silu [32, 160, 160] [32, 160, 160] 11520
(10) conv_silu [32, 160, 160] [64, 80, 80] 30208
(11) conv_silu [64, 80, 80] [32, 80, 80] 32384
(12) route: 10 - [64, 80, 80] -
(13) conv_silu [64, 80, 80] [32, 80, 80] 34560
(14) conv_silu [32, 80, 80] [32, 80, 80] 35712
(15) conv_silu [32, 80, 80] [32, 80, 80] 45056
(16) shortcut_add_linear: 13 [32, 80, 80] [32, 80, 80] -
(17) conv_silu [32, 80, 80] [32, 80, 80] 46208
(18) conv_silu [32, 80, 80] [32, 80, 80] 55552
(19) shortcut_add_linear: 16 [32, 80, 80] [32, 80, 80] -
(20) route: 19, 11 - [64, 80, 80] -
(21) conv_silu [64, 80, 80] [64, 80, 80] 59904
(22) conv_silu [64, 80, 80] [128, 40, 40] 134144
(23) conv_silu [128, 40, 40] [64, 40, 40] 142592
(24) route: 22 - [128, 40, 40] -
(25) conv_silu [128, 40, 40] [64, 40, 40] 151040
(26) conv_silu [64, 40, 40] [64, 40, 40] 155392
(27) conv_silu [64, 40, 40] [64, 40, 40] 192512
(28) shortcut_add_linear: 25 [64, 40, 40] [64, 40, 40] -
(29) conv_silu [64, 40, 40] [64, 40, 40] 196864
(30) conv_silu [64, 40, 40] [64, 40, 40] 233984
(31) shortcut_add_linear: 28 [64, 40, 40] [64, 40, 40] -
(32) conv_silu [64, 40, 40] [64, 40, 40] 238336
(33) conv_silu [64, 40, 40] [64, 40, 40] 275456
(34) shortcut_add_linear: 31 [64, 40, 40] [64, 40, 40] -
(35) route: 34, 23 - [128, 40, 40] -
(36) conv_silu [128, 40, 40] [128, 40, 40] 292352
(37) conv_silu [128, 40, 40] [256, 20, 20] 588288
(38) conv_silu [256, 20, 20] [128, 20, 20] 621568
(39) route: 37 - [256, 20, 20] -
(40) conv_silu [256, 20, 20] [128, 20, 20] 654848
(41) conv_silu [128, 20, 20] [128, 20, 20] 671744
(42) conv_silu [128, 20, 20] [128, 20, 20] 819712
(43) shortcut_add_linear: 40 [128, 20, 20] [128, 20, 20] -
(44) route: 43, 38 - [256, 20, 20] -
(45) conv_silu [256, 20, 20] [256, 20, 20] 886272
(46) conv_silu [256, 20, 20] [128, 20, 20] 919552
(47) maxpool [128, 20, 20] [128, 20, 20] -
(48) maxpool [128, 20, 20] [128, 20, 20] -
(49) maxpool [128, 20, 20] [128, 20, 20] -
(50) route: 46, 47, 48, 49 - [512, 20, 20] -
(51) conv_silu [512, 20, 20] [256, 20, 20] 1051648
(52) conv_silu [256, 20, 20] [128, 20, 20] 1084928
(53) upsample [128, 20, 20] [128, 40, 40] -
(54) route: 53, 36 - [256, 40, 40] -
(55) conv_silu [256, 40, 40] [64, 40, 40] 1101568
(56) route: 54 - [256, 40, 40] -
(57) conv_silu [256, 40, 40] [64, 40, 40] 1118208
(58) conv_silu [64, 40, 40] [64, 40, 40] 1122560
(59) conv_silu [64, 40, 40] [64, 40, 40] 1159680
(60) route: 59, 55 - [128, 40, 40] -
(61) conv_silu [128, 40, 40] [128, 40, 40] 1176576
(62) conv_silu [128, 40, 40] [64, 40, 40] 1185024
(63) upsample [64, 40, 40] [64, 80, 80] -
(64) route: 63, 21 - [128, 80, 80] -
(65) conv_silu [128, 80, 80] [32, 80, 80] 1189248
(66) route: 64 - [128, 80, 80] -
(67) conv_silu [128, 80, 80] [32, 80, 80] 1193472
(68) conv_silu [32, 80, 80] [32, 80, 80] 1194624
(69) conv_silu [32, 80, 80] [32, 80, 80] 1203968
(70) route: 69, 65 - [64, 80, 80] -
(71) conv_silu [64, 80, 80] [64, 80, 80] 1208320
(72) conv_silu [64, 80, 80] [64, 40, 40] 1245440
(73) route: 72, 62 - [128, 40, 40] -
(74) conv_silu [128, 40, 40] [64, 40, 40] 1253888
(75) route: 73 - [128, 40, 40] -
(76) conv_silu [128, 40, 40] [64, 40, 40] 1262336
(77) conv_silu [64, 40, 40] [64, 40, 40] 1266688
(78) conv_silu [64, 40, 40] [64, 40, 40] 1303808
(79) route: 78, 74 - [128, 40, 40] -
(80) conv_silu [128, 40, 40] [128, 40, 40] 1320704
(81) conv_silu [128, 40, 40] [128, 20, 20] 1468672
(82) route: 81, 52 - [256, 20, 20] -
(83) conv_silu [256, 20, 20] [128, 20, 20] 1501952
(84) route: 82 - [256, 20, 20] -
(85) conv_silu [256, 20, 20] [128, 20, 20] 1535232
(86) conv_silu [128, 20, 20] [128, 20, 20] 1552128
(87) conv_silu [128, 20, 20] [128, 20, 20] 1700096
(88) route: 87, 83 - [256, 20, 20] -
(89) conv_silu [256, 20, 20] [256, 20, 20] 1766656
(90) route: 71 - [64, 80, 80] -
(91) conv_logistic [64, 80, 80] [255, 80, 80] 1783231
(92) yolo [255, 80, 80] - -
(93) route: 80 - [128, 40, 40] -
(94) conv_logistic [128, 40, 40] [255, 40, 40] 1816126
(95) yolo [255, 40, 40] - -
(96) route: 89 - [256, 20, 20] -
(97) conv_logistic [256, 20, 20] [255, 20, 20] 1881661
(98) yolo [255, 20, 20] - -

Output YOLO blob names:
yolo_93
yolo_96
yolo_99

Total number of YOLO layers: 260

Building YOLO network complete
Building the TensorRT Engine

NOTE: letter_box is set in cfg file, make sure to set maintain-aspect-ratio=1 in config_infer file to get better accuracy

WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
Building complete

ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine opened error
0:01:58.038919671 9418 0x2d867ec0 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1942> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/model_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT data 3x640x640
1 OUTPUT kFLOAT num_detections 1
2 OUTPUT kFLOAT detection_boxes 25200x4
3 OUTPUT kFLOAT detection_scores 25200
4 OUTPUT kFLOAT detection_classes 25200

0:01:58.102055146 9418 0x2d867ec0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo/config_deepstream_yolov5n.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:180>: Pipeline running

**PERF: 20.61 (20.59)
**PERF: 20.38 (20.49)
**PERF: 20.48 (20.46)
**PERF: 20.60 (20.49)
**PERF: 20.50 (20.47)
**PERF: 20.53 (20.50)
**PERF: 20.57 (20.51)
**PERF: 20.59 (20.52)
**PERF: 20.49 (20.51)
**PERF: 20.44 (20.52)
**PERF: 20.53 (20.51)
**PERF: 20.54 (20.51)
**PERF: 20.57 (20.52)
**PERF: 20.58 (20.53)
** INFO: <bus_callback:217>: Received EOS. Exiting …

Quitting
App run successful

The config files are below:

config_deepstream_yolov5n.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#custom-network-config=yolov5s.cfg
#model-file=yolov5s.wts
custom-network-config=yolov5n.cfg
model-file=yolov5n.wts
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

deepstream_yolov5n.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
#config-file=config_infer_primary.txt
config-file=config_deepstream_yolov5n.txt

[tests]
file-loop=0

1. Could you try running the sudo deepstream-app -c deepstream_yolov5n.txt CLI?
2. You also need to confirm whether the generated engine name is model_b1_gpu0_fp32.engine.
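For example, something along these lines (the chown step is only an optional alternative in case you prefer not to run the app as root):

cd /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo
# run with write access to the model directory so the engine can be serialized
sudo deepstream-app -c deepstream_yolov5n.txt
# confirm that the expected engine file name was created
ls -l model_b1_gpu0_fp32.engine

# optional alternative: make the directory writable for your user,
# so later runs do not need sudo
sudo chown -R $USER:$USER /opt/nvidia/deepstream/deepstream-6.0/sources/DeepStream-Yolo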

Oh, I didn't even notice that. My mistake.
Thanks for your help.
