I ported the YOLO sample engine to the test4 app, and an error occurred.

Here is my error log:

Creating LL OSD context new
Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107
0:00:08.373808374 29087 0x55edfff88800 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:checkEngineParams(): Requested Max Batch Size is less than engine batch size
0:00:08.378620260 29087 0x55edfff88800 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:08.380138940 29087 0x55edfff88800 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
Yolo type is not defined from config file name:
0:00:08.380180006 29087 0x55edfff88800 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Failed to create network using custom network creation function
0:00:08.380197940 29087 0x55edfff88800 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:08.387968234 29087 0x55edfff88800 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:08.387987129 29087 0x55edfff88800 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Returned, stopping playback
Returned, stopping playback
Deleting pipeline

My config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
batch-size=1
network-mode=1
num-detected-classes=6
interval=0
process-mode=1
model-color-format=0
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1

This is my folder structure:


I used the YOLO example to convert my self-trained weights to an engine, and the engine runs fine in the YOLO example itself:

Creating LL OSD context new
Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107

Runtime commands:
    h: Print this help
    q: Quit
    p: Pause
    r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:189>: Pipeline ready

** INFO: <bus_callback:175>: Pipeline running

Creating LL OSD context new

**PERF: FPS 0 (Avg)
**PERF: 86.11 (86.11)
**PERF: 73.96 (74.19)

Hi,
Can you also specify which version of DeepStream you are using? It looks like DS 4.0.2. If yes, can you move to DS 5.0?

As you can see in the log,

Yolo type is not defined from config file name:

The custom parser fails to recognize which type of YOLO model is being used because you have not set the custom-network-config=yolov3.cfg property. You can try adding that to the nvinfer config file to fix this issue.
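For reference, the addition would go in the [property] section alongside the custom-lib entries (this assumes yolov3.cfg sits next to the nvinfer config file; adjust the path if yours lives elsewhere):

```ini
[property]
# existing custom-lib entries
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
# added: tells the custom lib which darknet cfg (and thus which YOLO variant) to use
custom-network-config=yolov3.cfg
```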

This has been fixed in DS 5.0 as well.

Indeed, I use DS 4.0.2. I added custom-network-config=yolov3.cfg to the config file and copied my yolov3.cfg into the workspace. It did not seem to help; the same error occurred. I will try version 5.0, but considering the environment I have already set up, I hope to find a solution in 4.0.2.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
batch-size=1
network-mode=1
num-detected-classes=6
interval=0
process-mode=1
model-color-format=0
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
custom-network-config=yolov3.cfg

I think the role of yolov3.cfg is to create the network, but if I already have an engine file, do I still need yolov3.cfg to recreate the network?
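To check whether a rebuild would even be triggered, I sketched a small helper that compares the batch-size in the nvinfer config with the _b&lt;N&gt; tag the objectDetector_Yolo sample bakes into the engine file name (e.g. model_b1_gpu0_int8.engine). Note this relies only on that naming convention, not on any DeepStream API, and deepstream-test3 may override batch-size at runtime to match its source count:

```python
# Hypothetical sanity-check helper: compare the nvinfer config's batch-size
# against the batch size encoded in the engine file name. A mismatch is one
# of the conditions that forces nvinfer to rebuild the engine from cfg/weights.
import configparser
import re

def engine_batch_from_name(engine_path):
    """Extract N from a '_b<N>' tag in the engine file name, or None."""
    m = re.search(r"_b(\d+)", engine_path)
    return int(m.group(1)) if m else None

def check_batch_match(config_text):
    """Return (config_batch, engine_batch, matches?)."""
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    prop = cfg["property"]
    config_batch = int(prop.get("batch-size", "1"))
    engine_batch = engine_batch_from_name(prop.get("model-engine-file", ""))
    # If the file name carries no batch tag, we can't tell; assume it matches.
    return config_batch, engine_batch, engine_batch is None or config_batch == engine_batch

sample = """\
[property]
model-engine-file=model_b1_gpu0_int8.engine
batch-size=3
"""
print(check_batch_match(sample))  # (3, 1, False): mismatch, so nvinfer would rebuild
```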

I rebuilt with DS 5.0 and got the same error:

Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:11.131036692 8661 0x56519319fd60 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/model_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_83 33x13x13
2 OUTPUT kFLOAT yolo_95 33x26x26
3 OUTPUT kFLOAT yolo_107 33x52x52

0:00:11.131121438 8661 0x56519319fd60 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1518> [UID = 1]: Backend has maxBatchSize 1 whereas 3 has been requested
0:00:11.131135506 8661 0x56519319fd60 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1689> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/model_b1_gpu0_int8.engine failed to match config params, trying rebuild
0:00:11.135467700 8661 0x56519319fd60 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:934 failed to build network since there is no model file matched.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:872 failed to build network.
0:00:11.136938666 8661 0x56519319fd60 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:11.136957747 8661 0x56519319fd60 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:11.136968627 8661 0x56519319fd60 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings
0:00:11.144256664 8661 0x56519319fd60 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:11.144269640 8661 0x56519319fd60 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Returned, stopping playback
Deleting pipeline
[1] + Done “/usr/bin/gdb” --interpreter=mi --tty=${DbgTerm} 0<"/tmp/Microsoft-MIEngine-In-93l30cu0.r97" 1>"/tmp/Microsoft-MIEngine-Out-0xcf7dtb.gc6"

Can you also post your config file?
Also, please note this log line:

0:00:11.131121438 8661 0x56519319fd60 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1518> [UID = 1]: Backend has maxBatchSize 1 whereas 3 has been requested

The engine file you have generated has batch size 1, whereas you are trying to infer with batch size 3, so nvinfer needs to rebuild the engine, for which it needs both the cfg and the model (weights) file. Can you specify both of them in your nvinfer config file?
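For example, the relevant [property] entries would look like this (file names follow the objectDetector_Yolo sample; I'm assuming your darknet cfg and weights use the same layout, so substitute your own paths):

```ini
[property]
# engine to try first; rebuilt from the files below if it fails to match
model-engine-file=model_b1_gpu0_int8.engine
# both are needed for a rebuild (e.g. when the requested batch size differs)
custom-network-config=yolov3.cfg
model-file=yolov3.weights
batch-size=1
```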

Hey, I ran into the same problem. How did you solve it?