I converted YOLOv7 with the end2end option, which means NMS is included in the model. To run it in DeepStream, I commented out parse-bbox-func-name
and custom-lib-path
in the attached config file (as I understand it, they are used for post-processing, which includes NMS). But I got the following error:
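For reference, this is roughly the part of the nvinfer config I commented out (the function and library names here are illustrative, taken from the YOLOv7 sample; the exact values are in the attached file):

```ini
[property]
# Commented out because the engine already applies NMS (end2end export):
# parse-bbox-func-name=NvDsInferParseCustomYoloV7
# custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```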
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kINT32 num_dets 1
2 OUTPUT kFLOAT det_boxes 100x4
3 OUTPUT kFLOAT det_scores 100
4 OUTPUT kINT32 det_classes 100
0:03:25.403890318 111 0x5586ac3f98a0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/sources/yolo_deepstream/deepstream_end2end/config_infer_primary_yoloV7.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:239>: Pipeline ready
** INFO: <bus_callback:225>: Pipeline running
0:03:26.099503216 111 0x5586ab760300 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:03:26.099567854 111 0x5586ab760300 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)
From the log I can see that the model already outputs the final bboxes, scores, and classes, yet parsing still fails. How can I use an end2end model in DeepStream? Thanks
config_infer_primary_yoloV7.txt (3.6 KB)