YOLOv3 .engine.fp16 layer mismatch in DeepStream inferencing

Complete information applicable to my setup:

Hardware Platform (GPU): Tesla T4
DeepStream Version: Deepstream:6.1-Triton
TAO Toolkit Version: 5.0.0
TensorFlow Version: 1.15.5
TensorRT Version: 8.2.5-1+cuda11.4
NVIDIA GPU Driver Version: 535.183.01
CUDA Version: 12.2

I have an AWS VM with an NVIDIA Tesla T4 GPU.
I trained a yolo_v3 model in the TAO Toolkit Jupyter notebook, which produced the model in .hdf5 format.
I then converted the .hdf5 model to .onnx format.
Next, I built a DeepStream Docker container on the same AWS VM.
Finally, I copied my exported model into the DeepStream Docker container and tried to run the deepstream-app there with the command below:
deepstream-app -c app_config.txt
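For context, the .engine.fp16 file referenced in the logs below was built from the .onnx model with trtexec; a typical FP16 build looks roughly like the sketch below (the file names are from my setup, and the exact flags are an assumption, not the verbatim command):

```shell
# Sketch: build an FP16 TensorRT engine from the exported ONNX model (flags assumed)
trtexec --onnx=yolov3_resnet18_epoch_200_retrain_QAT.onnx \
        --saveEngine=yolov3_resnet18_epoch_200_retrain_QAT_trtexec.engine.fp16 \
        --fp16
```

Note that the engine must be built on the same GPU (the T4 here) that DeepStream will run on, since TensorRT engines are not portable across GPU architectures.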

here is the deepstream app config file :
app_config.txt (3.2 KB)

here is the yolo_v3 config file :
yolov3_config.txt (986 Bytes)

here is the labels file :
labels.txt (220 Bytes)

Now I am getting the following error.
Command :
root@4e5e39ad1545:/opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3# deepstream-app -c app_config.txt

Error :
0:00:02.376014532 858 0x7771ec002380 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3/yolov3_resnet18_epoch_200_retrain_QAT_trtexec.engine.fp16
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT Input 3x384x1248
1 OUTPUT kINT32 BatchedNMS 1
2 OUTPUT kFLOAT BatchedNMS_1 200x4
3 OUTPUT kFLOAT BatchedNMS_2 200
4 OUTPUT kFLOAT BatchedNMS_3 200

0:00:02.395842644 858 0x7771ec002380 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3/yolov3_resnet18_epoch_200_retrain_QAT_trtexec.engine.fp16
0:00:02.453493076 858 0x7771ec002380 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.1/samples/models/yolo_v3/yolov3_config.txt sucessfully

Runtime commands:
    h: Print this help
    q: Quit
    p: Pause
    r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:194>: Pipeline ready

Warning: Color primaries 5 not present and will be treated BT.601
** INFO: <bus_callback:180>: Pipeline running

ERROR: yoloV3 output layer.size: 4 does not match mask.size: 3
0:00:02.617463720 858 0x58613a5ec400 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:726> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)

Please tell me how to resolve this.

Please follow the official GitHub repo (GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream) and retry. Also, the yolo_v3 config file can be found at deepstream_tao_apps/configs/nvinfer/yolov3_tao/pgie_yolov3_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
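Some background on the error above: the default yoloV3 bounding-box parser expects three grid-level output layers (one per anchor mask), whereas a TAO-exported engine has the NMS fused in and instead exposes the four BatchedNMS outputs shown in the engine info, hence "output layer.size: 4 does not match mask.size: 3". The custom parser built by deepstream_tao_apps handles the BatchedNMS outputs. The relevant nvinfer properties look roughly like the fragment below; the library path depends on where deepstream_tao_apps is cloned and built, so treat it as an assumption:

```ini
# Sketch of the relevant pgie config entries (custom-lib-path is an assumption)
[property]
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
```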

Hi,
I tried pulling the repo you shared, but running the make command results in the following error.
I also tried pulling different versions for TAO and DeepStream and still get the same error:

root@4e5e39ad1545:/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps# make
make -C post_processor
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/post_processor’
make[1]: Nothing to be done for ‘all’.
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/post_processor’
make -C apps
make[1]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/apps’
make -C tao_detection
make[2]: Entering directory ‘/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/apps/tao_detection’
g++ -c -o deepstream_det_app.o -I/opt/nvidia/deepstream/deepstream-6.1/sources/includes -I /usr/local/cuda/include `pkg-config --cflags gstreamer-1.0` -std=c++14 deepstream_det_app.c
deepstream_det_app.c: In function ‘int main(int, char**)’:
deepstream_det_app.c:539:5: error: ‘NvDsGieType’ was not declared in this scope; did you mean ‘NvDsUnitType’?
539 | NvDsGieType pgie_type = NVDS_GIE_PLUGIN_INFER;
| ^~~~~~~~~~~
| NvDsUnitType
deepstream_det_app.c:552:53: error: ‘pgie_type’ was not declared in this scope
552 | RETURN_ON_PARSER_ERROR(nvds_parse_gie_type(&pgie_type, argv[1],
| ^~~~~~~~~
deepstream_det_app.c:489:35: note: in definition of macro ‘RETURN_ON_PARSER_ERROR’
489 | if (NVDS_YAML_PARSER_SUCCESS != parse_expr) {
| ^~~~~~~~~~
deepstream_det_app.c:552:32: error: ‘nvds_parse_gie_type’ was not declared in this scope; did you mean ‘nvds_parse_gie’?
552 | RETURN_ON_PARSER_ERROR(nvds_parse_gie_type(&pgie_type, argv[1],
| ^~~~~~~~~~~~~~~~~~~
deepstream_det_app.c:489:35: note: in definition of macro ‘RETURN_ON_PARSER_ERROR’
489 | if (NVDS_YAML_PARSER_SUCCESS != parse_expr) {
| ^~~~~~~~~~
deepstream_det_app.c:554:34: error: ‘pgie_type’ was not declared in this scope
554 | printf("pgie_type:%d\n", pgie_type);
| ^~~~~~~~~
deepstream_det_app.c:675:9: error: ‘pgie_type’ was not declared in this scope
675 | if (pgie_type == NVDS_GIE_PLUGIN_INFER_SERVER) {
| ^~~~~~~~~
deepstream_det_app.c:675:22: error: ‘NVDS_GIE_PLUGIN_INFER_SERVER’ was not declared in this scope
675 | if (pgie_type == NVDS_GIE_PLUGIN_INFER_SERVER) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
make[2]: *** [Makefile:63: deepstream_det_app.o] Error 1
make[2]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/apps/tao_detection’
make[1]: *** [Makefile:24: all] Error 2
make[1]: Leaving directory ‘/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/apps’
make: *** [Makefile:25: all] Error 2

This seems to be a DeepStream build error. For this new error, could you please create a new topic in the DeepStream forum for better help? Thanks.
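One likely cause worth checking first: NvDsGieType and nvds_parse_gie_type appear to have been added to the DeepStream headers after release 6.1, so the master branch of deepstream_tao_apps will not compile against the 6.1 SDK. Checking out the release branch that matches the installed DeepStream version can be sketched as follows (the branch name shown is a hypothetical placeholder; pick the matching one from the listing):

```shell
cd /opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps
# List the available release branches and pick the one matching DeepStream 6.1
git branch -r
git checkout <release-branch-for-ds6.1>   # hypothetical placeholder
make clean && make
```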

Thank you.