YOLOv5 giving wrong output

Setup Information:

• Hardware Platform: GPU
• DeepStream Version: 6.0
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version: 470.103.01

I have modified the deepstream_ssd_parser example from deepstream_python_apps to run YOLOv5, but the model's outputs are wrong. I have verified a few things:

  • The model itself is good: it gives correct predictions when run independently.
  • The model gives correct predictions when it is deployed on a Triton server and inference is run there via a client.

The problem only occurs when running the model via DeepStream + Triton (nvinferserver).

Please help me figure out the issue.

Thanks in advance.

Never mind, I forgot to pre-process the frames before feeding them to the model. After adding pre-processing, the predictions are correct.

@sandeep.yadav.07780
Can you please share the details of the parser for YOLOv5 with DeepStream + Triton?

Sure, what exact details do you want?

Below is the preprocess config we are using. Is there any change required to the pre-processing?


preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "labels.txt"
    other {}
  }
  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so"
  }

Below is the change we made to the ssdparser.py file, in the function nvds_infer_parse_custom_tf_ssd:

num_detection_layer = layer_finder(output_layer_info, "output")
score_layer = layer_finder(output_layer_info, "573")
class_layer = layer_finder(output_layer_info, "625")
box_layer = layer_finder(output_layer_info, "677")
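
For context, layer_finder here comes from the SSD sample's ssd_parser.py; it simply scans the model's output layers by name, roughly like this:

def layer_finder(output_layer_info, name):
    """Return the output layer (NvDsInferLayerInfo) whose
    layerName matches `name`, or None if it is absent."""
    for layer in output_layer_info:
        if layer.layerName == name:
            return layer
    return None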

Is there any other change we need to make?

Yes, the input images should be normalized to the 0-1 range before being given to the model.
Also, YOLOv5 has only an “output” layer, not “score_layer”, “class_layer”, and “box_layer” layers. You can take just the “output” layer and use the post-processing code from the official YOLOv5 repo.
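
For the 0-1 normalization, one option (assuming the model itself does no input scaling) is to do it in the nvinferserver preprocess block by setting scale_factor to 1/255 instead of 1.0. A minimal sketch:

preprocess {
  network_format: IMAGE_FORMAT_RGB
  tensor_order: TENSOR_ORDER_LINEAR
  normalize {
    # 1/255: maps 0-255 pixel values into the 0-1 range
    scale_factor: 0.0039215697906911373
    channel_offsets: [0, 0, 0]
  }
}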

Does the custom_lib also have to be changed?

In my case, it works without it. But you can use libnvdsinfer_custom_impl_Yolo.so to be safe.

Did you comment out the score_layer, class_layer, and box_layer lines above?

Yes

Hi @sandeep.yadav.07780
Can you please share the ssdparser.py file with the changes you made for YOLOv5 in the function nvds_infer_parse_custom_tf_ssd?
It would be really helpful.
And how is the bbox parser generated?

Sure.

deepstream_yolo.py (15.5 KB)
yolo_parser.py (11.4 KB)
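
For anyone who cannot open the attachments, below is a rough, self-contained sketch of the kind of post-processing the official YOLOv5 repo applies to the single “output” tensor. It assumes the usual ONNX export layout of (25200, 85) rows of [cx, cy, w, h, objectness, 80 class scores] in network coordinates; the thresholds are illustrative, and this is a sketch, not the attached yolo_parser.py:

import numpy as np

CONF_THRESHOLD = 0.25  # illustrative value
IOU_THRESHOLD = 0.45   # illustrative value

def nms(boxes, scores, iou_thres):
    """Plain NumPy non-maximum suppression; returns the kept indices."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # Intersection of the top-scoring box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thres]
    return keep

def decode_yolov5(output, conf_thres=CONF_THRESHOLD, iou_thres=IOU_THRESHOLD):
    """Turn an (N, 85) YOLOv5 output tensor into (boxes_xyxy, scores, class_ids)."""
    # Final score per class = objectness * class probability
    scores = output[:, 4:5] * output[:, 5:]
    class_ids = scores.argmax(axis=1)
    confidences = scores[np.arange(len(scores)), class_ids]

    keep = confidences > conf_thres
    boxes = output[keep, :4]
    confidences, class_ids = confidences[keep], class_ids[keep]

    # Convert center format (cx, cy, w, h) to corner format (x1, y1, x2, y2)
    xyxy = np.empty_like(boxes)
    xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2
    xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2
    xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2
    xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2

    kept = nms(xyxy, confidences, iou_thres)
    return xyxy[kept], confidences[kept], class_ids[kept]

Since maintain_aspect_ratio is 0 in the configs in this thread, mapping these boxes back to the frame is just a linear scale from the network input size to the frame size, with no letterbox padding to undo.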

@sandeep.yadav.07780
Thanks, Sandeep, for the parser; I will look into it.
There won't be any custom_lib in the pbtxt, right?
And what is the model config for YOLOv5, i.e. dstest_yolov5.pbtxt?

Thanks a lot, Sandeep! :)

Here it is:

infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "Ensamble-yolov5n-onnx"
      version: -1
      model_repo {
        root: "./"
        log_level: 2
        tf_gpu_memory_fraction: 0.4
        tf_disable_soft_placement: 0
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "yolov5n-onnx/labels.txt"
    other {}
  }

  extra {
    copy_input_to_host_buffers: false
  }

  custom_lib {
    path: "libnvdsinfer_custom_impl_Yolo.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}
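
One detail worth noting in this config: output_tensor_meta: true is what makes the raw output tensors reachable from Python in the first place. A rough sketch of how a pad probe walks the batch metadata to reach them, following the structure of the deepstream_python_apps SSD sample (the names here come from that sample, not from the attached deepstream_yolo.py):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    # Walk each frame in the batch, then each frame's user metadata,
    # looking for the raw tensor output attached by nvinferserver.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # One NvDsInferLayerInfo per output layer -- this is the
                # list that layer_finder() searches by layerName.
                layers_info = [pyds.get_nvds_LayerInfo(tensor_meta, i)
                               for i in range(tensor_meta.num_output_layers)]
                # ...hand layers_info to the custom parser here...
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK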

Thanks @sandeep.yadav.07780
