Prediction results are missing at the screen edges with TAO YOLOv4-tiny

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : Jetson AGX Xavier
• DeepStream Version : 5.1
• JetPack Version (valid for Jetson only) : 4.6 (CUDA and TensorRT versions match JetPack 4.5.1)
• TensorRT Version : 7.1.3
• Issue Type( questions, new requirements, bugs) : questions

I tested detection through DeepStream with a TAO YOLOv4-tiny model, but detections are dropped at the edges of the screen.

  • DetectNet :
    • tao / deepstream-app : working
  • YOLO :
    • tao inference / nvinfer - custom_parser : working
    • deepstream-app result : detection is omitted at the edge of the screen.


These are the config files I am using.
pgie_config_yolo_v4.txt (766 Bytes)
source.txt (3.1 KB)

Why is this problem happening and how can it be solved?

thanks.

Hi,

First, it’s recommended to upgrade your software to the latest JetPack 4.6.1 and DeepStream 6.0.1.

There is a configuration option that controls the ROI output offset.
The default value is 200, meaning a bounding box must be at least 200 pixels away from the top/bottom of the frame to be reported.

Would you mind updating the following two configuration options to see if it works?
https://docs.nvidia.com/metropolis/deepstream/6.0/dev-guide/text/DS_plugin_gst-nvinfer.html#id3

[class-attrs-all]
pre-cluster-threshold=0.5
roi-top-offset=0
roi-bottom-offset=0

Thanks.

Thank you for your reply!

I will check and reply again.

I see the same behaviour with DS 6.0.1 and the ROI properties set to 0 px. Detections are still omitted.

Hi, labar90

Have you applied the flag shared above?
If not, would you mind giving it a try?

Thanks.

I tested it by 1) upgrading the JetPack and DeepStream versions and 2) adding the roi-{top, bottom}-offset options. The problem was not solved.

The problem has been resolved by applying the answers above and further modifying the custom parser!

deepstream_tao_apps/post_processor/nvdsinfer_custombboxparser_tao.cpp
 (NvDsInferParseCustomBatchedNMSTLT)

[line 168]
object.height = CLIP(p_bboxes[4*i+3] * networkInfo.height, 0, networkInfo.height - 1) - object.top;
=>
object.height = CLIP(p_bboxes[4*i+3] * networkInfo.height, 0, networkInfo.height - 4) - object.top;

----------
upper clamp of networkInfo.height - 1 to networkInfo.height - 3 : omitted
upper clamp of networkInfo.height - 4 or lower : not omitted

Thanks!

Thanks for sharing this.
Good to know you have fixed it now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.