Performance of deep learning + stereo correspondence

Hi,
I am working with images of size 2560x1440x3.
For YOLOv4 I assume the input gets resized, and the execution time is 200 ms; I am using the AlexeyAB YOLOv4, but with C++ code. For stereo correspondence with the following parameters:

min_disparity = 0
max_disparity = 64
P1 = 8
P2 = 109
sad = 5
bt_clip_value = 31
max_diff = 32000
uniqueness_ratio = 0
ct_win_size = 0
hc_win_size = 1
scanlines_mask = 255
flags = 2

the computation takes 250 ms, so using both I can work at 500 ms per frame.
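
For context, here is a minimal sketch of how roughly equivalent parameters could be set up with OpenCV's cv::StereoSGBM. This is an illustration only, since the thread does not state which stereo library is used, and some of the listed parameters (ct_win_size, hc_win_size, scanlines_mask, flags) have no direct OpenCV counterpart; the file names are placeholders.

    // Hypothetical mapping of the posted parameters onto OpenCV's StereoSGBM
    // (the original library is not stated in the thread).
    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main()
    {
        cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);   // placeholder file names
        cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

        auto sgbm = cv::StereoSGBM::create(
            /*minDisparity    =*/ 0,     // min_disparity = 0
            /*numDisparities  =*/ 64,    // max_disparity = 64
            /*blockSize       =*/ 5,     // sad = 5
            /*P1              =*/ 8,     // P1 = 8
            /*P2              =*/ 109,   // P2 = 109
            /*disp12MaxDiff   =*/ -1,    // max_diff has no exact equivalent; non-positive disables the check
            /*preFilterCap    =*/ 31,    // bt_clip_value = 31
            /*uniquenessRatio =*/ 0);    // uniqueness_ratio = 0

        cv::Mat disparity;
        sgbm->compute(left, right, disparity);   // 16-bit fixed-point disparity (4 fractional bits)
        return 0;
    }
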
I wanted to know if this is normal, or if there is a way to decrease the time. I am already using the 15W 6-core mode and jetson_clocks.
Thanks.

Hi,

Sorry that we don’t have a benchmark result for your use case.

But we do have some numbers for the YOLOv4 pipeline shared in this repository:
it can reach 57.74 fps on a Xavier board (network size 512x320x3).

In case you don’t know, you can maximize the Xavier NX performance with the commands below:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
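
If you want to double-check which power mode is currently active, you can query it with nvpmodel:

$ sudo nvpmodel -q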

Thanks.

I don’t run YOLOv4 with DeepStream or TensorRT;
I use the C++ code from here: https://github.com/xyl3902596/Yolov4detect. How do I change the input size of the network?
Thanks.

Hi,

Sorry that we cannot open the source you shared above.
Could you check the link for us?

For a darknet-based model, you can change the network size by updating the .cfg file directly.
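
For example, the network size is set in the [net] section at the top of the .cfg file (the values below are just an illustration; for YOLOv4 both width and height must be multiples of 32):

    [net]
    width=512
    height=320
    channels=3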

Thanks.