With that size I would have thought it would be faster :D
I am using my own dataset (50k annotated images total, about 5h per epoch on a Titan RTX).
Thanks. Keep in mind that when I went from FP32 to FP16, the inference speed did not change at all. It acted like it was capped.
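For what it's worth, here is a minimal timing sketch, assuming a PyTorch model on a CUDA GPU (the resnet50 backbone and the 800x800 input size are placeholders, not the actual network), to check whether switching to half precision actually changes measured latency:

```python
import time
import torch
import torchvision

# Placeholder model; substitute the actual network being benchmarked.
model = torchvision.models.resnet50().cuda().eval()
dummy = torch.randn(1, 3, 800, 800, device="cuda")

def bench(m, x, iters=100):
    # Warm up, then time GPU inference with explicit synchronization.
    with torch.no_grad():
        for _ in range(10):
            m(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            m(x)
        torch.cuda.synchronize()
    return (time.time() - start) / iters * 1000  # ms per forward pass

print(f"FP32: {bench(model, dummy):.1f} ms")
print(f"FP16: {bench(model.half(), dummy.half()):.1f} ms")
```

The `torch.cuda.synchronize()` calls matter here: CUDA kernel launches are asynchronous, so timings taken without synchronizing can look identical between FP32 and FP16 even when the kernels themselves differ.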