Hi, all.
I am using TensorRT to accelerate model inference, but I found that different GPU models support different precision modes.
For example, the GTX 1080 Ti only supports FP32 and INT8, not FP16.
What about other GPU models, such as the P100, P4, V100, and so on?
Where can I find the complete official information about this?
Also, which GPUs support DLA? (All I know is that the GTX 1080 Ti does not.)
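For context on how one might check this programmatically: with a GPU present, TensorRT's Python API can report support at runtime (e.g. `tensorrt.Builder.platform_has_fast_fp16`, `platform_has_fast_int8`, and `num_DLA_cores`). Without hardware, support can be roughly inferred from the CUDA compute capability. Below is a minimal sketch covering only the GPUs named in the question; the table and the threshold rules are my own illustrative assumptions and should be verified against NVIDIA's official support matrix. (DLA is a separate fixed-function engine on Jetson/DRIVE SoCs such as Xavier, not present on any of these discrete GPUs.)

```python
# Sketch: approximate per-GPU precision support from CUDA compute capability.
# The table and rules are illustrative assumptions, NOT an official matrix;
# verify against NVIDIA's TensorRT support documentation.

# CUDA compute capability (major, minor) per GPU -- publicly documented values.
COMPUTE_CAPABILITY = {
    "GTX 1080 Ti": (6, 1),  # Pascal GP102
    "P100": (6, 0),         # Pascal GP100
    "P4": (6, 1),           # Pascal GP104
    "V100": (7, 0),         # Volta GV100
}

def precision_support(gpu: str) -> dict:
    """Rough precision-support heuristic for the GPUs listed above."""
    major, minor = COMPUTE_CAPABILITY[gpu]
    cc = major * 10 + minor
    return {
        "fp32": True,                          # every CUDA GPU runs FP32
        # Fast FP16 needs sm_60 (GP100) or sm_70+ (Volta tensor cores);
        # sm_61 parts (GTX 1080 Ti, P4) execute FP16 at a tiny fraction
        # of FP32 rate, so TensorRT treats them as not supporting FP16.
        "fast_fp16": cc == 60 or cc >= 70,
        # Fast INT8 needs the DP4A instruction, introduced in sm_61
        # and kept in sm_70+; sm_60 (P100) lacks it.
        "fast_int8": cc >= 61,
        # DLA cores exist only on Jetson/DRIVE SoCs, never on these GPUs.
        "dla": False,
    }

for gpu in COMPUTE_CAPABILITY:
    print(gpu, precision_support(gpu))
```

This reproduces the observation in the question (GTX 1080 Ti: INT8 yes, FP16 no) and suggests the P100 is the opposite case (fast FP16, no fast INT8), while the V100 supports both.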