| Topic | Replies | Views | Activity |
|---|---|---|---|
| Same inference speed with Resnet50 for int8 and fp16 | 4 | 708 | October 18, 2021 |
| Int8 is not faster than fp16 on xavier | 5 | 780 | October 18, 2021 |
| Same inference speed for INT8 and FP16 | 10 | 5934 | October 12, 2021 |
| After converting ssdMobilnet from the examples, the model is slower | 4 | 510 | October 18, 2021 |
| Human pose detection model (MoveNet) TensorRT conversion on NVIDIA Jetson | 7 | 2723 | June 16, 2022 |
| Tensorrt can not speed up well | 7 | 1679 | June 29, 2022 |
| Inference Time is not stable | 10 | 1789 | January 3, 2019 |
| Lower performance with TRT than plain TF? | 14 | 2030 | October 18, 2021 |
| How can we know we have convert the onnx to int8trt rather than Float32? | 23 | 1921 | June 14, 2021 |
| Low Compute utilization of converted TensorFlow model during inference | 19 | 1749 | October 18, 2021 |