Duplicated with: Why inference in jetson nano with fp16 is slower than fp32 (Jetson & Embedded Systems / Jetson Nano)