Also, the following may help you:
Related topics

| Topic | Replies | Views | Activity |
|---|---|---|---|
| Should pruning a model prior to converting it to TensorRT make inference faster? | 12 | 2824 | October 18, 2021 |
| TensorRT with pruned model | 4 | 821 | April 20, 2022 |
| Does weight pruning help improve the inference speed of pruned models on TX2? | 1 | 507 | August 2, 2019 |
| Channel pruning on TensorRT does not get speed up | 2 | 615 | June 29, 2021 |
| Techniques to Improve TensorRT Model Inference Speed | 0 | 16 | April 30, 2025 |
| EfficientNetB5 on jetson nano? | 8 | 1256 | December 7, 2021 |
| Inference time of tensorrt 6.3 is slower than tensorrt 6.0 | 7 | 916 | October 12, 2021 |
| Speed up or measure progress of the network profiling/building phase | 3 | 483 | May 24, 2022 |
| Inference time increases rapidly when set a high resolution input image | 1 | 805 | September 13, 2023 |
| List of all methods of getting accelerated computing on Jetson Xavier | 5 | 490 | October 18, 2021 |