Hi, after converting the SegFormer fan_hybrid_tiny.pth classification checkpoint to ONNX and running a test inference, I got very low confidence scores (less than 1%). I suspect the export may not have used the exact model architecture, so I want to rebuild the model from its original architecture code before converting the .pth file to ONNX, but I couldn't find that code. Where can I find the exact architecture definition, and do you have any suggestions for improving the accuracy?
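For what it's worth, near-zero confidence after an ONNX export is often caused by a preprocessing mismatch rather than by the export itself. Below is a minimal sketch of the input pipeline, assuming fan_hybrid_tiny was trained with the standard ImageNet normalization used by timm classification models (the mean/std values and the 224x224 input size are assumptions; check the training config for the real values):

```python
import numpy as np

# Assumed ImageNet normalization constants (verify against the training
# config for fan_hybrid_tiny; these are the common timm defaults).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 RGB image into the NCHW float32 tensor
    most exported ONNX classification models expect."""
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD          # channel-wise normalize
    x = np.transpose(x, (2, 0, 1))                  # HWC -> CHW
    return x[np.newaxis, ...]                       # add batch dim -> NCHW

# Example with a dummy 224x224 RGB image (assumed input resolution):
dummy = np.full((224, 224, 3), 128, dtype=np.uint8)
blob = preprocess(dummy)
print(blob.shape, blob.dtype)  # (1, 3, 224, 224) float32
```

If the PyTorch model and the ONNX model are fed the same tensor produced this way and still disagree, the export itself is suspect; if the ONNX model only fails when fed raw uint8 or unnormalized data, the preprocessing is the likely culprit.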