Hardware Platform: DRIVE AGX Pegasus
I found that the standalone TensorRT version of the MNIST model (the "hello world" model of machine learning) runs almost 3x faster than the TF-TRT version. From reading some posts, it looks like standalone TensorRT generally performs better than TF-TRT.
- What is the reason behind that? Does this hold true in all cases?
- If so, is it possible to convert every TF-TRT model to standalone TensorRT by writing plugins (via the plugin API) for unsupported layers?