I have a TensorFlow model, frozen.pb, that contains some operations which are not supported by TensorRT. I can run it with TF-TRT (Python) on a Xavier with TensorRT 5.0. I have some problems:
Is there a C++ version of TF-TRT? I only know that I can run the model in UFF format with C++, but then I need to write plugins for the unsupported operations. Is there a C++ TF-TRT API that would let me run frozen.pb optimized with TensorRT so that I do not need to write plugins?
I also load two models, frozen.pb and xx.caffemodel, both of which have operations/layers unsupported by TensorRT. Is there a way to run the two models in parallel?
Maybe frozen.pb should run with TF-TRT (Python), and xx.caffemodel should run with a plugin (Python), if a Python plugin API exists. Is this feasible?
- If I can do that, what should I do to make the two models run in parallel?
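To make the question concrete, here is a minimal Python threading sketch of what I mean by "run in parallel". The functions `run_frozen_pb` and `run_caffemodel` are hypothetical stand-ins for the real inference calls (a TF-TRT `session.run()` on frozen.pb, and a TensorRT engine execution for xx.caffemodel); this only shows the host-side orchestration, not actual GPU work:

```python
import threading

# Hypothetical stand-ins for the real inference calls.
# In practice, run_frozen_pb would wrap a TF-TRT session.run() on frozen.pb,
# and run_caffemodel would wrap execution of a TensorRT engine built from
# xx.caffemodel (plus its plugin for the unsupported layer).
def run_frozen_pb(batch):
    return [x * 2 for x in batch]  # placeholder computation

def run_caffemodel(batch):
    return [x + 1 for x in batch]  # placeholder computation

def worker(infer_fn, batch, results, key):
    # Each model runs in its own Python thread. Note that real GPU
    # inference may still serialize on the device unless separate CUDA
    # streams (or separate processes) are used.
    results[key] = infer_fn(batch)

def run_parallel(batch):
    results = {}
    threads = [
        threading.Thread(target=worker, args=(run_frozen_pb, batch, results, "pb")),
        threading.Thread(target=worker, args=(run_caffemodel, batch, results, "caffe")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_parallel([1, 2, 3]))
```

Is this thread-based approach the right way to drive the two models concurrently on the Xavier, or should each model live in its own process?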