Hello,
I’m wondering whether it is possible to run the TensorRT Inference Server on a Jetson TX2. Since the server ships with all the frameworks included out of the box, it should eliminate the need to build TensorFlow C++ from source, which is pretty cumbersome on the TX2.
I’m interested in running custom object detection models that I couldn’t convert to the .uff/.plan format for pure TensorRT, and C++ TensorFlow doesn’t seem to be supported on the TX2. Then I came across this sentence in Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation: “It is possible to execute your TF-TRT accelerated model using TensorRT’s C++ API or through the TensorRT Inference Server, without needing TensorFlow at all.” That sounds like a possible solution for my case.
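For context, the kind of TF-TRT conversion I have in mind looks roughly like the sketch below (TF 1.x contrib API; the file paths and output node names are just placeholders for my detection model):

```python
# Rough sketch of the TF-TRT conversion I want to serve afterwards.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen inference graph exported from the detection model.
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# TF-TRT replaces supported subgraphs with TensorRT engines; the ops it
# cannot convert stay as TensorFlow nodes, which is why I'd like to run
# the result through the Inference Server instead of building TF C++.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['detection_boxes', 'detection_scores', 'detection_classes'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 28,
    precision_mode='FP16')

with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```

The question is whether the Inference Server itself can run on the TX2 to serve a graph like this.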
Thank you very much.