I would like to use TensorFlow Serving inside a Docker container on the Xavier NX. (The NX is intended to serve TF models to other devices via REST or gRPC.)
Since the Jetson architecture does not seem to be supported yet, is there a workaround, or will there be official support in the future?
Or is there another recommended way to set up the NX as a model server (maybe via Triton Inference Server)?
Thanks in advance!
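(For context on the setup I'm after: whichever server ends up running on the NX, clients would talk to it over its REST API. A minimal sketch of what a TensorFlow Serving REST predict call would look like, assuming a hypothetical host name `xavier-nx.local` and model name `my_model` — port 8501 is TF Serving's default REST port:)

```python
import json

# Hypothetical names for illustration only.
HOST = "xavier-nx.local"   # address of the Jetson acting as model server (assumption)
MODEL = "my_model"         # model name as registered with TF Serving (assumption)

# TensorFlow Serving's REST predict endpoint follows this URL scheme.
url = f"http://{HOST}:8501/v1/models/{MODEL}:predict"

# The request body carries the input tensors under the "instances" key.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]})

# With the `requests` package installed, the actual call would be:
#   response = requests.post(url, data=payload)
#   predictions = response.json()["predictions"]
print(url)
print(payload)
```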
Sorry, we don't officially provide TensorFlow Serving for Jetson yet.
However, you can build it from source on your own.
Please find the detailed steps here: