Since our models heavily utilize unsupported TF layers, converting our TF model to UFF does not seem feasible. Instead, we were thinking of getting TensorFlow Serving working on the Jetson, to act as a mini server for model inference.
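To illustrate the "mini server" idea: once a model server is running, clients on (or off) the Jetson can query it over TensorFlow Serving's REST API. A minimal sketch, assuming a model name of `my_model` and a 3-element float input (both placeholders, substitute your own):

```python
import json

# TF Serving's REST predict endpoint has the form
#   http://<host>:8501/v1/models/<model_name>:predict
# and expects a JSON body containing an "instances" list.
def build_predict_request(instances):
    """Serialize a batch of inputs into a TF Serving predict request body."""
    return json.dumps({"instances": instances})

body = build_predict_request([[1.0, 2.0, 3.0]])

# To actually send it (requires the `requests` package and a running server):
# import requests
# resp = requests.post("http://jetson:8501/v1/models/my_model:predict", data=body)
# print(resp.json()["predictions"])
```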
Has anyone done this yet? I’ve seen examples of installing TensorFlow on the Jetson so I assumed it might be possible to install TensorFlow Serving as well.
However, I run into issues building TF Serving with Bazel, and have exhausted my ability to narrow down the problem.
So far I have:
Installed all prerequisites
Installed Bazel
Cloned TF Serving and attempted to build it from source. I run into an issue similar to the memory issues I've seen around the forums/GitHub pages, and have tried to limit the resources used during the build, but nothing works.
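For reference, "limiting the resources used during the build" usually means something like the following Bazel invocation. This is a sketch, not the exact command used here; the flag syntax (and whether `--local_resources` takes the combined `RAM,CPU,IO` form) depends on your Bazel version:

```shell
# Cap parallel jobs and declared local resources (RAM in MB, CPU cores, I/O)
# so the linker is less likely to be killed for exhausting memory on a Jetson.
bazel build \
    --jobs=2 \
    --local_resources=2048,2.0,1.0 \
    --verbose_failures \
    tensorflow_serving/model_servers:tensorflow_model_server
```

Adding swap on the Jetson is another common workaround when the link step itself runs out of memory.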
The error is:
Linking of rule '//tensorflow_serving/model_servers:tensorflow_model_server' failed (Exit 1).
bazel-out/local-opt/bin/external/aws/_objs/aws/external/aws/aws-cpp-sdk-core/source/client/ClientConfiguration.o:ClientConfiguration.cpp:function Aws::Client::ComputeUserAgentString(): error: undefined reference to 'Aws::OSVersionInfo::ComputeOSVersionString[abi:cxx11]()'
collect2: error: ld returned 1 exit status
Does anyone have experience attempting / successfully installing TensorFlow Serving on a Jetson?
I ran into the same problem when trying to install TensorFlow Serving on a TX1.
I've been trying to install tf-serving on the TX1 for over a week now, and have solved many of the problems I hit, but I'm stuck here.
Hi, I also want to get TensorFlow Serving running on the Jetson TX2.
Did you manage to get it working?
I'm having a problem installing Bazel; I'm getting the error:
Uncompressing…/home/nvidia/bin/bazel: line 88: /home/nvidia/.bazel/bin/bazel-real: cannot execute binary file: Exec format error.
You’re probably using a desktop PC architecture (x86_64) file and not one compiled for this architecture (arm64/aarch64/ARMv8-a). What do you see from:
file /home/nvidia/.bazel/bin/bazel-real
A typical cause is copying files from a PC to the Jetson without rebuilding for the Jetson's different architecture.
If the code is released in source form, you can probably recompile it on the Jetson (some programs have architecture-dependent code, but most don't).
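A sketch of what that check looks like in practice (the exact `file` output strings vary slightly by distro and binary):

```shell
# Compare the machine's architecture against the binary's target architecture.
uname -m     # on a Jetson this prints: aarch64
file /home/nvidia/.bazel/bin/bazel-real
# A Jetson-native build reports something like:
#   ELF 64-bit LSB executable, ARM aarch64, ...
# whereas an x86_64 desktop build reports:
#   ELF 64-bit LSB executable, x86-64, ...
# A mismatch produces exactly the "Exec format error" seen above.
```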
Hi ericzli, could you please share the detailed steps of your successful installation? I'm trying to install TensorFlow Serving with GPU support on a Jetson Nano. Thanks.
In case you don't want to build the images yourself, they are now published for the Xavier and Nano on DockerHub - see Docker Hub and Docker Hub - the Xavier variant should also run on the TX2.
To allow access to the GPU from inside the Docker container, you need to mount the following devices:
/dev/nvhost-ctrl
/dev/nvhost-ctrl-gpu
/dev/nvhost-prof-gpu
/dev/nvmap
/dev/nvhost-gpu
/dev/nvhost-as-gpu
With docker run this can be easily achieved like so: docker run --device=/dev/nvhost-ctrl --device=/dev/…
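Spelled out in full with the six devices listed above (the image name and published port are placeholders, substitute your own):

```shell
# Expose the Jetson's GPU device nodes to the container so TF Serving
# inside it can use CUDA. Image name and port mapping are assumptions.
docker run --rm -p 8501:8501 \
    --device=/dev/nvhost-ctrl \
    --device=/dev/nvhost-ctrl-gpu \
    --device=/dev/nvhost-prof-gpu \
    --device=/dev/nvmap \
    --device=/dev/nvhost-gpu \
    --device=/dev/nvhost-as-gpu \
    my-tf-serving-image
```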
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
ERROR: error loading package '': Encountered error while reading extension file 'third_party/toolchains/preconfig/generate/archives.bzl': no such package '@org_tensorflow//third_party/toolchains/preconfig/generate': type 'repository_ctx' has no method patch()
ERROR: error loading package '': Encountered error while reading extension file 'third_party/toolchains/preconfig/generate/archives.bzl': no such package '@org_tensorflow//third_party/toolchains/preconfig/generate': type 'repository_ctx' has no method patch()
INFO: Elapsed time: 33.556s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
The command '/bin/sh -c bazel build --color=yes --curses=yes --jobs="${JOBS}" --verbose_failures --output_filter=DONT_MATCH_ANYTHING --config=cuda --config=nativeopt --config=${JETSON_MODEL} --copt="-fPIC" tensorflow_serving/model_servers:tensorflow_model_server' returned a non-zero code: 1
Any idea what I should be paying attention to?