Is there any method to do inference with the TensorFlow C/C++ API on Jetson TX2? I installed TensorFlow on the TX2 following the method in this post: https://devtalk.nvidia.com/default/topic/1038957/jetson-tx2/tensorflow-for-jetson-tx2-/, but no c_api.h file can be found.
Hi, yes. I have the same concern. Is there any way to run custom TensorFlow models on the TX2? The layers in my model aren't yet supported by TensorRT, so I can't use its C++ API either. How do I run near-real-time inference on the TX2 without TensorRT? Is there any official, or even unofficial, way to do inference with the TensorFlow C/C++ API on Jetson TX2?
Our official TensorFlow package is built with the Python interface only.
Some users want to build the TensorFlow C++ library on Jetson but have run into AWS issues:
AFAIK, there are no TensorFlow C++ libraries available on Jetson at the moment.
The recommended way is to run inference on your model with TensorRT and implement the unsupported layers with our plugin API.
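For reference, the TensorRT route for the supported layers would look roughly like the sketch below. The model file name and the tensor names (`frozen_graph.pb`, `input`, `scores`) are placeholders, and it assumes TensorRT's `convert-to-uff` tool and the `trtexec` sample binary are installed from the TensorRT package; unsupported layers would then be implemented in C++ against the plugin interface.

```shell
# Convert a frozen TensorFlow graph to UFF.
# (convert-to-uff ships with the TensorRT Python package;
#  file and tensor names here are hypothetical.)
convert-to-uff frozen_graph.pb -o model.uff

# Build a TensorRT engine from the UFF model and time inference.
# --uffInput gives the input tensor name and its CHW dimensions.
trtexec --uff=model.uff --uffInput=input,3,224,224 --output=scores
```

Exact flags vary between TensorRT releases, so check `trtexec --help` on your JetPack version.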
So when will there be support for C++ libraries on the TX2?
Any updates on this? I'm trying to compile TensorFlow 1.13.1 from source on the TX2 and am running into issues with Eigen:
Eigen/src/Core/arch/GPU/PacketMath.h(152): error: calling a __device__ function ("__int_as_float") from __host__ __device__ function("eq_mask") is not allowed
Eigen/src/Core/arch/GPU/PacketMath.h(152): error: calling a __device__ function ("__longlong_as_double") from __host__ __device__ function("eq_mask") is not allowed
output 'tensorflow/core/kernels/_objs/inplace_ops_gpu/inplace_ops_functor_gpu.cu.pic.o' was not created
I didn't find a solution for developing with the TensorFlow C/C++ APIs on Jetson TX2. I've switched to MXNet on the TX2 instead, and it has worked well so far.