I’m trying to run a trained TensorFlow model with TensorRT on my TX1.
I flashed and installed my Jetson TX1 from my host system with the latest JetPack 3.2.
I trained my TensorFlow model on the host and created a UFF file, which I now want to run for inference with TensorRT on the TX1.
To do that, I’m trying to load the UFF file on the TX1 through the C++ API to build a TensorRT engine directly on the device. According to the tutorials, loading and converting the UFF file is done with a parser, as follows:
...
#include "NvUffParser.h"
#include "NvUtils.h"

using namespace nvinfer1;
using namespace nvuffparser;

int main(int argc, char** argv)
{
    auto parser = createUffParser();
    ...
}
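For reference, the fuller flow I’m trying to reproduce on the TX1 is roughly the sampleUffMNIST pattern from the host-side TensorRT 3 samples. This is only a sketch; the input/output tensor names and dimensions below are placeholders for my own model:

```cpp
#include <cstdio>
#include "NvInfer.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            printf("%s\n", msg);
    }
} gLogger;

// Parse a UFF file and build a TensorRT engine from it.
// Tensor names/dims ("input", "output", 1x28x28) are placeholders.
ICudaEngine* createEngineFromUff(const char* uffFile)
{
    IUffParser* parser = createUffParser();
    parser->registerInput("input", DimsCHW(1, 28, 28));
    parser->registerOutput("output");

    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    if (!parser->parse(uffFile, *network, DataType::kFLOAT))
        return nullptr;

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);  // 16 MB scratch space
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    builder->destroy();
    parser->destroy();
    return engine;
}
```

This compiles and runs fine on my host, which is why I expected the same headers to be present on the Jetson.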
The problem I’m encountering is that there is no NvUffParser.h on my Jetson, so the build fails. On my host system (Ubuntu 16.04), NvUffParser.h does exist after the TensorRT installation (located in installationfolder/include).
On the Jetson there are only NvUtils.h, NvInfer.h (in /usr/include/aarch64-linux-gnu/), and so on. Even the sample “sampleUffMNIST” is missing on the Jetson, although it is present in the samples folder on my host.
Is there something wrong with my Jetson installation?
If not, how do I correctly create a TensorRT engine from a UFF file on my Jetson TX1 with C++?
Thanks in advance.