I have a Jetson TX2 on which I need to run a model (to detect objects in images).
I'm given a model, which has been converted to UFF.
I use bin2c.py to produce rough C code that I include in my sampleUffMNIST.cpp (a heavily patched sample file).
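For reference, a bin2c-style converter boils down to reading the .uff file as bytes and emitting a C array that can be compiled into the binary. This is a hypothetical minimal sketch, not the actual bin2c.py shipped with the TensorRT samples:

```python
# Hypothetical minimal bin2c: turn a byte string into a C array definition.
# The real bin2c.py from the TensorRT samples may format things differently.
def bin_to_c(data: bytes, name: str = "uff_model") -> str:
    lines = [f"const unsigned char {name}[] = {{"]
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"    {chunk},")
    lines.append("};")
    lines.append(f"const unsigned int {name}_len = {len(data)};")
    return "\n".join(lines)

# Tiny demo on three bytes instead of a whole model file.
print(bin_to_c(b"\x00\x01\x02"))
```

The emitted array and its length constant can then be referenced directly from the C++ side instead of loading the .uff file at runtime.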
I use the sample build setup to produce the binary from that sampleUffMNIST.cpp (right on the Jetson).
I run that binary and it waits on an AF_UNIX socket...
I run feed.py, which grabs the images and feeds them to sampleUffMNIST.cpp over that local socket;
feed.py then fetches the output from sampleUffMNIST.cpp (expected to be a matrix of "probabilities" that each pixel belongs to an object).
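The feed.py side of that exchange can be sketched like this. The framing here (a little-endian length prefix before each payload) is an assumption for illustration; the real protocol depends on what the patched sample expects. The demo uses a socketpair standing in for the Unix-domain connection to the C++ binary:

```python
import socket
import struct

# Hypothetical framing: each message is length-prefixed so the reader knows
# exactly how many bytes to expect for the image in and the matrix out.
def send_msg(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack("<I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> bytes:
    header = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack("<I", header)
    return sock.recv(length, socket.MSG_WAITALL)

# Demo: a socketpair stands in for connecting to the engine's AF_UNIX socket.
feeder, engine = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
send_msg(feeder, b"image bytes go here")
print(recv_msg(engine))
```

In the real setup, feed.py would instead `connect()` to the path the C++ binary is listening on, send the image, then `recv_msg()` the probability matrix back.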
The matrix is then applied to the source image, and we get the image with everything masked out except for the objects found.
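That masking step amounts to zeroing every pixel whose "object" probability falls below some threshold. A pure-Python sketch on nested lists (the real feed.py presumably works on full-size images, e.g. with NumPy/PIL, and the 0.5 threshold is an assumption):

```python
# Hypothetical masking: keep a pixel only if its probability of belonging
# to an object exceeds the threshold, otherwise black it out.
def apply_mask(image, probs, threshold=0.5):
    return [
        [pix if p > threshold else 0 for pix, p in zip(row, prow)]
        for row, prow in zip(image, probs)
    ]

image = [[10, 20], [30, 40]]          # toy 2x2 grayscale image
probs = [[0.9, 0.1], [0.2, 0.8]]      # per-pixel object probabilities
print(apply_mask(image, probs))        # [[10, 0], [0, 40]]
```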
I have a mockup in Python3 (essentially the same original sampleUffMNIST.cpp logic, rewritten in Python) on a PC with a GPU, and it works fine!
But there is no TensorRT Python binding on the Jetson.
Hence, I have to code in C++ (which is not my favorite by any means) to run the engine.
Let's try the low-hanging fruit first. Can you try updating to JetPack 4.1, which contains TensorRT 5 (matching your desktop configuration), and see if the results improve?