The story:
- I have a Jetson TX2 on which I need to run a model (detecting objects in images).
- I'm given a model, which is converted to UFF.
- I use bin2c.py to produce C-style code that I include in my sampleUffMNIST.cpp (a heavily patched sample file).
- I use the sample build setup to produce a binary from that sampleUffMNIST.cpp (right on the Jetson).
- I run that binary and it waits on an AF_UNIX socket...
- I run feed.py, which grabs the images and feeds them to sampleUffMNIST.cpp over that local socket, and
- feed.py fetches the output from sampleUffMNIST.cpp (expected to be a matrix of "probabilities" that each pixel belongs to an object).
- The matrix is then applied to the source image, giving an image with everything masked out except the detected objects.
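To make the exchange above concrete, here is a rough sketch of the feed.py side. The socket path, the 28x28 input size, and the little-endian float32 wire format are all my assumptions for illustration (the real protocol is in the linked sources), and load_image_somehow is a hypothetical loader:

```python
import socket
import struct

SOCK_PATH = "/tmp/trt.sock"   # assumed socket path
H, W = 28, 28                 # assumed input dimensions

def send_image(sock, pixels):
    """Send H*W float32 pixels; the C++ side must read the exact same layout."""
    sock.sendall(struct.pack("<%df" % len(pixels), *pixels))

def recv_probs(sock, count):
    """Read `count` float32 probabilities back from the engine."""
    buf = b""
    while len(buf) < count * 4:
        chunk = sock.recv(count * 4 - len(buf))
        if not chunk:
            raise ConnectionError("engine closed the socket")
        buf += chunk
    return list(struct.unpack("<%df" % count, buf))

def apply_mask(pixels, probs, threshold=0.5):
    """Zero out every pixel whose object probability is below the threshold."""
    return [p if q >= threshold else 0.0 for p, q in zip(pixels, probs)]

def main():
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(SOCK_PATH)
    pixels = load_image_somehow()        # hypothetical image loader
    send_image(sock, pixels)
    probs = recv_probs(sock, H * W)
    masked = apply_mask(pixels, probs)   # image with non-objects zeroed
```

The key point is that send_image and the C++ reader must agree byte-for-byte on element count, dtype, and endianness.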
I have a mockup in Python3 (just a rewrite of the same original sampleUffMNIST.cpp) on a PC with a GPU, and it works fine!
But there is no TensorRT Python binding on the Jetson.
Hence, I have to code in C++ (not my favorite by any means) to run the engine.
This doesn't work: I get garbage on the output.
Here we are.
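For what it's worth, one classic way to get garbage in a split setup like this (just a guess on my part, not confirmed from the code) is a float layout mismatch between the two ends of the socket. A minimal demonstration using only struct:

```python
import struct

vals = (0.25, 0.5, 0.75)
wire = struct.pack("<3f", *vals)      # little-endian float32, 12 bytes

# Reading the same bytes with the wrong byte order produces garbage values:
wrong = struct.unpack(">3f", wire)
assert wrong != vals

# Reading with the matching format recovers the originals exactly
# (these values are exactly representable in float32):
right = struct.unpack("<3f", wire)
assert right == vals
```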
The details:
- Jetson TX2 running tegra-ubuntu, kernel 4.4.38-tegra
- tensorrt 4.0.2.0-1+cuda9.0 package on the TX2
- tensorrt 5.0.0.10-1+cuda9.0 package on the PC/GPU
- the source code for the mentioned files is available here
Any help would be appreciated!