TensorRT works in Python but not in C++

Hi,

I froze my model to a .pb file, and then converted the .pb file to a .uff file on my GPU (GTX 1080 Ti). I used the

uff.from_tensorflow_frozen_model()

function as suggested in the examples.
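For reference, the conversion step was roughly the following (a minimal sketch; frozen_model.pb, model.uff, and the output node name are placeholders for my actual files and graph):

import uff

# Convert the frozen TensorFlow graph (.pb) to UFF.
# "output" is a placeholder for the real output node name in my graph.
uff.from_tensorflow_frozen_model(
    "frozen_model.pb",
    output_nodes=["output"],
    output_filename="model.uff",
)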

When I created an engine from my .uff model with Python on the GTX 1080 Ti, the engine built and executed successfully. However, when I created an engine from the same .uff model with C++ on the TX2, the engine ran but consistently produced wrong inference results.
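For context, the Python side was roughly along these lines (a sketch against the TensorRT UFF parser API as it existed in TensorRT 5-7; the tensor names and input shape are placeholders for my actual graph):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Parse the .uff model and build an engine.
# "input"/"output" and the (3, 224, 224) shape are placeholders.
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))
    parser.register_output("output")
    parser.parse("model.uff", network)
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28  # 256 MiB scratch space
    engine = builder.build_cuda_engine(network)

The C++ path on the TX2 does the equivalent with nvuffparser::createUffParser() and nvinfer1::createInferBuilder().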

Can someone explain why TensorRT works with Python on a different GPU but not with C++ on the TX2?