TensorRT model performance on desktop GPU

We’re converting our Keras U-Net model, trained with TF 1.15 on a desktop GPU, for deployment on a Jetson TX2. Before the TX2 comes online, we ran a test to see how much faster the TensorRT models (FP32, FP16) are on the desktop GPU, and were surprised to see no real difference. Is that normal, or do we only see the difference on Jetson devices?
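One thing worth ruling out when two variants benchmark identically is the measurement itself: with TF-TRT the first inference call can include engine building, so timing without warm-up runs can hide real differences. Below is a minimal, hedged timing sketch; `benchmark`, `predict`, and `batch` are hypothetical names, not part of any specific API.

```python
import time

def benchmark(predict, batch, n_warmup=10, n_runs=50):
    """Average per-call latency of an inference callable.

    Warm-up iterations are discarded first, since the initial
    TF-TRT call may trigger engine building and skew the timing.
    """
    for _ in range(n_warmup):
        predict(batch)
    start = time.perf_counter()
    for _ in range(n_runs):
        predict(batch)
    return (time.perf_counter() - start) / n_runs

# Hypothetical usage: compare the original and converted models
# on identical input, e.g.
#   t_fp32 = benchmark(trt_fp32_model.predict, sample_batch)
#   t_fp16 = benchmark(trt_fp16_model.predict, sample_batch)
```

Comparing averages over many runs, with the same batch size and input shape for each variant, gives a fairer picture than a single timed call.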

Thanks.

Kelvin

Hi,

The result depends on the hardware architecture.
Which GPU do you use?

Thanks.

We use a Titan RTX.

Kelvin

Hi,

Which TensorRT workflow do you use?
Pure TensorRT, or TF-TRT (TensorRT integrated into TensorFlow)?

If you are using pure TensorRT, would you mind sharing the model with us so we can give it a try?
It should be a UFF or ONNX format file.
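For reference, a common way to produce an ONNX file from a TF 1.x model is the tf2onnx converter; this is a sketch only, and the paths shown (`saved_model_dir`, `unet.onnx`) are placeholders for your own model.

```shell
# Convert a TF SavedModel to ONNX with tf2onnx (paths are placeholders)
python -m tf2onnx.convert \
    --saved-model saved_model_dir \
    --output unet.onnx
```

The resulting ONNX file can then be fed to pure TensorRT (for example via `trtexec --onnx=unet.onnx`) on either the desktop GPU or the TX2.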

Thanks.