Inference time with TensorRT on PX2

Hi,

we have two machines:

1. Host PC (GPU: GTX 1070 Ti, CPU: i7-8700K)
2. DRIVE PX2

The same model runs roughly three times faster on the host than on the PX2.

Inference takes about 17 ms on the host versus almost 60 ms on the PX2.

The iGPU and dGPU on the PX2 run at about the same speed.

Why is there such a big gap?

Hi,

May I know which model/sample you are using for testing?
To benchmark the performance, it’s recommended to use our built-in sample for testing.
Ex. /usr/src/tensorrt/samples/sample_googlenet
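As a sketch, running the built-in benchmark on the PX2 might look like the following. The paths assume the default TensorRT install location mentioned above, and the samples need to be built first; details can vary by TensorRT release.

```shell
TRT_DIR=/usr/src/tensorrt             # default TensorRT location on PX2 (assumption)
if [ -d "$TRT_DIR/samples" ]; then
    # Build the bundled samples, then run the GoogLeNet benchmark,
    # which reports timing you can compare against the host machine.
    make -C "$TRT_DIR/samples" -j4
    "$TRT_DIR/bin/sample_googlenet"
else
    echo "TensorRT samples not found at $TRT_DIR"
fi
```

Comparing the sample's timing on both machines separates a general platform gap from something specific to your own model.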

Thanks.

We are using the SSD300 model (TensorRT).

Hi,

Thanks for your feedback.

Were you running any other GPU-related apps at the same time?
Ex. hyperionlauncher

If so, please kill that app and try again.
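One way to check for and stop a competing GPU app is sketched below; hyperionlauncher is the process named above, and pgrep/pkill are the standard procps tools.

```shell
# Check whether a known GPU app is running and stop it if so.
# hyperionlauncher is the example from this thread; add any others you use.
for app in hyperionlauncher; do
    if pgrep -x "$app" > /dev/null; then
        echo "$app is running; stopping it"
        sudo pkill -x "$app"    # may require root on the PX2
    else
        echo "$app is not running"
    fi
done
```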
Thanks.