Building inference engine without target hardware?

Hi, I’m working on converting a PyTorch/ONNX model to a TensorRT inference engine (at the moment I’m using the onnx2trt tool).

The target hardware is an NVIDIA Pascal P6000 GPU (an edge device), but I don’t currently have access to this type of GPU. I’m working on AWS EC2 with V100 GPUs (or could optionally switch to K80s, or to the T4s in g4 instances).

My question is: can I build an engine optimized for a P6000 from a V100/K80/T4, or is this a hard blocker, meaning I’ll definitely need to get hold of the target hardware first?

Thanks!

The generated plan files are not portable across platforms or TensorRT versions. They are also specific to the exact GPU model they were built on, so to run on a different GPU the engine must be rebuilt for that GPU. In short, an engine built on your V100/K80/T4 instances won’t be valid on the P6000; you’ll need the target hardware (or an identical GPU) to build the final engine.
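In practice that means shipping the ONNX model to the target device and doing the build step there. Here is a minimal sketch of that on-device build using the TensorRT Python API (this assumes TensorRT 8.4 or later; `model.onnx` and `model.plan` are placeholder paths):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str, plan_path: str) -> None:
    """Parse an ONNX model and serialize a TensorRT plan tuned for *this* GPU."""
    builder = trt.Builder(TRT_LOGGER)
    # The ONNX parser requires an explicit-batch network.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX file")

    config = builder.create_builder_config()
    # 1 GiB of builder workspace; tune this for your device.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

    # Kernel selection happens here against the GPU this script runs on,
    # which is exactly why the resulting plan is not portable.
    serialized = builder.build_serialized_network(network, config)
    with open(plan_path, "wb") as f:
        f.write(serialized)

build_engine("model.onnx", "model.plan")
```

The same thing can be done from the command line with `trtexec --onnx=model.onnx --saveEngine=model.plan`, which may be more convenient on an edge device where you don’t want to install a Python environment.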

Thanks
