Hi, I’m working on converting a PyTorch/ONNX model to a TensorRT inference engine (at the moment I’m using the onnx2trt tool).
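For reference, the conversion step I'm running is roughly the following (model and engine file names are placeholders, not my actual paths):

```shell
# Build a serialized TensorRT engine from an ONNX model using onnx2trt
# -o : output engine file
# -b : max batch size (1 here, just as an illustration)
onnx2trt model.onnx -o model.trt -b 1
```

(This requires a machine with TensorRT and a compatible GPU, which is exactly what my question below is about.)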
The target hardware is an NVIDIA Quadro P6000 GPU (Pascal architecture, on an edge device), but I don’t currently have access to this type of GPU. I’m working on AWS EC2 using V100 GPUs (or I could optionally switch to K80s, or to G4 instances, which carry T4 GPUs).
My question is: can I build and optimize an engine targeting the P6000 from a V100, K80, or T4, or is this a hard blocker, meaning I’ll definitely need to obtain the target hardware beforehand?
Thanks!