GPU requirements to run create_inference_graph using TensorRT (trt) in tensorflow.

Hello. I understand that all CUDA capable devices can run the optimised graph, but do I require specific GPUs to optimise the graph first?

I have tried optimising a graph with precision mode ‘FP16’ on a machine with an Nvidia GeForce GTX 1050M. The graph’s storage size on disk is unchanged after the optimisation step, and inference performance is the same as before.
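For reference, this is roughly the optimisation call I am running (a sketch of the TF-TRT 1.x API; the frozen graph path and output node names are placeholders for my model):

```python
def build_trt_graph(frozen_graph_path, output_names, precision="FP16"):
    # Imports are local so this file still loads on machines
    # without TensorFlow/TensorRT installed.
    import tensorflow as tf
    from tensorflow.contrib import tensorrt as trt

    # Load the frozen (already-trained, variables-to-constants) graph.
    with tf.gfile.GFile(frozen_graph_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # create_inference_graph rewrites supported subgraphs into
    # TRTEngineOp nodes at the requested precision.
    return trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=output_names,             # e.g. ["logits"] -- model-specific
        max_batch_size=1,
        max_workspace_size_bytes=1 << 30,
        precision_mode=precision,         # "FP32", "FP16", or "INT8"
    )
```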

On an Nvidia Tesla P100 (16GB), however, the optimisation does seem to make a difference: there is visible processing while the conversion runs.

Hello,

A generated TensorRT PLAN is valid for a specific GPU — more precisely, a specific CUDA Compute Capability. For example, if you generate a PLAN for an NVIDIA P4 (compute capability 6.1) you can’t use that PLAN on an NVIDIA Tesla V100 (compute capability 7.0).

In this case, the Tesla P100 has compute capability 6.0, while the GeForce GTX 1050 has 6.1.
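The portability rule above can be sketched in plain Python (the helper is hypothetical, but the capability numbers come from NVIDIA's published tables):

```python
# CUDA compute capability per GPU, as (major, minor).
COMPUTE_CAPABILITY = {
    "Tesla P100": (6, 0),
    "GeForce GTX 1050": (6, 1),
    "Tesla P4": (6, 1),
    "Tesla V100": (7, 0),
    "Jetson Nano": (5, 3),
}

def plan_is_portable(built_on, run_on):
    """A serialized TensorRT PLAN only runs on GPUs that share
    the compute capability it was built for."""
    return COMPUTE_CAPABILITY[built_on] == COMPUTE_CAPABILITY[run_on]
```

So a PLAN built on a Tesla P4 would run on a GTX 1050 (both 6.1), but not on a V100 (7.0), and a PLAN built on the P100 (6.0) would not carry over to the 1050 (6.1).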

Ah. Thank you for the info.

There are many articles online about using TensorRT to optimise a graph for inference on the Jetson Nano (compute capability 5.3). Given the above, is this possible?

Example: https://www.dlology.com/blog/how-to-run-keras-model-on-jetson-nano/