TensorRT Workflow

Hi,

I have a basic question about the last part of the TensorRT workflow: the run-time/execution phase of the optimized engine, where the engine runs inference using input and output buffers on the GPU.

Is the inference part always a CUDA program, and can inference execution be done on any GPU, provided that GPU's compute capability is sufficient?

Thanks,
Sachin

Hi Sachin, please refer to this table in the TensorRT documentation for the supported CUDA platforms:

https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#tensorrtworkflow
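For reference, the run-time phase you're describing usually looks something like the sketch below. This is only a minimal illustration using the TensorRT Python API with pycuda for the GPU buffers; it assumes a pre-built serialized engine saved as model.engine (hypothetical file name), a fixed-shape network with one input and one output, and the binding-index style calls from the TensorRT 7/8-era Python bindings (these have been removed in newer releases):

```python
# Minimal run-time/execution sketch (assumptions noted above).
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the default GPU
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Deserialize the pre-built engine; no optimization happens at this stage.
with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate one host array and one device buffer per binding (inputs and outputs).
host_buffers, device_buffers = [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(trt.volume(shape), dtype=dtype)
    host_buffers.append(host)
    device_buffers.append(cuda.mem_alloc(host.nbytes))

# Copy the input to the GPU, run inference, copy the output back.
cuda.memcpy_htod(device_buffers[0], host_buffers[0])   # assumes binding 0 is the input
context.execute_v2([int(d) for d in device_buffers])
cuda.memcpy_dtoh(host_buffers[1], device_buffers[1])   # assumes binding 1 is the output
```

So yes, execution ultimately runs CUDA kernels on the GPU. Regarding portability, note that per the developer guide a serialized engine is specific to the exact GPU model and TensorRT version it was built with, so an engine should generally be rebuilt when moving to a GPU with a different compute capability.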