I ran one of the sample Python programs, sample.py, from the directory
/usr/src/tensorrt/samples/python/network_api_pytorch_mnist/
I have a question about the statement below. I wanted to check whether do_inference() is executed on the CPU or the GPU. If it runs on the CPU, I would like to know how to change the code to perform inference on the GPU.
[output] = common.do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
Hi,
TensorRT performs inference on the GPU, so no code change is needed. The do_inference() helper in the samples' common.py copies the input buffers from host (CPU) memory to device (GPU) memory, launches the engine on a CUDA stream, which is where the actual inference executes, and then copies the outputs back to the host.
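For reference, here is a minimal sketch of what the helper roughly does. It is based on common.py in the TensorRT Python samples, not an exact copy: the inputs/outputs are assumed to be the sample's HostDeviceMem pairs (.host is a pinned CPU buffer, .device a GPU allocation), and the exact execute call varies between TensorRT versions.

import pycuda.driver as cuda
import pycuda.autoinit  # creates the CUDA context on import

# Sketch of a do_inference()-style helper, assuming HostDeviceMem
# objects with .host/.device buffers as in the sample's common.py.
def do_inference(context, bindings, inputs, outputs, stream):
    # Copy input data from host (CPU) memory to device (GPU) memory.
    for inp in inputs:
        cuda.memcpy_htod_async(inp.device, inp.host, stream)
    # Launch the engine on the CUDA stream; the inference itself
    # executes on the GPU. (Older TensorRT versions use execute_async
    # with a batch_size argument instead of execute_async_v2.)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    # Copy the results from device (GPU) memory back to host memory.
    for out in outputs:
        cuda.memcpy_dtoh_async(out.host, out.device, stream)
    # Block until all queued GPU work has finished.
    stream.synchronize()
    # Return the host-side output buffers.
    return [out.host for out in outputs]

You can also confirm the GPU is in use by watching nvidia-smi while the script runs.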
Thanks.