Hello, I would like to ask whether it is possible to run inference with a PyTorch model on Jetson hardware without using a DeepStream pipeline, while still taking advantage of the GPU cores.
Thank you in advance for your support!
Hi,
It’s possible.
On Jetson, we recommend using TensorRT for inference. We have several examples that deploy a PyTorch model with TensorRT in the below link:
Thanks.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.