Which API runs model inference on Jetson Orin?

I am looking for the C++ API that performs model inference on Jetson Orin. Could the NVIDIA developer support team please point me to the API documentation or refer me to the specific API?


The TensorRT API `enqueue` (on `IExecutionContext`; `enqueueV3` in recent releases) is the one used for running inference. Hence marking this question as resolved.
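For anyone landing here later, a minimal sketch of that flow: deserialize a prebuilt engine, bind device buffers, and call `enqueueV3` on a CUDA stream. This assumes TensorRT 8.5+ on JetPack; the engine file name, tensor names (`"input"`, `"output"`), and buffer sizes below are placeholders for illustration, not real values from this thread.

```cpp
// Sketch of TensorRT C++ inference on Jetson Orin (assumes TensorRT 8.5+).
// File/tensor names and sizes are hypothetical examples.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// TensorRT requires a logger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << "\n";
    }
};

int main() {
    Logger logger;

    // Load a serialized engine built offline (e.g. with trtexec).
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* context = engine->createExecutionContext();

    // Allocate device buffers; tensor names must match the network's I/O names.
    void* dIn  = nullptr;
    void* dOut = nullptr;
    cudaMalloc(&dIn,  3 * 224 * 224 * sizeof(float));  // example input size
    cudaMalloc(&dOut, 1000 * sizeof(float));           // example output size
    context->setTensorAddress("input",  dIn);
    context->setTensorAddress("output", dOut);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV3(stream);       // asynchronous inference call
    cudaStreamSynchronize(stream);    // wait until results are ready

    // ... copy dOut back to host with cudaMemcpyAsync and post-process ...

    cudaStreamDestroy(stream);
    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```

On TensorRT versions before 8.5, the equivalent call is `enqueueV2` with a bindings array instead of `setTensorAddress`.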

Glad to know you resolved the issue.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.