TensorRT and OpenVX compatibility problem


I realized that the Jetson Xavier can run OpenVX applications.
But I am wondering whether OpenVX and TensorRT have any compatibility layer or API that would let me use a TensorRT engine (or its inference process) as a node in an OpenVX graph?

If I have misunderstood anything, please let me know.


TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @disculus2012,

I don't believe there is a built-in integration between TensorRT and OpenVX. However, you could try wrapping the TensorRT inference call yourself, for example as a custom OpenVX user kernel.
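For reference, here is a rough sketch of what such a wrapper could look like: a serialized TensorRT engine is loaded once, and an OpenVX user kernel copies the input `vx_tensor` to device memory, runs inference, and writes the result into the output `vx_tensor`. This is illustrative only, not a supported integration — the tensor shapes, binding order, kernel name/enum, and the `registerTrtKernel` helper are assumptions, error handling is trimmed, and it assumes OpenVX 1.2 (`vx_tensor`) plus TensorRT 7.x headers on the Jetson:

```cpp
#include <VX/vx.h>
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <vector>

// Minimal TensorRT logger.
class TrtLogger : public nvinfer1::ILogger {
    void log(Severity sev, const char* msg) noexcept override {
        if (sev <= Severity::kWARNING) std::printf("[TRT] %s\n", msg);
    }
};
static TrtLogger gLogger;
static nvinfer1::ICudaEngine* gEngine = nullptr;
static nvinfer1::IExecutionContext* gTrt = nullptr;

// Illustrative fixed sizes; a real node would query them from the
// engine bindings and the vx_tensor metadata instead.
static const vx_size kInElems  = 3 * 224 * 224;
static const vx_size kOutElems = 1000;

// OpenVX kernel callback: host<->device copies plus one TensorRT execution.
static vx_status VX_CALLBACK trtProcess(vx_node node,
                                        const vx_reference params[],
                                        vx_uint32 num)
{
    vx_tensor in  = (vx_tensor)params[0];
    vx_tensor out = (vx_tensor)params[1];

    std::vector<float> hostIn(kInElems), hostOut(kOutElems);
    vx_size start[4]    = {0, 0, 0, 0};
    vx_size inEnd[4]    = {224, 224, 3, 1};  // illustrative view
    vx_size inStride[4] = {sizeof(float), 224 * sizeof(float),
                           224 * 224 * sizeof(float), kInElems * sizeof(float)};
    vxCopyTensorPatch(in, 4, start, inEnd, inStride, hostIn.data(),
                      VX_READ_ONLY, VX_MEMORY_TYPE_HOST);

    void *dIn = nullptr, *dOut = nullptr;
    cudaMalloc(&dIn,  kInElems  * sizeof(float));
    cudaMalloc(&dOut, kOutElems * sizeof(float));
    cudaMemcpy(dIn, hostIn.data(), kInElems * sizeof(float),
               cudaMemcpyHostToDevice);

    void* bindings[2] = {dIn, dOut};  // assumes binding 0 = input, 1 = output
    gTrt->executeV2(bindings);        // synchronous TensorRT inference

    cudaMemcpy(hostOut.data(), dOut, kOutElems * sizeof(float),
               cudaMemcpyDeviceToHost);
    vx_size outEnd[1]    = {kOutElems};
    vx_size outStride[1] = {sizeof(float)};
    vxCopyTensorPatch(out, 1, start, outEnd, outStride, hostOut.data(),
                      VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);

    cudaFree(dIn);
    cudaFree(dOut);
    return VX_SUCCESS;
}

// Registration: deserialize the engine once, then publish the user kernel.
vx_kernel registerTrtKernel(vx_context ctx, const char* enginePath)
{
    std::ifstream f(enginePath, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* rt = nvinfer1::createInferRuntime(gLogger);
    gEngine = rt->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    gTrt    = gEngine->createExecutionContext();

    // NOTE: a production kernel also needs a validate callback that sets
    // the output tensor metadata so vxVerifyGraph can succeed.
    vx_kernel k = vxAddUserKernel(ctx, "user.kernel.trt_infer",
                                  VX_KERNEL_BASE(VX_ID_DEFAULT, 0) + 1,
                                  trtProcess, 2,
                                  nullptr /*validate*/, nullptr, nullptr);
    vxAddParameterToKernel(k, 0, VX_INPUT,  VX_TYPE_TENSOR,
                           VX_PARAMETER_STATE_REQUIRED);
    vxAddParameterToKernel(k, 1, VX_OUTPUT, VX_TYPE_TENSOR,
                           VX_PARAMETER_STATE_REQUIRED);
    vxFinalizeKernel(k);
    return k;
}
```

The host round-trip through `vxCopyTensorPatch` is the simplest approach but adds copies; on Jetson you could avoid them by importing the tensor's device memory directly, at the cost of more vendor-specific code.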

Thank you.