CUDA Surfaces as input for TensorRT model

Description

I am using D3D12 interop and have mapped a texture to a CUDA surface. Is it possible to pass that surface directly to TensorRT as an input, or do the surface contents need to be copied to linear memory first so that TensorRT can read them?
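For context, a minimal sketch of the setup being described, assuming the texture is shared via an NT handle and is an RGBA8 2D texture; the handle, sizes, and format here are placeholders for the actual application:

```cpp
#include <cuda_runtime.h>
#include <windows.h>

// Import a shared D3D12 texture into CUDA and wrap it in a surface object.
// All parameters are assumptions for illustration.
cudaSurfaceObject_t mapD3D12Texture(HANDLE sharedHandle, size_t totalBytes,
                                    int width, int height) {
    // Import the D3D12 resource as CUDA external memory.
    cudaExternalMemoryHandleDesc memDesc = {};
    memDesc.type = cudaExternalMemoryHandleTypeD3D12Resource;
    memDesc.handle.win32.handle = sharedHandle;
    memDesc.size = totalBytes;
    memDesc.flags = cudaExternalMemoryDedicated;
    cudaExternalMemory_t extMem;
    cudaImportExternalMemory(&extMem, &memDesc);

    // Map the memory as a mipmapped array and take mip level 0.
    cudaExternalMemoryMipmappedArrayDesc arrDesc = {};
    arrDesc.formatDesc = cudaCreateChannelDesc<uchar4>();  // assuming RGBA8
    arrDesc.extent = make_cudaExtent(width, height, 0);
    arrDesc.numLevels = 1;
    arrDesc.flags = cudaArraySurfaceLoadStore;  // needed for surface access
    cudaMipmappedArray_t mipArray;
    cudaExternalMemoryGetMappedMipmappedArray(&mipArray, extMem, &arrDesc);
    cudaArray_t level0;
    cudaGetMipmappedArrayLevel(&level0, mipArray, 0);

    // Wrap the cudaArray in a surface object.
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = level0;
    cudaSurfaceObject_t surf;
    cudaCreateSurfaceObject(&surf, &resDesc);
    return surf;
}
```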

Environment

TensorRT Version: 8.6
GPU Type: RTX 4090
Nvidia Driver Version: 537.13
CUDA Version: 12.2
CUDNN Version: 8.9
Operating System + Version: Windows 11
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Baremetal

Relevant Files

N/A

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

It is not possible to pass a CUDA surface directly to TensorRT as an input. TensorRT requires that all input and output bindings reside in linear device memory, whereas a surface is backed by a `cudaArray` with an opaque, hardware-specific layout. Copy the data from the underlying `cudaArray` into a linear device buffer first, e.g. with `cudaMemcpy2DFromArray`, and bind that buffer to TensorRT.
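A minimal sketch of the copy step, assuming the interop has already produced a `cudaArray_t` for the texture's mip level 0 and that the texture is 4 bytes per pixel (e.g. RGBA8); `levelArray`, the dimensions, and the tensor name are placeholders:

```cpp
#include <cuda_runtime.h>

// Copy the opaque array backing the surface into a linear device buffer
// that can then be bound as a TensorRT input.
void* copyArrayToLinear(cudaArray_t levelArray, int width, int height) {
    const size_t rowBytes = static_cast<size_t>(width) * 4;  // assumed RGBA8

    // Linear device buffer that will be handed to TensorRT.
    void* linearInput = nullptr;
    cudaMalloc(&linearInput, rowBytes * height);

    // Device-to-device copy out of the cudaArray into linear memory.
    cudaMemcpy2DFromArray(linearInput, rowBytes,   // dst, dst pitch
                          levelArray, 0, 0,        // src, x/y offset
                          rowBytes, height,        // width in bytes, rows
                          cudaMemcpyDeviceToDevice);
    return linearInput;
}
```

The returned pointer can then be set on the execution context, e.g. with `context->setTensorAddress("input", linearInput)` in the TensorRT 8.5+ API, before calling `enqueueV3`. The copy stays on the device, so the overhead is a single device-to-device transfer per frame.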