VisionWorks + Inference

Hi Milind,

You are correct that the Deep Vision Inference API expects a CUDA device memory pointer.
The following are possible ways to use VisionWorks and CUDA together:

  1. Allocate an image with vxCreateImage() and obtain the pointer with vxMapImagePatch()
  2. Make a VisionWorks User Custom Node that handles the CUDA / Deep Vision processing
  3. Use vxMapImagePatch() with the NVX_MEMORY_TYPE_CUDA flag to obtain the CUDA device pointer from the opaque handle
  4. Use the NVX CUDA API to access the device pointer directly
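Options 1 and 3 can be combined into one flow. Below is a minimal sketch (not compiled here, since it requires the VisionWorks SDK headers); the image size and format are placeholders, and the inference call itself is left as a comment:

```cpp
#include <VX/vx.h>
#include <NVX/nvx.h>   // VisionWorks extensions, e.g. NVX_MEMORY_TYPE_CUDA

void run_inference_on_vx_image(vx_context context)
{
    // 1. Allocate a VisionWorks image (640x480 U8 chosen as an example).
    vx_image image = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    vx_rectangle_t rect = { 0, 0, 640, 480 };
    vx_map_id map_id;
    vx_imagepatch_addressing_t addr;
    void *dev_ptr = NULL;

    // 3. Map plane 0 with NVX_MEMORY_TYPE_CUDA to get a CUDA device
    //    pointer instead of a host-side copy.
    vx_status status = vxMapImagePatch(image, &rect, 0, &map_id, &addr,
                                       &dev_ptr, VX_READ_ONLY,
                                       NVX_MEMORY_TYPE_CUDA, 0);
    if (status == VX_SUCCESS)
    {
        // dev_ptr is now a CUDA device pointer; addr.stride_y gives the
        // row pitch in bytes. Pass it to the inference API here.

        vxUnmapImagePatch(image, map_id);
    }

    vxReleaseImage(&image);
}
```

Remember to unmap before the graph processes the image again, since the pointer is only valid between map and unmap.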

Download the VisionWorks documentation and navigate to the following pages:

NVIDIA_VisionWorks_Reference/nvvwx_docs/group__nvx__tutorial__user__custom__node.html
NVIDIA_VisionWorks_Reference/nvvwx_docs/group__nvx__tutorial__cuda__interoperability__1.html
NVIDIA_VisionWorks_Reference/nvvwx_docs/group__group__image.html#ga7f680c04462fcb05a0eae1a96dd923e3
NVIDIA_VisionWorks_Reference/nvvwx_docs/group__nvx__tutorial__cuda__api__1.html