VPI and VisionWorks questions

Hi,

I have a few questions regarding the VPI and Visionworks.

https://docs.nvidia.com/vpi/index.html
https://developer.nvidia.com/embedded/visionworks

  1. It seems that these two frameworks have similar functionality. Does NVIDIA recommend using one of them on the Jetson Xavier?
  2. Is it possible to use functions from VisionWorks (or from other CUDA libraries such as NPP, https://docs.nvidia.com/cuda/npp/index.html) in VPI, or the other way around?
  3. Regarding the VPI pipeline:

https://docs.nvidia.com/vpi/architecture.html#arch_complex_pipeline

Are there any samples that implement such a pipeline that can be compiled and run?

Thanks,
Gabi

Hi,

1. Yes, please use VPI.
VisionWorks is our legacy vision library, and we currently have no plans to update it.

2. The VPI samples use the OpenCV library as an image reader.
You should be able to follow similar steps to combine VPI with VisionWorks/NPP.

3. You can check this sample for more information:
/opt/nvidia/vpi/vpi-0.1/samples/02-stereo_disparity
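
Regarding point 2, the interop asked about later in this thread (feeding an NPP-allocated GPU buffer into VPI without a host round trip) can in principle be done by wrapping the device memory in a VPIImage. The sketch below uses VPI 1.x-style names (`vpiImageCreateCUDAMemWrapper`, `VPIImageData`); the VPI 0.1 headers shipped with early JetPack releases name these differently, so treat the exact identifiers as assumptions and check the headers of your installed release:

```cpp
// Hedged sketch: alias an NPP-allocated CUDA buffer as a VPIImage so a
// VPI algorithm can consume it without copying through host memory.
// API names follow VPI 1.x; VPI 0.1 uses different identifiers.
#include <vpi/Image.h>

#include <cstring>

// devPtr:     device pointer returned by e.g. nppiMalloc_8u_C1()
// pitchBytes: the row step NPP reported for that allocation
VPIImage wrapNppBuffer(void *devPtr, int width, int height, int pitchBytes)
{
    VPIImageData data;
    std::memset(&data, 0, sizeof(data));

    data.format               = VPI_IMAGE_FORMAT_U8; // 8-bit grayscale
    data.numPlanes            = 1;
    data.planes[0].width      = width;
    data.planes[0].height     = height;
    data.planes[0].pitchBytes = pitchBytes;
    data.planes[0].data       = devPtr;              // existing GPU memory

    VPIImage img = NULL;
    // Creates a VPIImage that references the CUDA allocation in place;
    // no copy to host memory is made.
    vpiImageCreateCUDAMemWrapper(&data, 0, &img);
    return img;
}
```

The same idea should work in the other direction (locking a VPI image to obtain its device pointer for NPP), but again the exact entry points vary between VPI versions.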

Thanks.

I tested the VPI CUDA example for disparity and the VisionWorks example for disparity.
The VPI CUDA version runs very slowly for some reason, while the VisionWorks version runs in real time.
Also, the PVA disparity is limited to a size of 480x270.
What does Nvidia say about that?

I also saw that NVIDIA has a standalone implementation of disparity in the CUDA samples. Are all of those semi-global block matching disparity implementations the same (standalone, VPI, VisionWorks)?

Before calling the disparity estimator I use a few NPP functions. Is there a way to use the NPP outputs, which are already allocated in GPU memory, as inputs to the VPI or VisionWorks disparity without copying them to host memory and then back to GPU memory?

Thanks,
Gabi

Hi,

The current VPI sample targets the PVA; the CUDA and CPU versions are not yet optimized.

The resolution of the disparity estimator is a limitation of the PVA.
We are discussing the possibility of supporting more resolutions, but there is no concrete plan yet.

Please note that the PVA is extra hardware intended to offload work from the GPU.
So the ideal use case is to leverage the PVA for vision tasks and use the GPU for other tasks instead.
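
To make that split concrete: VPI lets you bind a stream to a specific backend, so the disparity estimator can be pinned to the PVA while the GPU stays free for other kernels. A minimal sketch, again using VPI 1.x-style names (the 0.1 API differs, so verify against your installed headers):

```cpp
#include <vpi/Stream.h>

// Create a stream whose work is dispatched to the PVA; algorithms
// submitted to it will not compete with CUDA kernels on the GPU.
VPIStream stream = NULL;
vpiStreamCreate(VPI_BACKEND_PVA, &stream);

// ... submit the stereo-disparity estimator on this stream ...
// (the 480x270 input-size limit discussed above still applies on PVA)

vpiStreamSync(stream);     // wait for the PVA work to finish
vpiStreamDestroy(stream);
```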

Thanks.

Hi,

I understand that the PVA is meant to offload computer-vision work from the GPU, so it should have some flexibility in the image sizes it supports. Please consider adding the following standard resolutions:
640x360
640x512
1280x720 (720p)
Also, I couldn't see a remap function in the NPP libraries for CUDA; please consider adding it as well.
When do you expect to have an optimized VPI framework?

Thanks.

Hi,

Thanks for your feedback.
I will pass your request to our internal team.

And sorry, we cannot disclose our schedule here, so please watch our announcements for updates.
Thanks.