I see that OpenVX 1.2 has some neural net optimisations in. When will VisionWorks support that?
Hi jetsonnvidia, we don’t plan for future upgrades of VisionWorks to OpenVX 1.2. It is recommended to eventually migrate to the new VPI interface (a beta of which was included with JetPack 4.3, with more features and optimizations to be added in future releases), the NPP library, and TensorRT for DNN inferencing.
Okay… I have been trying to understand OpenVX as it is used heavily by VisionWorks.
Should I still be learning about OpenVX? Does VPI use OpenVX?
Please can you provide some links for learning VPI?
Additionally, please can you confirm that VPI is for Jetson Nano?
The release notes for JetPack 4.3 state that it is only for Jetson Xavier. Thanks.
VPI doesn’t use OpenVX - you can find the docs for VPI here: https://docs.nvidia.com/vpi/index.html
You can also find the VPI samples on your Jetson (running JetPack 4.3) under /opt/nvidia/vpi/vpi-0.1/samples
VPI is for Jetson Nano/TX1/TX2/Xavier, and includes backend engines for GPU (CUDA), CPU, and PVA (Xavier vision accelerator)
What the docs state is only supported on AGX Xavier is the PVA backend, since the PVA hardware only exists on AGX Xavier. The other Jetson platforms (such as Nano) therefore only have the GPU and CPU backends available in VPI. And as noted in the release notes, the GPU and CPU implementations are not optimized for performance in this preview release of VPI; a future release will bring performance-optimized GPU and CPU implementations.
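To make the backend picture concrete, here is a minimal sketch of submitting one algorithm to an explicitly chosen backend. This assumes the current VPI C API (headers, `vpiStreamCreate`, `vpiSubmitGaussianFilter`, and the `VPI_BACKEND_*` flags from later releases); the 0.1 beta shipped with JetPack 4.3 may name things differently, so treat it as illustrative rather than copy-paste for the preview.

```c
/* Sketch: Gaussian blur on an explicitly chosen VPI backend.
   Assumes the current VPI C API; names in the 0.1 beta may differ. */
#include <vpi/Image.h>
#include <vpi/Stream.h>
#include <vpi/algo/GaussianFilter.h>

int main(void)
{
    VPIStream stream = NULL;
    VPIImage input = NULL, output = NULL;

    /* A stream queues work; submissions run asynchronously on it. */
    vpiStreamCreate(0, &stream);

    /* 640x480 grayscale images (contents left uninitialized here). */
    vpiImageCreate(640, 480, VPI_IMAGE_FORMAT_U8, 0, &input);
    vpiImageCreate(640, 480, VPI_IMAGE_FORMAT_U8, 0, &output);

    /* Backend choice per platform: VPI_BACKEND_CUDA or VPI_BACKEND_CPU
       on Nano/TX1/TX2; VPI_BACKEND_PVA is additionally available on
       AGX Xavier, where the PVA hardware exists. */
    vpiSubmitGaussianFilter(stream, VPI_BACKEND_CUDA, input, output,
                            5, 5, 1.0f, 1.0f, VPI_BORDER_ZERO);

    /* Block until the submitted work finishes. */
    vpiStreamSync(stream);

    vpiImageDestroy(output);
    vpiImageDestroy(input);
    vpiStreamDestroy(stream);
    return 0;
}
```

The point of the design is that the same algorithm call works unchanged across backends; only the backend flag changes, so code written for CUDA/CPU on Nano can later target PVA on Xavier.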
Okay, well I’m going to forget about Nvidia frameworks (except OpenCV on CUDA) until Nvidia can sort themselves out and release something stable.
Why will VisionWorks be deprecated? I find OpenVX quite good for traditional vision applications. Please at least keep the old VisionWorks in JetPack, and I can extend it by myself.
Hi @cnhzcy14, we don’t plan to remove VisionWorks from JetPack anytime soon; however, when VPI is mature and has feature parity with VisionWorks, you should consider migrating to it instead.