Python GPU-accelerated CV library (whether OpenCV or other)?

Is there any GPU-accelerated computer vision library available on the TX2 that can be programmed with Python? I’d prefer OpenCV just from a familiarity standpoint, but that’s less important than getting GPU acceleration. (It seems to me that, if not, I’ll have to prototype in non-accelerated Python, port what I develop to C/C++, and write a Python-callable wrapper, all of which is a drag on development.)

OpenCV with CUDA (GPU) is available on the TX2.

What kind of computer vision?

Caffe can run in Python, and it’s accelerated on the TX2.
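To make the claim concrete: pycaffe exposes the device selection directly, so a Python script can flip between CPU and GPU execution. A minimal sketch (the `caffe_device_mode` wrapper is my own name; `caffe.set_mode_gpu`, `caffe.set_device`, and `caffe.set_mode_cpu` are real pycaffe calls):

```python
def caffe_device_mode(use_gpu=True):
    """Select Caffe's compute device; returns the mode actually set,
    or None if pycaffe is not importable in this environment."""
    try:
        import caffe
    except Exception:
        return None
    if use_gpu:
        caffe.set_mode_gpu()   # all subsequent forward/backward passes run on the GPU
        caffe.set_device(0)    # the TX2 has a single integrated GPU, device 0
        return "gpu"
    caffe.set_mode_cpu()
    return "cpu"
```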

The OpenCV4Tegra library that comes with the Jetson by default doesn’t seem to have Python bindings, but you can build your own OpenCV with CUDA support. Your build might be less optimized than the NVIDIA version, but it at least supports Python (and should still be faster than a build with no CUDA at all):

@gyuhyong – Are you saying that calls from Python to the cv2 module have built-in CUDA acceleration on the TX2? Is there a way to confirm that, e.g., a Python call to getCudaEnabledDeviceCount()? Is there any example code that demonstrates this acceleration?
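A hedged sketch of such a check: `cv2.getBuildInformation()` is available in all modern cv2 builds and reports the compile-time configuration, while `cv2.cuda.getCudaEnabledDeviceCount()` is only exposed to Python in builds whose bindings include the CUDA module (whether yours does depends on version and build flags). The `cuda_status` wrapper below is my own name:

```python
def cuda_status():
    """Return any CUDA-related lines from cv2's build info, plus the
    enabled-device count when the cv2.cuda bindings are present."""
    try:
        import cv2
    except ImportError:
        return "cv2 is not importable in this environment"
    # getBuildInformation() reports whether the library was built WITH CUDA.
    found = [line.strip() for line in cv2.getBuildInformation().splitlines()
             if "CUDA" in line.upper()]
    # Some builds expose the CUDA module to Python as cv2.cuda.
    if hasattr(cv2, "cuda") and hasattr(cv2.cuda, "getCudaEnabledDeviceCount"):
        try:
            found.append("devices: %d" % cv2.cuda.getCudaEnabledDeviceCount())
        except Exception:
            pass
    return "\n".join(found) if found else "no CUDA lines in build info"

print(cuda_status())
```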

@snarky – Deep learning is indeed the end game, but are Caffe’s vision capabilities CUDA-accelerated, or will the OpenCV calls used alongside Caffe be CPU-based? If, for instance, I want a pipeline that does “opencv.capture -> opencv.blur -> opencv.threshold -> opencv.edges ->”, is the whole pipeline accelerated, or only individual stages?
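For reference, in cv2 builds that expose the CUDA bindings, such a pipeline can be kept on the GPU end to end by working on a `GpuMat` so intermediate frames never round-trip to host memory. A sketch under that assumption (the `gpu_edge_pipeline` wrapper is my own name; `createGaussianFilter`, `threshold`, and `createCannyEdgeDetector` are real `cv2.cuda` factory/functions, but their availability from Python depends on your build):

```python
import numpy as np

def gpu_edge_pipeline(frame):
    """Run blur -> threshold -> edges entirely on the GPU via cv2.cuda.
    Returns the edge map as a NumPy array, or None when this cv2 build
    lacks CUDA bindings or no CUDA device is visible."""
    try:
        import cv2
        if not hasattr(cv2, "cuda") or cv2.cuda.getCudaEnabledDeviceCount() == 0:
            return None
    except Exception:
        return None
    gpu = cv2.cuda_GpuMat()
    gpu.upload(frame)                       # single host -> device copy
    blur = cv2.cuda.createGaussianFilter(cv2.CV_8UC1, cv2.CV_8UC1, (5, 5), 1.5)
    gpu = blur.apply(gpu)                   # stays on the device
    _, gpu = cv2.cuda.threshold(gpu, 127, 255, cv2.THRESH_BINARY)
    canny = cv2.cuda.createCannyEdgeDetector(50, 150)
    gpu = canny.detect(gpu)
    return gpu.download()                   # single device -> host copy
```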

On my TX2 I have built OpenCV 3.2.0 for Python 3 with CUDA support,
but if I run

import cv2

it shows an error.
Is there any other way to check it?


Did you compile OpenCV for the Pascal architecture?

For example,
Follow this page, but replace CUDA_ARCH_BIN="5.3" with CUDA_ARCH_BIN="6.2"?
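For context, the relevant CMake flags for a TX2 (Pascal, compute capability 6.2) build might look like the sketch below. The flag names are real OpenCV CMake options, but the exact module list and paths depend on your source checkout:

```shell
# Configure an OpenCV build for the TX2's Pascal GPU (sm_62).
# Run from a build/ directory inside the OpenCV source tree.
cmake -D CMAKE_BUILD_TYPE=Release \
      -D WITH_CUDA=ON \
      -D CUDA_ARCH_BIN="6.2" \
      -D CUDA_ARCH_PTX="" \
      -D BUILD_opencv_python3=ON \
      ..
```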