Can the Xavier run OpenCL applications?

Question pretty much says it all, can the Xavier run OpenCL applications?

Based on this post https://devtalk.nvidia.com/default/topic/1010166/jetson-tx2/does-jetson-tx1-or-tx2-support-opencl/ I’m guessing the answer is no, but I’d prefer official confirmation from NVIDIA personnel.


Hi cdahms123, OpenCL is not supported on Jetson AGX Xavier.


Searching for a way to use OpenCL on my Jetson Xavier with the MVTec HALCON library, I came across a YouTube instructional video on installing OpenCL on the Jetson Nano. Alan (the author) installs some build tools and libraries and then compiles pocl.

Now I’ve followed the instructions and get some interesting output on my Jetson when running clinfo:

Number of platforms                               1
  Platform Name                                   Portable Computing Language
  Platform Vendor                                 The pocl project
  Platform Version                                OpenCL 1.2 pocl 1.3 Release, LLVM 6.0.0, SLEEF, POCL_DEBUG, FP16
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd
  Platform Extensions function suffix             POCL
  Platform Name                                   Portable Computing Language
Number of devices                                 1
  Device Name                                     pthread-0x004
  Device Vendor                                   0x4e
  Device Vendor ID                                0x13b5
  Device Version                                  OpenCL 1.2 pocl HSTR: pthread-aarch64-unknown-linux-gnu-cortex-a57
  Driver Version                                  1.3
  Device OpenCL C Version                         OpenCL C 1.2 pocl
  Device Type                                     CPU
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               8
  Max clock frequency                             2265MHz
  Device Partition                                (core)
    Max number of sub-devices                     8
    Supported partition types                     equally, by counts
  Max work item dimensions                        3
  Max work item sizes                             4096x4096x4096
  Max work group size                             4096
  Preferred work group size multiple              8
  Preferred / native vector sizes
    char                                                16 / 16
    short                                                8 / 8
    int                                                  4 / 4
    long                                                 2 / 2
    half                                                 8 / 8        (cl_khr_fp16)
    float                                                4 / 4
    double                                               2 / 2        (cl_khr_fp64)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             No
    Round to nearest                              No
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              31321673728 (29.17GiB)
  Error Correction support                        No
  Max memory allocation                           8589934592 (8GiB)
  Unified memory for Host and Device              Yes
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        2097152 (2MiB)
  Global Memory cache line size                   64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             16
    Max size for 1D images from buffer            536870912 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 128
    Max number of write image args                128
  Local memory type                               Global
  Local memory size                               1048576 (1024KiB)
  Max number of constant args                     8
  Max constant buffer size                        1048576 (1024KiB)
  Max size of kernel argument                     1024
  Queue properties
    Out-of-order execution                        No
    Profiling                                     Yes
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      1ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            Yes
  printf() buffer size                            16777216 (16MiB)
  Built-in kernels
  Device Extensions                               **cl_khr_byte_addressable_store** cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_3d_image_writes cl_khr_fp16 cl_khr_fp64

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  Portable Computing Language
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [POCL]
  clCreateContext(NULL, ...) [default]            Success [POCL]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 Portable Computing Language
    Device Name                                   pthread-0x004
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  Success (1)
    Platform Name                                 Portable Computing Language
    Device Name                                   pthread-0x004
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 Portable Computing Language
    Device Name                                   pthread-0x004

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.2.11
  ICD loader Profile                              OpenCL 2.1

As @cdahms123 mentioned in a thread he started on the topic, the device should be detected as a compute device by HALCON under this condition:

At present, HALCON only supports OpenCL compatible GPUs supporting the OpenCL extension cl_khr_byte_addressable_store and image objects. If you are not sure whether a certain device is supported, please refer to the manufacturer.

cl_khr_byte_addressable_store is detected as seen in the clinfo output above.
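One quick way to check for that specific extension, without scanning the full clinfo dump by eye, is to match it as a whole token in the extensions string. A minimal shell sketch (the extension list below is pasted from the output above purely for illustration):

```shell
# Extensions string as reported by clinfo above (pasted for illustration);
# on the device itself something like `clinfo | grep -wo cl_khr_byte_addressable_store`
# should work as well.
exts="cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_fp16 cl_khr_fp64"

# grep -qx matches a whole line, so a prefix such as "cl_khr_fp"
# does not accidentally match cl_khr_fp16
if printf '%s\n' $exts | grep -qx 'cl_khr_byte_addressable_store'; then
    echo "cl_khr_byte_addressable_store: present"
else
    echo "cl_khr_byte_addressable_store: missing"
fi
```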

Now I wonder why the hbench utility still doesn’t detect the GPU. As I’m at the limit of the device in terms of performance, I need all the acceleration I can get …

Am I misunderstanding something about OpenCL support on Jetson, or is this “workaround” through pocl not actually working properly?

@dusty_nv Thanks for your great GitHub repos for Jetson ML BTW. They’ve helped me a lot!

You did not compile PoCL with CUDA support. For this, you have to append the “-DENABLE_CUDA=1” flag when calling cmake before compiling.
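For reference, a rebuild along those lines might look like this (a sketch, assuming a pocl source checkout and the CUDA toolkit are already in place; everything beyond the -DENABLE_CUDA=1 flag is an assumption to adjust to your setup):

```shell
# Hypothetical out-of-tree rebuild of pocl with the CUDA backend enabled
cd pocl
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_CUDA=1 ..
make -j"$(nproc)"
sudo make install

# clinfo should now also list a GPU device on the pocl platform
clinfo | grep -i 'Device Type'
```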


Thanks, Jerome. I’m recompiling PoCL as we speak. Will keep you updated.

I can confirm that it can be compiled with CUDA support, but I’m not sure if everything is correct. At least I don’t see image support. Any ideas?

  Device Name                                     Xavier
  Device Vendor                                   NVIDIA Corporation
  Device Vendor ID                                0x10de
  Device Version                                  OpenCL 1.2 pocl HSTR: CUDA-sm_72
  Driver Version                                  1.6
  Device OpenCL C Version                         OpenCL C 1.2 pocl
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               8
  Max clock frequency                             1377MHz
  Device Partition                                (core)
    Max number of sub-devices                     1
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x64
  Max work group size                             1024
  Preferred work group size multiple              32
  Preferred / native vector sizes
    char                                                 1 / 1
    short                                                1 / 1
    int                                                  1 / 1
    long                                                 1 / 1
    half                                                 0 / 0        (n/a)
    float                                                1 / 1
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              33477574656 (31.18GiB)
  Error Correction support                        No
  Max memory allocation                           8369393664 (7.795GiB)
  Unified memory for Host and Device              Yes
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       4096 bits (512 bytes)
  Global Memory cache type                        None
  Image support                                   No
  Local memory type                               Local
  Local memory size                               49152 (48KiB)
  Max number of constant args                     8
  Max constant buffer size                        65536 (64KiB)
  Max size of kernel argument                     1024
  Queue properties
    Out-of-order execution                        No
    Profiling                                     Yes
  Prefer user sync for interop                    Yes
  Profiling timer resolution                      1ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
  printf() buffer size                            16777216 (16MiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_int64_base_atomics cl_khr_int64_extended_atomics

Here are clpeak results:

Platform: Portable Computing Language
  Device: Xavier
    Driver version  : 1.6 (Linux ARM64)
    Compute units   : 8
    Clock frequency : 1377 MHz

    Global memory bandwidth (GBPS)
      float   : 84.52
      float2  : 107.46
      float4  : 106.80
      float8  : 107.15
      float16 : 105.47

    Single-precision compute (GFLOPS)
      float   : 1355.57
      float2  : 1403.25
      float4  : 1398.78
      float8  : 1394.55
      float16 : 1384.85

    No half precision support! Skipped

    Double-precision compute (GFLOPS)
      double   : 44.03
      double2  : 43.96
      double4  : 43.85
      double8  : 43.57
      double16 : 43.16

    Integer compute (GIOPS)
      int   : 1367.98
      int2  : 1400.67
      int4  : 1391.98
      int8  : 1399.31
      int16 : 1398.18

    Integer compute Fast 24bit (GIOPS)
      int   : 1367.96
      int2  : 1400.73
      int4  : 1392.01
      int8  : 1399.45
      int16 : 1398.25

    Transfer bandwidth (GBPS)
      enqueueWriteBuffer              : 8.07
      enqueueReadBuffer               : 8.22
      enqueueWriteBuffer non-blocking : 8.29
      enqueueReadBuffer non-blocking  : 8.28
      enqueueMapBuffer(for read)      : 23585.76
        memcpy from mapped ptr        : 8.39
      enqueueUnmap(after write)       : 13.49
        memcpy to mapped ptr          : 8.38

    Kernel launch latency : -30.71 us

Managed to compile it with llvm-11. The trick is to build it with:

cmake -DCMAKE_BUILD_TYPE=Release -DWITH_LLVM_CONFIG=/usr/lib/llvm-11/bin/llvm-config -DSINGLE_LLVM_LIB=1 -DENABLE_CUDA=ON -DSTATIC_LLVM=ON ..

llvm-11 can be downloaded from the LLVM site. There is a prebuilt download for ARM64, and it supports the Carmel CPU of the AGX.

The CPU benchmark isn’t too shabby either:

Platform: Portable Computing Language
  Device: pthread-0x004
    Driver version  : 1.7-pre master-0-g89af801e (Linux ARM64)
    Compute units   : 8
    Clock frequency : 2265 MHz

    Global memory bandwidth (GBPS)
      float   : 14.73
      float2  : 23.17
      float4  : 21.41
      float8  : 18.91
      float16 : 16.47

    Single-precision compute (GFLOPS)
      float   : 4.34
      float2  : 8.56
      float4  : 17.37
      float8  : 33.49
      float16 : 67.40

    Half-precision compute (GFLOPS)
      half   : 4.36
      half2  : 8.76
      half4  : 16.92
      half8  : 34.29
      half16 : 67.78

    Double-precision compute (GFLOPS)
      double   : 4.27
      double2  : 8.37
      double4  : 17.32
      double8  : 34.20
      double16 : 59.22

    Integer compute (GIOPS)
      int   : 8.77
      int2  : 23.35
      int4  : 46.31
      int8  : 86.20
      int16 : 128.71

    Integer compute Fast 24bit (GIOPS)
      int   : 8.77
      int2  : 23.23
      int4  : 46.33
      int8  : 86.69
      int16 : 131.30

    Transfer bandwidth (GBPS)
      enqueueWriteBuffer              : 10.60
      enqueueReadBuffer               : 10.42
      enqueueWriteBuffer non-blocking : 10.61
      enqueueReadBuffer non-blocking  : 10.60
      enqueueMapBuffer(for read)      : 6.73
        memcpy from mapped ptr        : 10.29
      enqueueUnmap(after write)       : 10.22
        memcpy to mapped ptr          : 10.22

    Kernel launch latency : 68.42 us

@janrinze

Thanks for sharing. I also looked into this yesterday and downloaded the LLVM 11 aarch64 tarball after seeing it would support Carmel, but I’m unsure how to install it and what it implies for further apt updates (still using LLVM 6). Could you share some install info?
Also, is the Carmel arch auto-detected, so that we don’t need to set LLC_HOST_CPU or any further detail? No warning at link time?

The download is from https://releases.llvm.org/download.html
steps:

  • Unpack the file in a new directory and move the files
  • mkdir temp
  • cd temp
  • tar xf ~/Downloads/clang+llvm-11.0.0-aarch64-linux-gnu.tar.xz
  • sudo mkdir -p /usr/lib/llvm-11/
  • sudo mv clang+llvm-11.0.0-aarch64-linux-gnu/* /usr/lib/llvm-11/

That’s all for being able to use it to compile pocl.
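A quick sanity check before rebuilding pocl might be (assuming the layout above):

```shell
# llvm-config is what pocl's cmake (-DWITH_LLVM_CONFIG=...) will query
/usr/lib/llvm-11/bin/llvm-config --version      # expect 11.0.0
/usr/lib/llvm-11/bin/llvm-config --host-target  # expect an aarch64 triple
```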


Still a few things that aren’t quite there yet:

  • the performance of ‘Double-precision’ should be similar to that of ‘Single-precision’.
  • Also ‘half precision’ should be available, it is supported by the GPU.

Possibly something the POCL people can pick up in the near future.

Does anyone else get a segfault on the pocl GPU device?

Yes, it happens if I use clpeak. The CPU part works properly, but a segmentation fault occurs when it starts the GPU tests. It is reported to happen when both CPU and GPU devices are available; a simple workaround is to disable the CPU device, as indicated here: https://github.com/pocl/pocl/issues/853#issuecomment-696367623

Simply add POCL_DEVICES=CUDA before the application that you start.
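For example (clpeak stands in here for whatever OpenCL application you run):

```shell
# Run a single application with only the CUDA device visible to pocl
POCL_DEVICES=CUDA clpeak

# or disable the CPU device for the whole shell session
export POCL_DEVICES=CUDA
clpeak
```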
The pocl library built with LLVM 11 seems to do well, although I have not tested much more than clpeak.
I tried some other apps, but they don’t seem to work.


The above is forked from a Windows OpenCL example/tutorial, but I get error -44 (CL_INVALID_PROGRAM) on the OpenCL kernel program build.

Maybe someone else here knows how to fix that.

Got back to this topic for some reason, so I may share my findings from that time (Xavier NX; last trial was in Dec 2020).

Back then I tried a simple example of Sobel filtering with OpenCV on 1280x720p30, using pocl built with llvm-11.
This may be a side case; I’m not sure it can be generalized further.
The measurements exclude the first 20 frames and cover the next 500 frames:

  • OpenCV CPU cv::Mat (or cv::UMat with pocl basic): ~25 ms per frame
  • OpenCV CUDA GpuMat: 3 ms per frame
  • OpenCV UMat with pocl CUDA: 2 ms per frame
  • OpenCV UMat with pocl pthreads: 0.5 ms per frame

So the big improvement may be on the CPU side, thanks to Carmel CPU support.

I’m unable to retry now with R32.5.1; this is only from what I remember.

Test Code (use with caution, not tested before posting)
#include <signal.h>
#include <iostream>
#include <vector>
#include <CL/cl.h>

#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgproc.hpp>


#include <opencv2/core/cuda.hpp>
#include <opencv2/cudafilters.hpp>

//#include <opencv2/cudaobjdetect.hpp>
//#include <opencv2/cudaimgproc.hpp>


#define IGNORE_FIRST_FRAMES 20
#define LOOP_MEASURE_FRAMES 500

typedef enum  {
	test_no_opencl_cpu = 0,
	test_opencl_cpu,
	test_opencl_gpu,
	test_cuda,
	test_unknown
} test_case_t;
test_case_t tcase = test_opencl_gpu;




static cv::VideoCapture *capPtr = NULL;
void my_handler(int s){
       std::cerr<< "Caught signal " << s << std::endl;
       if(capPtr) {
           capPtr->release();
           capPtr = NULL;
       }
       exit(s); 
}



void Process_Sobel_CPU(cv::Mat frameBGRin, cv::Mat frameBGRout) {
    cv::Sobel(frameBGRin, frameBGRout, -1, 1, 1, 1, cv::BORDER_DEFAULT);
}

void Process_Sobel_UMat(cv::UMat frameBGRin, cv::UMat frameBGRout) {
    cv::Sobel(frameBGRin, frameBGRout, -1, 1, 1, 1, cv::BORDER_DEFAULT);
}

void Process_Sobel_CUDA(cv::cuda::GpuMat frameBGRin, cv::cuda::GpuMat frameBGRout) {
    static cv::Ptr < cv::cuda::Filter > cuda_Sobel_filter = cv::cuda::createSobelFilter (CV_8UC3, CV_8UC3, 1, 1, 1, 1, cv::BORDER_DEFAULT);
    cuda_Sobel_filter->apply (frameBGRin, frameBGRout);
}


void PrintOclDeviceInfo(cv::ocl::Device dev) {
	std::cout << "\tName: " << dev.name() << std::endl;
	std::cout << "\tType: " << dev.type() << std::endl;
	std::cout << "\tAvailable: " << (dev.available() ? "YES":"NO") << std::endl;
	std::cout << "\tOpenCL version: " << dev.OpenCLVersion() << std::endl;
	std::cout << "\tVendor: " << dev.vendorName() << std::endl;
	std::cout << "\tDriver version: " << dev.driverVersion() << std::endl;
	std::cout << "\tVersion: " << dev.version() << std::endl;
	//std::cout << "\tExtensions: " << dev.extensions() << std::endl;
	std::cout << "\tHost unified memory: " << (dev.hostUnifiedMemory() ? "YES":"NO") << std::endl;
	std::cout << "\tCompiler available: " << (dev.compilerAvailable() ? "YES":"NO") << std::endl;
	std::cout << "\tLinker available: " << (dev.linkerAvailable() ? "YES":"NO") << std::endl;
}

void ShowAllPlatformsInfo() {
  std::vector< cv::ocl::PlatformInfo > platforms_info;
  cv::ocl::getPlatfomsInfo(platforms_info); // note: "getPlatfomsInfo" is the actual (misspelled) OpenCV API name
  for (auto platform : platforms_info) {
	std::cout << "Platform: " << platform.name() << " Devices: " << platform.deviceNumber() << std::endl;
        for (unsigned int devIdx = 0; devIdx < platform.deviceNumber(); ++devIdx) {
		std::cout << "   Device: " << devIdx << std::endl;
		cv::ocl::Device dev;
                platform.getDevice(dev, devIdx);
        	PrintOclDeviceInfo(dev);
        }
	std::cout << std::endl;
  }	
}

void DiscoverOpenCLDevices() {
  //ShowAllPlatformsInfo();

  cv::ocl::Context cpu_contexts;
  cpu_contexts.create(cv::ocl::Device::TYPE_CPU);
  std::cout << "CPU devices detected:" << cpu_contexts.ndevices() << std::endl;
  for(unsigned int devIdx = 0; devIdx < cpu_contexts.ndevices(); ++devIdx) {
	cv::ocl::Device dev = cpu_contexts.device(devIdx);
        PrintOclDeviceInfo(dev);
  }

  cv::ocl::Context gpu_contexts;
  gpu_contexts.create(cv::ocl::Device::TYPE_GPU);
  std::cout << "GPU devices detected:" << gpu_contexts.ndevices() << std::endl;
  for(unsigned int devIdx = 0; devIdx < gpu_contexts.ndevices(); ++devIdx) {
	cv::ocl::Device dev = gpu_contexts.device(devIdx);
        PrintOclDeviceInfo(dev);
  }
}


int main (int argc, char **argv)
{
  if (argc > 1) {
     std::cout << "Trying to interpret code " << argv[1] << std::endl;
     unsigned int code = (unsigned int)atoi(argv[1]);
     if (code >= (unsigned int) test_unknown) {
        std::cerr << "Unknown code " << code << std::endl;
	return (-1);
     }

     tcase = (test_case_t) code;
  }

  // Install Ctrl-C handler so the capture is released cleanly
  signal(SIGINT, my_handler);

  std::cerr << "Main Starting:  " << std::endl;

  const char *gst =
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx, width=1280, height=720 ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink";
  capPtr = new cv::VideoCapture (gst, cv::CAP_GSTREAMER);
  if (!capPtr || !capPtr->isOpened()) {
        std::cerr << "Failed to open capture. Aborting." << std::endl;
        return (-4);
  }
 
  switch (tcase) {
     case test_no_opencl_cpu:
        cv::ocl::setUseOpenCL(false);
        cv::namedWindow ("FrameOut", cv::WINDOW_AUTOSIZE);
	break;

     case test_opencl_cpu:
  	if (!cv::ocl::haveOpenCL()) {
		std::cerr << "No OpenCL support, aborting" << std::endl;
		return (-2);
	}
	DiscoverOpenCLDevices();
        putenv((char*)"OPENCV_OPENCL_DEVICE=Portable Computing Language:CPU");
        cv::ocl::setUseOpenCL(true);
        cv::namedWindow ("FrameOut", cv::WINDOW_AUTOSIZE | cv::WINDOW_OPENGL);
	break;

     case test_opencl_gpu: 
  	if (!cv::ocl::haveOpenCL()) {
		std::cerr << "No OpenCL support, aborting" << std::endl;
		return (-3);
	}
	DiscoverOpenCLDevices();
        putenv((char*)"OPENCV_OPENCL_DEVICE=Portable Computing Language:GPU");
        cv::ocl::setUseOpenCL(true);
  	cv::namedWindow("FrameOut", cv::WINDOW_AUTOSIZE | cv::WINDOW_OPENGL);
        break;

     case test_cuda:
        cv::ocl::setUseOpenCL(false);
  	cv::namedWindow("FrameOut", cv::WINDOW_AUTOSIZE | cv::WINDOW_OPENGL);
	break;

     default:
        std::cerr << "Unknown mode " << (int)tcase << std::endl;
        return (-4);
  }


  cv::Mat frameBGRin (720, 1280, CV_8UC3);
  cv::Mat frameBGRout (720, 1280, CV_8UC3);

  cv::UMat uframeBGRin (720, 1280, CV_8UC3);
  cv::UMat uframeBGRout (720, 1280, CV_8UC3);

  cv::cuda::GpuMat dframeBGRin (720, 1280, CV_8UC3);
  cv::cuda::GpuMat dframeBGRout (720, 1280, CV_8UC3);



  int nbFrames = 0;
  for ( ; nbFrames < IGNORE_FIRST_FRAMES; ++nbFrames) {
      switch (tcase) {
         case test_no_opencl_cpu:
            if (!capPtr->read(frameBGRin)) {
               std::cerr << "Failed to read frame " << nbFrames << std::endl;
	       capPtr->release();
	       return (-5);
            }
            break;

         case test_opencl_cpu:
         case test_opencl_gpu:
            if (!capPtr->read(uframeBGRin)) {
               std::cerr << "Failed to read frame " << nbFrames << std::endl;
	       capPtr->release();
	       return (-6);
            }
            break;

     	case test_cuda:
            if (!capPtr->read(frameBGRin)) {
               std::cerr << "Failed to read frame " << nbFrames << std::endl;
	       capPtr->release();
	       return (-5);
            }
	    break;

     }
  }

  double startTime = 0.0;
  double waitAndReadUsed_time = 0.0;
  double processUsed_time = 0.0;
  double displayUsed_time = 0.0;

  nbFrames=0;
  startTime = (double)cv::getTickCount();
  for ( ; nbFrames < LOOP_MEASURE_FRAMES; ++nbFrames) {
      double loop_start = cv::getTickCount ();

      /****************************************
       * Wait and read frame
       ***************************************/
      switch (tcase) {
      case test_no_opencl_cpu:
         if (!capPtr->read(frameBGRin)) {
            std::cerr << "Failed to read frame " << nbFrames << std::endl;
	    capPtr->release();
	    return (-6);
         }
         break;

        case test_opencl_cpu:
        case test_opencl_gpu:
            if (!capPtr->read(uframeBGRin)) {
               std::cerr << "Failed to read frame " << nbFrames << std::endl;
	       capPtr->release();
	       return (-6);
            }
            break;

     	case test_cuda:
         if (!capPtr->read(frameBGRin)) {
            std::cerr << "Failed to read frame " << nbFrames << std::endl;
	    capPtr->release();
	    return (-6);
         }
	 dframeBGRin.upload(frameBGRin);
	 break;
      }
      double wait_read = (cv::getTickCount () - loop_start) / cv::getTickFrequency ();
      waitAndReadUsed_time += wait_read;

      
      /****************************************
       * Process frame
       ***************************************/
      double proc_start = cv::getTickCount ();
      double proc_time = 0;
      switch (tcase) {
     	case test_no_opencl_cpu:
		Process_Sobel_CPU(frameBGRin, frameBGRout);
       		break;


     	case test_opencl_cpu:
     	case test_opencl_gpu: 
		Process_Sobel_UMat(uframeBGRin, uframeBGRout);
       		break;


     	case test_cuda:
		Process_Sobel_CUDA(dframeBGRin, dframeBGRout);
 		break;

      }
      proc_time = ((cv::getTickCount () - proc_start) / cv::getTickFrequency () );
      processUsed_time += proc_time;



      /****************************************
       * Display frame
       ***************************************/
      double display_start = cv::getTickCount();
      //std::stringstream ss;
      //ss << "Processing: " << (nbFrames / processUsed_time) << " FPS - Frame: " << nbFrames;
      //cv::putText (frameBGR, ss.str (), cv::Point (30, 30), cv::FONT_HERSHEY_SIMPLEX, 1.0, cv::Scalar (255));
      //cv::imshow ("FrameOut", frameBGR);

      switch (tcase) {
         case test_no_opencl_cpu:
		//cv::imshow ("FrameOut", frameBGRout);
		break;

     	 case test_opencl_cpu:
         case test_opencl_gpu:
      		//cv::imshow ("FrameOut", uframeBGRout);
		break;
        
     	 case test_cuda:
      		//cv::imshow ("FrameOut", dframeBGRout);
		break;
 

     	 default:
		std::cerr << "No display implemented for this case" << std::endl;
		break;
      }

      char c = cv::waitKey (1);
      if (c == 27) {
	break;
      }
      double disp_time = (cv::getTickCount() - display_start) / cv::getTickFrequency ();
      displayUsed_time += disp_time;
  

      double loop_time = (cv::getTickCount() - loop_start) / cv::getTickFrequency ();
      //std::cout << "Frame: " << nbFrames << "   Wait & read: " << wait_read << "   Process time: " << proc_time << "    Display time: " << disp_time << "    Loop:" << loop_time << std::endl;
    
  }

      std::cout << "Terminated\n";
      double totalTime = (cv::getTickCount () - startTime) / cv::getTickFrequency ();
      std::cout << "Total Time for " << nbFrames << " frames: " << totalTime << " s. average: " << 1000.0*totalTime/(double)nbFrames << " ms" << std::endl;

      std::cout << "Wait & read time: " << waitAndReadUsed_time << " s. average: " << 1000.0*waitAndReadUsed_time/(double)nbFrames << " ms\n";
      std::cout << "Process time: " << processUsed_time << " s. average: " << 1000.0*processUsed_time/(double)nbFrames << " ms\n";
      std::cout << "Display time: " << displayUsed_time << " s. average: " << 1000.0*displayUsed_time/(double)nbFrames << " ms\n";

      capPtr->release();
}