Exploring Nvidia OpenCL 195.39 drivers: bugs (?).. performance issues.. and lacking extensions..

Hi,
I suppose, and I have checked, that Nvidia's first consumer-oriented 195 OpenCL drivers are very stable and generally fast, and as some of you probably know they have:

* Double precision
* OpenGL interop

Anyway, I have found some performance issues, possible bugs, and other things I want to mention.

These were uncovered while testing the AMD samples (more details here):
http://oscarbg.blogspot.com/2009/11/amd-op...195-opencl.html ("GPU computing: AMD OpenCL samples on Nvidia 195 OpenCL drivers!!")

Before anything, know that performance is now roughly equivalent to CUDA and DX Compute:
Nvidia's N-body demos for CUDA, OpenCL, and DX Compute all land roughly on par, near 500 Gflops on an overclocked GTX 275…
The OpenCL demo is maybe 5-10% slower, but probably because it doesn't use any graphics interop, while the CUDA demo does and the DX Compute demo probably does too…

Also, the oclBandwidthTest and CUDA bandwidthTest reports are very similar…

Bugs
Note that all of the following works in the AMD implementation:

1.

I have found that a kernel without parameters (__kernel void main()), admittedly a toy example, but one AMD uses in its HelloCL sample, returns:

:5: error: a __kernel function cannot have varargs or stdargs
__kernel void

2.

Related to uint4-to-float4 conversion:

A kernel containing
temp1 = ((float4)(temp[i])) * one / intMax;
fails to compile. To make it work we have to spell the conversion out componentwise:
temp1 = ((float4)(temp[i].x, temp[i].y, temp[i].z, temp[i].w)) * one / intMax;
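For reference, a minimal kernel sketch of the workaround (the names here are mine, not from the AMD sample). Note the OpenCL C spec actually disallows casts between vector types, so Nvidia is arguably within its rights; the spec's portable route is the convert_float4() built-in:

__kernel void normalize_vals(__global const uint4* temp,
                             __global float4* out,
                             const float intMax)
{
    int i = get_global_id(0);
    float4 one = (float4)(1.0f);

    /* rejected by Nvidia 195.39 (and, strictly, by the spec): */
    /* float4 temp1 = ((float4)(temp[i])) * one / intMax; */

    /* componentwise workaround that compiles: */
    float4 temp1 = ((float4)(temp[i].x, temp[i].y, temp[i].z, temp[i].w))
                   * one / intMax;

    /* portable alternative, per the spec's conversion built-ins: */
    /* float4 temp1 = convert_float4(temp[i]) * one / intMax; */

    out[i] = temp1;
}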

3.

Related to math functions: passing int arguments doesn't find the correct overload:

:35: error: no matching overload found for arguments of type ‘int, int’
int mask = pow(2, k);
^~~
:45: error: no matching overload found for arguments of type ‘int, int’
output[global_id] = temp / pow(2, 32);

:35: error: no matching overload found for arguments of type ‘int, int’
outputImage[x + y * width] = hypot(Gx,Gy)/2;

FIX: cast your parameters to float:
pow((float)2, (float)k)
hypot((float)Gx, (float)Gy)/2;
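A minimal sketch of that fix in context (variable names invented): OpenCL C only defines pow() and hypot() for floating-point types, so the integer arguments need explicit casts, and for powers of two a shift is cheaper anyway:

__kernel void masks(__global float* output, const int k)
{
    int global_id = get_global_id(0);

    /* error: no matching overload found for arguments of type 'int, int': */
    /* int mask = pow(2, k); */

    int mask = (int)pow((float)2, (float)k);   /* the cast fix */
    /* int mask = 1 << k; */                   /* cheaper and exact for powers of two */

    output[global_id] = (float)mask;
}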

4.

I have seen this warning:

:10: warning: unknown '#pragma OPENCL EXTENSION' - ignored
#pragma OPENCL EXTENSION cl_khr_byte_addressable_store : enable

Is this correct?

Performance issues:

I have not explored the 3D volume texture sample to its full extent, but it is one of the remaining samples that runs very slowly compared to CUDA…
From memory it was something like 150 fps vs 14 fps…

I hoped that was related to OpenGL interop not working in previous drivers…
I have enabled GL_INTEROP and created an OpenGL-enabled context:
cl_context_properties akProperties[] = {
    CL_GL_CONTEXT_KHR,  (cl_context_properties)wglGetCurrentContext(),
    CL_WGL_HDC_KHR,     (cl_context_properties)wglGetCurrentDC(),
    0
};

But the performance remains the same.
Can someone at Nvidia explain where that enormous difference in fps comes from?
Tested on Windows 7… does it reside in the WDDM driver model?

Suggestions and questions

I hope I'm not misunderstanding something…

Two OpenCL samples get an out-of-resources error (on AMD GPUs they work…):

1. Mandelbrot does the crazy thing of launching a global group of 65536 work-items in one dimension, with local workgroups of one element…
I have fixed it by reducing the resolution so it launches 16K threads, and that works…

Nvidia could support the full size if it were put in a 2D global group of 256x256, correct? (see the sketch below)

I think the limitation is hardware-dependent and also exposed in CUDA, but couldn't Nvidia implement a loop within the driver, executing as many local workgroups as the hardware supports at each step?
Theoretically, doesn't the relaxed execution model permit this, since threads in different local workgroups have no way to communicate other than across kernel launches or via atomics?
Also, is Fermi going to support such large 1D global groups?..
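For what it's worth, a sketch of that 2D reshaping (queue/kernel handles assumed; the 16x16 local size is my choice, any divisor of 256 would do):

/* host side: replace the 1D launch (global = 65536, local = 1)
   with a 2D launch of the same total size */
size_t global2d[2] = { 256, 256 };
size_t local2d[2]  = { 16, 16 };   /* 256 work-items per group instead of 1 */
cl_int err = clEnqueueNDRangeKernel(queue, kernel, 2, NULL,
                                    global2d, local2d, 0, NULL, NULL);

/* kernel side: rebuild the original linear index */
int tid = get_global_id(1) * get_global_size(0) + get_global_id(0);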

2. About shared memory

I have two questions (they apply to CUDA as well…)

AMD emulates local memory on the 4xxx series via global memory.
It comes with a very big slowdown in performance, but could Nvidia do that too, so programs that compile and run well on the AMD backend run on Nvidia without changes?

The OpenCL driver knows the shared memory resources the executable requires and the size the GPU provides, so if the requirement is greater it could fall back to global memory.
I know the emulation can get complicated, for example if the code uses shared memory atomics or memory fences on shared memory.
But is it at least possible?

Does Fermi help here with its unified address space?
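On the "driver knows both sizes" point: that information is already queryable from the host, so at least the decision itself is cheap. A minimal sketch (kernel and device handles assumed, error checking omitted):

cl_ulong device_local = 0, kernel_local = 0;

/* local memory the device physically provides */
clGetDeviceInfo(device_id, CL_DEVICE_LOCAL_MEM_SIZE,
                sizeof(device_local), &device_local, NULL);

/* local memory this particular kernel requires */
clGetKernelWorkGroupInfo(kernel, device_id, CL_KERNEL_LOCAL_MEM_SIZE,
                         sizeof(kernel_local), &kernel_local, NULL);

if (kernel_local > device_local) {
    /* this is the point where a driver could, in principle,
       fall back to a global-memory emulation of __local */
}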

About lacking extensions

Also, Nvidia, I'm waiting for the 3D image writes extension for better performance in 3D lattice codes, but I think it's all Fermi-related, since CUDA has no support for it and DirectCompute 5.0 exposes RWTexture3D only on new cards…
Correct?
Also, what about 64-bit atomics: are they supported on GT2xx cards in CUDA, or not?

Thanks tons for this. I've been fighting trying to get OpenCL interoperability to work (without this magic step you get a CL_INVALID_CONTEXT error whenever you call any of the GL/CL functions). With code like this it works fine:

cl_int cl_error;
cl_device_id device_id;

cl_context_properties akProperties[] =
{
    CL_GL_CONTEXT_KHR, (cl_context_properties)wglGetCurrentContext(),
    CL_WGL_HDC_KHR,    (cl_context_properties)wglGetCurrentDC(),
    0
};

cl_error = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &device_id, NULL);
if (cl_error != CL_SUCCESS)
    abort();

cl_context ctx = clCreateContext(akProperties,
                                 1,
                                 &device_id,   /* devices */
                                 NULL,         /* pfn_notify */
                                 NULL,         /* user_data */
                                 &cl_error);
if (cl_error != CL_SUCCESS)
    abort();

The only gotcha I’ve found is that clEnqueueAcquireGLObjects and clEnqueueReleaseGLObjects do not generate valid cl_event objects (which I’m guessing means they are blocking?).
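For anyone hitting the same thing, here's the pattern I'd use under that assumption (queue and cl_vbo_mem are placeholder names): treat acquire/release as unsynchronized and use glFinish/clFinish as the explicit fences:

/* make sure GL is done with the buffer before CL touches it */
glFinish();

cl_int err = clEnqueueAcquireGLObjects(queue, 1, &cl_vbo_mem, 0, NULL, NULL);
/* ... enqueue CL kernels that read/write cl_vbo_mem ... */
err = clEnqueueReleaseGLObjects(queue, 1, &cl_vbo_mem, 0, NULL, NULL);

/* with no usable events, block until CL is done before GL renders again */
clFinish(queue);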

I noticed a HUGE regression with the new OpenCL 195.39 driver and CUDA Toolkit 3.0.
The main example is the Box-Muller transformation in the SDK's Mersenne Twister sample.
With the previous driver, the profiler says BoxMuller does 80% local stores, whereas with the new drivers it reports 80% gst64b (64-bit global stores)…
In fact, the clReleaseMemObject problem, which triggered a memcpyDtoH for nothing, disappears, but this new problem is very important: I have examples that took 0.5 s with the previous driver and take 2.0 s with the new one, because the kernel now always does gst64b…