Understanding images and OpenCL

Hi Everyone,

I've recently been trying to understand the basics of OpenCL and how OpenCL-based applications/kernels work. I'm stuck on understanding clCreateImage(2|3)D()…

What do we use the image_row_pitch parameter for? What does it describe?

Thanks for the help :)

Pitch is parallel to what DirectX calls "Stride": the length of a row, in bytes. When creating a buffer you specify its width, height, and format, but you also need to specify how many bytes a full row consumes. That way the driver knows how much memory to actually allocate, since buffers, at the end of the day, are measured in bytes.

Suppose, for example, you would like to create a 320×200 pixel buffer where each pixel is an unsigned integer containing four components R, G, B, A (8 bits each).

The stride (row pitch) in this case would be 320 × sizeof(cl_uint), which is 320 × 4 = 1280 bytes per row, and the size of the whole buffer allocated for your image would be 1280 × 200 = 256,000 bytes.
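To make that concrete, here is a rough host-side sketch (a minimal illustration, not production code; it assumes the OpenCL 1.0/1.1-era clCreateImage2D from the question, which was later deprecated in favour of clCreateImage, and it skips most error checking):

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void)
{
    cl_int err;

    /* Usual boilerplate: first platform, default device, one context. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* 320x200 image, 4 channels of 8 bits each => 4 bytes per pixel. */
    const size_t width  = 320;
    const size_t height = 200;
    const size_t bytes_per_pixel = 4 * sizeof(cl_uchar); /* 4 bytes    */
    const size_t row_pitch = width * bytes_per_pixel;    /* 1280 bytes */

    /* Host backing store: row_pitch * height = 256,000 bytes. */
    unsigned char *pixels = calloc(height, row_pitch);

    cl_image_format fmt;
    fmt.image_channel_order     = CL_RGBA;
    fmt.image_channel_data_type = CL_UNORM_INT8;

    /* image_row_pitch tells the runtime how many bytes one row of
       host_ptr occupies. Passing 0 means "tightly packed", i.e.
       width * bytes_per_pixel; if host_ptr is NULL it must be 0. */
    cl_mem img = clCreateImage2D(ctx,
                                 CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 &fmt, width, height,
                                 row_pitch, pixels, &err);
    if (err != CL_SUCCESS)
        fprintf(stderr, "clCreateImage2D failed: %d\n", err);

    clReleaseMemObject(img);
    clReleaseContext(ctx);
    free(pixels);
    return 0;
}

The pitch is a separate parameter (rather than being derived from the width) because rows are not always tightly packed: a driver may pad each row for alignment, and you can query the actual pitch of an existing image with clGetImageInfo and CL_IMAGE_ROW_PITCH.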

If, for instance, you would like to create the same buffer for HDR shading, you would create it with a channel order of CL_RGBA and a data type of CL_FLOAT, which means every pixel in this buffer is represented by a vector of 4 floats (R, G, B, A) and takes 16 bytes (4 × sizeof(cl_float)). In this case the row pitch would be 320 × 16 = 5120 bytes, and the final buffer size 5120 × 200 = 1,024,000 bytes for the same image dimensions. So the row pitch makes a big difference in how the driver lays out your buffer and how it is relayed to the GPU.
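The HDR variant is just a continuation of the same sketch (reusing ctx, width, height, and err from above); only the channel data type, the pitch, and the host allocation change:

/* HDR variant: 4 floats per pixel = 16 bytes per pixel. */
cl_image_format hdr_fmt;
hdr_fmt.image_channel_order     = CL_RGBA;
hdr_fmt.image_channel_data_type = CL_FLOAT;

const size_t hdr_row_pitch = width * 4 * sizeof(cl_float);     /* 320 * 16 = 5120 bytes      */
float *hdr_pixels = calloc(width * height * 4, sizeof(float)); /* 5120 * 200 = 1,024,000 B   */

cl_mem hdr_img = clCreateImage2D(ctx,
                                 CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 &hdr_fmt, width, height,
                                 hdr_row_pitch, hdr_pixels, &err);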

Hope this sheds (or shades…) some light.

Eyal.

I thought so, but you explained it perfectly.