Question regarding cudaCreateChannelDesc

Hi,
I am dealing with a 16-bit grayscale .tiff image. I have read the pixel data into an unsigned short buffer. While creating the texture, can I describe the channel as follows:

cudaCreateChannelDesc(16, 0, 0, 0, cudaChannelFormatKindUnsigned)

Is the above statement valid? I want to create a single 16-bit channel. If it is not valid, could you please suggest the correct way to do this without losing any of the pixel information? Please help me out.

Regards,
Krishna

I believe so, but even easier, you can write cudaCreateChannelDesc<unsigned short>().

If you want the tex*() intrinsics to return unsigned short, make sure the texture reference declaration specifies cudaReadModeElementType:

    texture<unsigned short, 2, cudaReadModeElementType> tex1;

Otherwise, the runtime will convert the unsigned shorts to normalized floats in [0, 1].
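For reference, here is a minimal sketch of the whole flow using the legacy texture-reference API discussed in this thread. It assumes the 16-bit pixels are already in host memory; h_img, width, height, and the kernel/function names are placeholders, not part of any API:

    #include <cuda_runtime.h>

    // Element-type reads return the raw unsigned short values, no normalization.
    texture<unsigned short, 2, cudaReadModeElementType> tex1;

    __global__ void readKernel(unsigned short *out, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            out[y * width + x] = tex2D(tex1, x, y);  // raw 16-bit pixel value
    }

    void runExample(const unsigned short *h_img, int width, int height)
    {
        // Same descriptor the question builds by hand:
        // cudaCreateChannelDesc(16, 0, 0, 0, cudaChannelFormatKindUnsigned)
        cudaChannelFormatDesc desc = cudaCreateChannelDesc<unsigned short>();

        cudaArray *d_array = 0;
        cudaMallocArray(&d_array, &desc, width, height);
        cudaMemcpy2DToArray(d_array, 0, 0, h_img,
                            width * sizeof(unsigned short),  // source pitch
                            width * sizeof(unsigned short),  // width in bytes
                            height, cudaMemcpyHostToDevice);

        cudaBindTextureToArray(tex1, d_array, desc);

        unsigned short *d_out = 0;
        cudaMalloc(&d_out, width * height * sizeof(unsigned short));
        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x,
                  (height + block.y - 1) / block.y);
        readKernel<<<grid, block>>>(d_out, width, height);
        cudaDeviceSynchronize();

        cudaUnbindTexture(tex1);
        cudaFree(d_out);
        cudaFreeArray(d_array);
    }

With the texture reference defaults (unnormalized coordinates, point filtering), passing integer coordinates to tex2D() fetches each texel exactly, so no pixel information is lost.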

In the C API, does the returned channel descriptor need to be freed? Has a host-side allocation occurred? The API reference does not say.

Thanks.