I have a Quadro 6000. I installed all of the drivers correctly and the card has been working well for quite some time now. My boss wants me to improve image quality, so we are revisiting everything we are doing. One question is whether we are getting the most color depth that we can.

I am using OpenGL glTexImage3D to display and manipulate a 3D model. I set the type parameter of glTexImage3D to GL_UNSIGNED_SHORT, which indicates the data type to use for the pixel data. Can I assume that I am viewing the 3D model in 30-bit color, or is there something else I need to do or set? I know that with normal graphics 24-bit color (8 bits per channel) is more than the eye can see, but with medical imaging having more than 8 bits per channel makes a difference.

While I am on the subject, are there any plans now or in the near future for an NVIDIA 48-bit color (16 bits per channel) card?
To view 30-bit color you need a monitor that supports it.
Then you need to explicitly select a 10-bits-per-component (30-bit color) OpenGL pixel format via the WGL_ARB_pixel_format extension inside your application. (The Win32 API ChoosePixelFormat() won’t work!)
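A minimal sketch of what that selection can look like, assuming <wglext.h> is available and a dummy context is already current (wglGetProcAddress only returns the ARB entry points while a context is bound); the attribute list and error handling are simplified:

```cpp
// Request a 10-bits-per-component (30-bit color) pixel format via
// wglChoosePixelFormatARB. Assumes a dummy OpenGL context is current so that
// wglGetProcAddress can resolve the extension entry point.
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

int Choose30BitPixelFormat(HDC hdc)
{
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
    if (!wglChoosePixelFormatARB)
        return 0; // WGL_ARB_pixel_format not available

    const int attribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
        WGL_RED_BITS_ARB,       10,   // 10 bits per color component
        WGL_GREEN_BITS_ARB,     10,
        WGL_BLUE_BITS_ARB,      10,
        WGL_ALPHA_BITS_ARB,     2,    // 10+10+10+2 = 32-bit framebuffer layout
        WGL_DEPTH_BITS_ARB,     24,
        0                             // terminator
    };

    int pixelFormat = 0;
    UINT numFormats = 0;
    if (!wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &pixelFormat, &numFormats) ||
        numFormats == 0)
        return 0; // no matching 30-bit format offered by the driver

    return pixelFormat; // pass to SetPixelFormat() before creating the real context
}
```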
And you need to render with data that actually uses that higher-precision color range.
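Once the 30-bit pixel format is set and the context is created, it is worth sanity-checking what you actually got. A quick check, assuming a compatibility profile context (the GL_*_BITS queries were removed from core profiles):

```cpp
// Query the color depth of the default framebuffer after context creation.
#include <GL/gl.h>
#include <cstdio>

void PrintFramebufferDepth()
{
    GLint r = 0, g = 0, b = 0;
    glGetIntegerv(GL_RED_BITS,   &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS,  &b);
    std::printf("Framebuffer color depth: R%d G%d B%d\n", r, g, b); // expect 10/10/10
}
```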
Please read through this document, which is the first Google hit when searching for “OpenGL 30 bits site:nvidia.com”.
The internal texture image format doesn’t have much to do with that. Using unsigned short 16-bit fixed-point texture data will not automatically result in 30-bit output. Reading from that texture converts the values into the floating-point range [0.0f, 1.0f], and the shader then works with those normalized values. The important part is how you render the resulting image to the screen.
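For illustration, here is a sketch of a 16-bit single-channel volume upload; the loader (GLEW here), dimensions, and the volume pointer are placeholders, and your actual call may use a different internal format:

```cpp
// The internalFormat argument (GL_R16 here, GL_LUMINANCE16 on legacy code paths)
// controls the precision the texture is stored with; the GL_UNSIGNED_SHORT 'type'
// argument only describes the layout of the source data. Sampling the texture in
// a shader yields normalized floats in [0.0, 1.0] either way.
#include <GL/glew.h> // assumed loader; glTexImage3D and GL_R16 need more than GL 1.1

GLuint UploadVolume(const unsigned short *volume, int width, int height, int depth)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);

    glTexImage3D(GL_TEXTURE_3D, 0, GL_R16, width, height, depth, 0,
                 GL_RED, GL_UNSIGNED_SHORT, volume);

    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    return tex;
}
```

None of this changes what reaches the monitor unless the pixel format selected above actually provides 10 bits per channel.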